In Machines We Trust: Are Robo-Advisers More Trustworthy Than Human Financial Advisers?

Abstract

The term 'deep learning' refers to the use of 'deep convolutional neural networks', hereafter abbreviated as 'deep NN'. 21 To clarify the meaning of several other key terms used in this paper, first note that 'artificial intelligence' is a very broad phrase describing many types of technology that perform tasks typically thought to require human intelligence. 22 'Machine learning' is a subset of AI, referring to a group of techniques that enables computer programs to learn a task without being explicitly programmed. 23 'Deep learning', in turn, is a subset of machine learning, referring to the technique of using deep convolutional neural networks (or deep NN) as a method that facilitates computer programs learning a task without being explicitly programmed. 24 'Neural networks' are collections of many algorithms, individually called artificial neurons, organised hierarchically into layers to collectively form a network. They can be crafted from thousands of artificial neurons organised into many layers; hence, the term 'deep'. The revolutionary feature of deep NN is an ability to continuously learn from data; that is, such networks can analyse enormous amounts of information and draw correlations between seemingly unrelated events. 25 The more data deep NN are given, the better they become at making predictions. This is unlike 'old-style' AI, in which the algorithm initially improves with more data, but at a certain point feeding more into the system does not result in improved performance. 26 An ability to digest vast amounts of data and make predictions based on historical trends makes deep NN well suited to the task of analysing financial markets and making learned projections. 27 Currently, a few financial service firms already offer this technology to advisers, 28 29 with many more developing similar AI services. 30 The application of deep learning to financial investment is also of keen interest in the wider research space. 31 32

Distinctive Features Of Deep Neural Networks Versus 'Old-Style' AI
Since around 2016, deep NN and deep learning have been buzzwords in technology, but is the hype deserved? 33 To understand what is special about deep neural networks, we need some contextual knowledge about AI technology generally. With such keen interest on the rise, one could easily be led to believe that neural networks are the only form of AI. This is not the case, as these systems represent one of many approaches to machine learning. Another method of significance is 'rules-based machine learning', a family of AI techniques based on the application of a defined knowledge base, formulated as 'if … then …' rules. 34 For example, 'IF stock price moves X% AND currency moves Y% THEN buy'.
Rules-based machine learning requires the acquisition and organisation of knowledge and reasoning, such as the discovery of causal relationships. 35 One key advantage of this technique is that the machine applies a defined set of rules in a form comprehensible to humans. 36 This means its decision-making process is transparent and open to human analysis, with the discovery of new rules adding to the body of knowledge.
Conversely, the disadvantages include rules becoming unmanageably complicated for more complex applications of machine learning. Because a rules-based method is limited to a defined knowledge base, it can only apply the body of knowledge that is already known, and it does not handle uncertainty well. 37 In neural networks, by contrast, there is neither a defined body of knowledge nor rules of reasoning that humans can comprehend. 38 Essentially, such networks find patterns and predict future outcomes based on past patterns. Precisely how neural networks arrive at a prediction cannot truly be known, but what is certain is that they have shown great success in doing so.
Most of the 'old-style AI' applications used in the finance industry would fall into the category of being rules-based systems. 39 Thus, the transition to deep neural networks is not merely an incremental improvement in technology, but a shift towards a fundamentally different technology with different advantages and disadvantages. From a legal or regulatory perspective, one must understand the inherent nature of deep NN to effectively regulate use of the technology. Deep neural networks are intrinsically different from rules-based machine learning, with the key distinction being that the reasoning process used by neural networks is a 'black box', which is not open to meaningful analysis. 40 Indeed, this poses a significant limitation that researchers must accept and work with if they wish to gain the benefits of the technology's superior performance.
It should be noted that research is being done to find methods of uncovering the decision-making process within neural networks. 41 However, researchers have had little technical success in explaining some of their complex decisions. 42 Based on the current state of technology, it is highly probable that in the next few years fully automated deep NN financial advisers will be widely and commercially available. Due to the scalable nature of AI, these services can be offered at a fraction of the cost of human financial advisers. 43 There is great potential benefit in this prospect, as it opens up financial advice services to low- and middle-income people who could otherwise not afford such assistance.
That said, there are tangible risks that must be acknowledged. The 'black box' component involved in deep NN creates fear around the fact that we cannot predict what actions the agent might take; if there are adverse results, there is equal concern that the deep NN agent cannot explain its own actions. The perceived lack of transparency of this technology will, then, pose a challenge for financial regulators if they are unable to question or probe the decision-making processes in which deep NN financial advisers engage.

Part 2: Regulation Of Automated Financial Advisers
Part 2 gives a brief overview of how automated financial advice services in several financial markets are regulated, and further examines whether current financial regulations are equipped to handle the unique features of deep NN technology.

Regulation Of 'Old-style AI' Versus Deep Neural Networks
An important point to highlight is the new and unique challenge that deep NN pose for regulators compared with the AI technology that has been in use until now. 'Old-style AI' algorithms, based on decision trees or decision rules, are amenable to scrutiny because regulators can assess the rationale of those rules against conventional wisdom. With deep NN, however, regulators cannot review the rationale or rules behind the algorithms, as neither is intelligible to humans. As discussed in Part 1, the internal processes by which neural networks arrive at their decisions currently remain a 'black box', and no method is presently available to ascertain whether any underlying rationale exists.
In terms of assessing the performance of deep NN financial robo-advisers, only time can reveal the quality of their advice and whether that advice delivers positive financial results. This is not a particularly reassuring way for financial regulators to monitor the provision of financial advice to the public, as it could take years before problems with such advice become apparent, by which time financial losses or other harms may already have occurred.
From a regulatory perspective, the advanced and intuitive learning capacity of deep NN concurrently presents a notable degree of risk. Part of the potential benefit of applying deep learning to financial advice is for deep NN agents to discover new methods of financial investment unknown to humans. In a research setting, new knowledge has already been uncovered ahead of human discovery.
One example is AlphaGo Zero, Google's latest deep NN agent created to play the game 'Go'. 44 Given only the rules of the game, the agent learned to play purely through trial and error. It was neither given human knowledge, such as game strategy, nor provided moves used by expert players. 45 Researchers found that AlphaGo Zero quickly discovered from first principles well-known moves and strategies used by human players, after which it discarded these for new moves it had learned that were previously unknown. 46 Human expert Go players have since been analysing games of AlphaGo Zero playing against itself, with some describing its methods as 'amazing', 'strange' and 'alien'. 47 Some of the novel opening moves that AlphaGo Zero discovered, which seemed to go against the conventional wisdom of human Go players, are now being copied in professional tournaments. 48 It is possible that the application of deep NN to financial investing could follow a similar path, with a deep NN financial adviser discovering investment strategies superior to any contained within current human knowledge. This makes oversight of such an agent especially challenging, as a human financial adviser may not be able to assess their robot counterpart's advice except in hindsight. Even if this advice goes against conventional wisdom, that alone would not conclusively rule the advice to be of poor quality, as part of the purpose of applying deep NN to financial advice is to discover superior investment methods yet unknown to humans.

Regulation of Automated Financial Advice
Most financial regulators have taken a technology-neutral approach in which regulations do not distinguish between the provision of AI-or human-based financial advice. While this may have been adequate when regulators were able to scrutinise the rationale and rules behind 'old-style AI', it may be insufficient to deal with the unique characteristics of deep NN.

United States
The United States (US) has been a major player in the development of AI over the past decade, with China a close second. 49 How lawmakers and regulatory bodies in these two jurisdictions approach deep NN technology will play a critical role in the industry's prospective development.
Generally, the US Government has taken a hands-off approach to regulating AI. 50 With the exception of some legislation targeted at self-driving cars, there has been little attempt to control the development of this technology. 51 The most recent Congress report on AI emphasised the value the nation has placed on innovation and entrepreneurial spirit, including the US Government's reluctance to hamper technological development with regulation. 52 Regarding the regulation of automated financial advice, the country's approach is technology neutral, with providers of such advice being subject to the same obligations as human financial advisers under the Securities and Exchange Commission's Investment Advisers Act of 1940. 53 There are no signs that in the near term US regulators will attempt to curb application of deep NN technology to financial advice services.

China
The market for AI financial services in China is rapidly growing. 54 Several factors make automated financial advisers especially well suited to the Chinese investment market, including a large middle class, a general lack of traditional financial advisers and market conditions that favour active asset management, all of which make low-cost automated financial advisers very attractive. 55 Regulators in China have issued guidelines specifically addressing the provision of automated AI financial advice that will come into force at the end of 2020. 56 Their main features are that providers must disclose the parameters of their respective AI models, divulge to customers any inherent flaws and risks of the algorithm, and have in place plans to address any system failures or instability to the market from the potential 'sheep-flock' effect of defects in their machines' systems. 57 The guidelines demonstrate that regulators in China are attentive to the emerging challenges posed by AI financial advisers; precisely how they will be applied to deep NN financial advisers remains to be seen.

European Union
The approach in the European Union (EU) has also been technology neutral, with the same obligations applying to providers of financial advice, whether through automation or through human intervention. 58 There is no specific domestic legislation aimed at automated financial advice.
The general data protection laws that have come into force in the EU are likely to have a major effect on the development and provision of AI services in Europe, particularly concerning automated financial advice that uses deep NN technology. The EU's General Data Protection Regulation (GDPR) contains what has been termed the 'right to an explanation'. 59 That is, where a purely automated decision is made that significantly affects a person's rights, that person is entitled to 'meaningful information about the logic involved'. 60 This could be interpreted as requiring an explanation of how a deep NN agent arrived at a particular decision. 61 This 'right to an explanation' has been criticised as being based on a misunderstanding of deep NN technology and as likely to have a chilling effect on AI innovation in the EU. 62 63

United Kingdom
The United Kingdom (UK) has similarly maintained a technology-neutral approach, where providers of financial advice have the same obligations whether using automation or human financial advisers. 64 65 The UK has not legislated specifically on deep NN technology but is enacting general data protection legislation that implements the protections outlined in the European GDPR. 66 One House of Lords report into AI highlighted the 'black box' issue in deep NN as a key problem that must be overcome if the technology is to become a trusted and integral part of society. 67 The report also emphasised that AI technology must be 'explainable' to the public, in that AI systems must be able to clearly state the information and logic used to arrive at their decisions. 68 Where this is not possible, such as with deep neural networks, the report recommends delaying use of the technology: 'In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found'. 69

Australia
The Australian approach to the provision of financial advice is also technology neutral. The Corporations Act 2001 imposes the same obligations on people who provide financial advice to retail clients, whether through a computer program or through human financial advisers. 70 Where automated financial advice is provided through a computer without the direct involvement of a human agent, the person offering the technology is taken to be the person offering the financial advice. 71 Although providers of automated financial advice have the same legal obligations as humans, they are required to disclose additional information compared to human financial advisers to demonstrate that they have fulfilled their legal obligations. Australian financial regulators have given further guidance on what information and requirements are expected regarding computer programs used for automated financial advice. 72 These include:
• having people within the business who 'understand the rationale, risks and rules behind the algorithms underpinning the digital advice' 73
• having people able to review the digital advice generated by algorithms 74
• having a documented test strategy that explains the testing of algorithms, including defect resolution and final test results 75
• regular review of a sample of the automated advice for compliance with the law by a suitably qualified person 76
• heightened scrutiny of the automated advice when changes are made to the algorithm. 77
Evidently, the unique characteristics of deep NN pose a challenge for regulators, particularly regarding 'black box' decision-making. Financial regulators in the abovementioned jurisdictions have not yet developed any specific regulations to address this challenge. While not specifically aimed at deep NN, the general data protection laws in Europe and the UK will affect the application of this technology because it cannot be explained in the way that lawmakers are demanding.
Instead, what the public and regulators need is reassurance that there is a way for laypeople to have some understanding of deep NN agents, and that there is some level of predictability in their behaviour. This does not have to be achieved by demanding that individual decisions by a deep NN agent be explainable. As such, the remainder of this paper proposes an alternative solution through which such machines can meet societal expectations of transparency and predictability. 78

Part 3: Trust in Personality
One way deep NN agents can gain public trust, despite being unable to explain their decision-making process, is by disclosing their 'personality'. This section articulates the idea that deep NN agents can be understood as having 'personality traits', ranging from greed and selfishness to prudence, which can give the public and regulators alike an adequate understanding of how they will behave.
Other commentators have argued that it is not necessary to open the 'black box' of neural network technology. Notably, Wachter, Mittelstadt and Russell 79 proposed the concept of unconditional counterfactuals as a way of giving the public a meaningful explanation of how AI makes automated decisions. This paper proposes disclosing a deep NN agent's 'personality traits' as another method of providing a meaningful framework for understanding neural network technology, without needing to crack the 'black box' problem.
With regard to the fear surrounding AI decision-making, it can be posited that such apprehensions are misplaced when considering a deep NN agent's behaviour and its susceptibility to regulation. Just because individual decisions cannot be explained does not mean that the agents cannot be controlled. The behaviour of deep NN agents can be controlled by rewarding 'wanted' decisions and punishing 'unwanted' decisions. As AI pioneer Alan Turing 80 envisioned back in 1950: 'we normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it'.
During the 1950s, early research in the field of AI explored the creation of machines that learned through trial and error. 81 In the 1990s, Sutton and Barto 82 brought together principles from trial-and-error machine learning and theories of psychology to conceive the discipline of 'reinforcement learning'. Essentially, this field studies the nature of learning through positive reinforcement of desired behaviour and punishment of undesired behaviour. 83 Reinforcement learning is now at the forefront of the design of AI agents capable of learning through trial and error. 84 For example, Google's AlphaGo, the first computer program to defeat a world champion at Go in 2016, was trained using this technique. 85 Google's AI research lab DeepMind continues to push the capabilities of its deep NN agents through reinforcement learning. 86 This paper proposes that the conceptual framework of reinforcement learning, which sees an agent learn through reward and punishment, is the most appropriate framework through which AI developers can communicate to laypeople about deep NN agents. This is because:
1. reinforcement learning is at a level abstract enough that it will not become obsolete as deep learning techniques progress
2. it is a concept intuitive enough to be understood by people without any expertise in computer science.

Reinforcement Learning
Reinforcement learning is a method of machine learning by which an AI agent learns to 'take sequences of actions in an environment in order to maximize cumulative rewards'. 87 The agent learns to make better decisions through a system of positive rewards and negative rewards (punishments). 88 The deep neural network can be conceived of as an "agent", which interacts with its environment. The positive rewards and punishments are signals that the environment sends to the agent in response to the agent's actions. The goal of the agent is to take the actions that lead to maximal cumulative rewards.
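The agent-environment loop described above can be sketched in a few lines of Python. This is an illustrative sketch only; the names `run_episode`, `env_step` and `choose_action` are invented for this example and do not belong to any particular RL library:

```python
def run_episode(env_step, choose_action, initial_state, max_steps=100):
    """One agent-environment interaction loop: the agent picks an action,
    the environment returns the next state, a reward signal and a done flag,
    and the agent accumulates the rewards it receives."""
    state, total_reward = initial_state, 0.0
    for _ in range(max_steps):
        action = choose_action(state)
        state, reward, done = env_step(state, action)
        total_reward += reward
        if done:
            break
    return total_reward
```

A learning algorithm would then adjust `choose_action` over many episodes so that the cumulative reward returned here tends to increase, which is the sense in which the agent 'maximises cumulative rewards'.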
Such a system seems conceptually simple, but it belies a highly complex decision-making process of balancing competing rewards. An AI developer can control an agent by setting the positive and negative rewards for particular states. For example, research has demonstrated that by manipulating the scheme of rewards and punishments, developers could control whether their machine exhibited purely competitive behaviour or purely cooperative behaviour, or variations of both actions. 89 Three key traits that will be discussed below are how a developer can control whether the agent is "greedy" or "prudent", how a developer can control the agent's appetite for risk-taking and how a developer can assign punishments for illegal actions. This is best illustrated with an example of how an AI developer can control the traits of an agent, through rewards and punishments. 90 Take for example an agent developed to drive a race car around a race track. We can set a positive reward of 20 if the race car agent records the fastest lap time, a positive reward of 10 for crossing the finish line, a neutral reward of zero for remaining on track, a penalty of minus 2 for being off-track, and a penalty of minus 10 for crashing. The agent will seek to maximise its reward by completing the race as fast as possible whilst staying on track.
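The race car reward scheme above can be written out directly; the values mirror those in the text, while the event names are illustrative labels chosen for this sketch:

```python
def race_reward(event):
    """Reward signals for the hypothetical race car agent,
    using the scheme described in the text."""
    rewards = {
        'fastest_lap': 20,   # positive reward for recording the fastest lap
        'finish': 10,        # positive reward for crossing the finish line
        'on_track': 0,       # neutral reward for remaining on track
        'off_track': -2,     # penalty for leaving the track
        'crash': -10,        # penalty for crashing
    }
    return rewards[event]

# A clean, fastest lap earns the maximum cumulative reward of 30.
clean_lap = race_reward('fastest_lap') + race_reward('finish')
```

The agent never sees this table as rules; it only receives the numeric signals and gradually learns which driving behaviour maximises them.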
The developer can control how greedy or prudent the agent will be in weighing the reward of a fast lap time versus the risk of punishment for going off-track or crashing. A race car agent who is greedy will try to make turns as fast as possible, which may result in a lower overall reward if it crashes or goes off-track. A more prudent race car agent will slow down at turns, which may yield lower rewards in the short term but higher rewards in the long term.
Developers can control how 'greedy' an agent is by adjusting what is termed the 'discounting factor'. 91 The discounting factor is the proportion by which the value of a future reward is diminished, meaning that the agent places greater value on more immediate rewards than on rewards further in the future. By modifying the discounting factor, a developer can control how greedy or prudent the agent is. For example, Sun et al. 92 discuss the ability of developers to create non-greedy agents by modifying the reward function with diminishing returns.
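The effect of the discounting factor can be shown with a small calculation. The reward values and gamma settings below are illustrative choices for this sketch, not figures from the literature:

```python
def discounted_return(rewards, gamma):
    """Cumulative reward where each reward t steps in the future is
    diminished by the discounting factor gamma raised to the power t."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

immediate = [10, 0, 0]   # a reward of 10 received now
delayed = [0, 0, 15]     # a larger reward of 15 received two steps later

greedy_gamma = 0.5       # low gamma: future rewards count for little
prudent_gamma = 0.95     # high gamma: future rewards retain most value
```

With `gamma = 0.5` the delayed reward is worth only 15 × 0.25 = 3.75, so the greedy agent prefers the immediate 10; with `gamma = 0.95` it is worth about 13.5, still less than tempting on its own but illustrating how a prudent agent weighs the future far more heavily.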
An AI developer can also control whether the agent is risk-averse or risk-seeking through the relative rewards and punishments. 93 For example, if we set the reward for the race car agent recording the fastest lap time to 100, and the punishment for crashing or going off-track to minus 10, the agent will engage in riskier driving behaviour, because the reward for a fast lap time far outweighs the punishments for crashing or for not completing the lap.
If instead we set the reward for the fastest lap time to 20, and the punishment for crashing or going off-track to minus 10, then the race car agent will drive in a more conservative manner, as the potential reward of a fast lap time is balanced against the risk of crashing or going off-track. 94 As such, a developer can control the agent's appetite for risk by controlling the relative value of positive rewards versus negative punishments.
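The risk-appetite comparison can be made concrete with an expected-value calculation. The 60% success probability below is an assumed figure for illustration only:

```python
def expected_reward(p_success, reward, penalty):
    """Expected value of attempting a risky manoeuvre that succeeds with
    probability p_success (earning reward) and otherwise incurs penalty."""
    return p_success * reward + (1 - p_success) * penalty

# A risky corner: assume a 60% chance of the fastest lap, 40% chance of crashing.
aggressive_scheme = expected_reward(0.6, 100, -10)  # reward dwarfs the penalty
balanced_scheme = expected_reward(0.6, 20, -10)     # reward comparable to penalty
```

Under the aggressive scheme the risky manoeuvre is worth 56 on average, so the agent will attempt it; under the balanced scheme it is worth only 8, so a slower, safer line through the corner becomes competitive.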
In the context of a deep NN agent giving financial advice, the same principles apply as in the race car example. The deep NN agent can be controlled with regard to how 'greedy' or 'prudent' it is in balancing the pursuit of immediate financial gains against the long-term growth of a portfolio. Its risk appetite, that is, how much financial loss it is willing to risk for financial gain, can likewise be controlled through the relative values of rewards for financial gains versus punishments for financial losses.
An important role of punishment is that it can also curb illegal or unethical behaviour by the deep NN agent. Actions that are illegal can be assigned negative rewards, so that the agent learns to avoid taking them. There is, however, the challenge that encoding violations of financial laws and regulations as punishments for the deep NN agent is incredibly complex in real-world applications. 95

Disclosing 'Personality Profiles' For Deep NN Financial Advisers
This paper contends that AI developers should disclose the following three 'personality traits' of deep NN financial advisers, so that people without technical expertise can understand significant aspects of the agent's behaviour in a financial investment setting:
1. Greedy versus prudent: this trait measures how much the agent prefers short-term gains over long-term gains.
2. Risk appetite: this trait measures how much risk of loss the agent is willing to take against the prospect of gain.
3. Ethical behaviour: this trait indicates whether laws and regulations have been incorporated into the agent's reward and punishment system. If not, regulators will be alerted to the higher risk of the agent giving advice that may be illegal or unethical.
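One way such a disclosure could be structured in software is as a simple record type. This is a hypothetical sketch of the paper's proposal; the class and field names are invented here and are not an industry standard:

```python
from dataclasses import dataclass


@dataclass
class PersonalityProfile:
    """Plain-language disclosure of the three traits proposed in the text."""
    greed_vs_prudence: str   # preference for short-term versus long-term gains
    risk_appetite: str       # loss the agent will risk against prospect of gain
    ethics_encoded: bool     # whether laws enter the reward/punishment scheme

    def summary(self) -> str:
        ethics = 'yes' if self.ethics_encoded else 'no'
        return (f'Greed/prudence: {self.greed_vs_prudence}; '
                f'risk appetite: {self.risk_appetite}; '
                f'laws encoded as punishments: {ethics}')
```

A provider might publish `PersonalityProfile('prudent', 'low', True).summary()` alongside the service, and refresh it whenever the agent's reward scheme changes, matching the disclosure-and-update cycle described below.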
AI developers should give a plain-language description of these three basic personality traits of the deep NN agent. This should be disclosed before the deep NN agent's services are offered to the public, and updated descriptions should be disclosed if the AI developers make changes to the agent's system. Financial regulators could undertake independent reviews of the deep NN agent's performance to assess whether the agent's 'personality profile' matches the description disclosed by the AI developers.
The goal of disclosing the deep NN agent's 'personality profile' is to allow a person without any expertise in AI technology to have a meaningful understanding of the agent's likely behaviour. This allows consumers to make informed choices about which deep NN agent is suitable for them. It also assists regulators in identifying deep NN agents that pose a higher risk to the public, enabling them to use their resources in a targeted fashion.

Conclusion
Deep neural network agents have the potential to be the perfect financial adviser: one that has no self-interest and operates on reliable parameters of 'reward' and 'punishment'. This stands in contrast to human financial advisers, who will always feel the temptation to act in their own self-interest.
However, like any powerful new technology, there are good reasons to remain cautious. As deep NN agents take over increasingly sophisticated tasks with serious social consequences, the community needs reassurance that these agents will adhere to legal and ethical codes of conduct.
The solution proposed here is a framework for AI developers to communicate the essential traits of deep NN agents to the public. Understanding deep NN through the concepts of reinforcement learning and personality traits provides a common language for people with and without computer science expertise to communicate about this technology. The ability of laypeople to understand at a basic level how a deep NN agent will act allows the public and regulators to respond based on knowledge and understanding, instead of fear and suspicion. If, from this understanding, deep NN can gain the trust of the public, the 'black box' issue can be overcome and society can reap the benefits of affordable and innovative financial advice services.