The Ethical AI Lawyer: What is Required of Lawyers When They Use Automated Systems?

This article focuses on individual lawyers' responsible use of artificial intelligence (AI) in their practice. More specifically, it examines the ways in which a lawyer's ethical capabilities and motivations are tested by the rapid growth of automated systems, both to identify the ethical risks posed by AI tools in legal services and to uncover what is required of lawyers when they use this technology. To do so, we use psychologist James Rest's Four-component Model of Morality (FCM), which represents the necessary elements for lawyers to engage in professional conduct when utilising AI. We examine the issues associated with automation that most seriously challenge each component in context, as well as the skills and resolve lawyers need to adhere to their ethical duties. Importantly, this approach is grounded in social psychology. That is, by looking at human 'thinking and doing' (i.e., lawyers' motivations and capacity when using AI), it offers a different, complementary perspective to the typical, legislative approach in which the law is analysed for regulatory gaps.

Our assessment of regulatory effectiveness and efficacy ('fit for purpose', adequacy) is grounded in social psychology. That is, by examining human 'thinking and doing', lawyers' motivations and capacity when handling AI, it offers a different, complementary perspective to the typical, legislative approach in which the formal law alone is analysed for regulatory gaps. 19 To do so, we use psychologist James Rest's Four-component Model of Morality (FCM), recently used and extended by Hugh Breakey in the law context. 20 As detailed in this paper, the FCM represents the four requisite psychological components (or interactive elements) for ethical behaviour: awareness, judgement, motivation and action. Here, these components, with Breakey's additions, embody the necessary elements for lawyers to engage in professional, ethical conduct when they are using AI in their work. We examine the issues associated with automation that most seriously challenge each component in context, and the skills and resolve lawyers need to adhere to their ethical duties. This is a context in which there is some active regulation, such as the United States' (US) requirement for technological competence, but in which, mostly, as the Australian case illustrates, regulators have taken a more passive 'continuity' approach. We take ethical duties to mean the specific, technical rules; the wider, cornerstone professional values that underpin them and apply where they are silent; and the personal regulation needed to enact them. An 'ethical professional identity' is one in which the core values and ideals of the profession and its codes of conduct have been internalised. 21 Meanwhile, several studies in the parallel field of the sociology of professions have looked at how new features of legal practice, from new public management and the decline of legal aid to corporatism and billable hours, are shaping the ethics and identities of lawyers. 22 This article can be meaningfully connected to this literature, with AI being the latest in a series of dramatic changes to legal professionalism. Our paper adopts a similar analytical approach to Parker and her co-authors, who investigated the ways in which features of large commercial law firms affect lawyers' decisions and behaviour. 23 In line with the wider research that emphasises the situational dimension of ethics, such studies reveal the ways in which new work environments are influencing ethics capacity and motivation. Thus, we also consider the context of lawyers' use of AI, including the effect of formal professional controls and the workplace or organisational circumstances that support moral capacity and resolve. This sheds light on risk areas and regulatory effectiveness, including where 'soft' interventions, such as better legal education and training, might be needed. This article therefore takes a detailed look at what ethical AI practice might entail, noting that professional ethical practice already asks a lot of individuals within their personal and organisational work environments. If lawyers are to be individually responsible when using AI, that responsibility must be both feasible and fair, which raises issues of equality, including the apparent need for some degree of regulatory parity between lawyers and the non-lawyers using AI to mimic legal offerings. 24 The article is structured as follows. First, we introduce and contextualise Rest's FCM through detailed explication. Although we use the model primarily as a framing device to discuss different elements of ethical practice and to evaluate the regulatory demands made of lawyers, we also identify some of the criticisms made of it. Thereafter, the main body of the article applies the FCM to the AI context, focusing on its four elements of ethical practice and how each is challenged by the presence and use of automated systems.
The article concludes with reflection on what is being asked of lawyers when they use AI, how that behaviour is affected by the technology, and how issues of lawyer regulation are illuminated by an analysis of ethical process.
Before we begin, we note that 'AI' is an amorphous term that includes many different elements. In this article on lawyers' ethical AI practice, we adopt an expansive definition 25 that encapsulates ML systems as well as less autonomous 'expert systems' or variants thereof, 26 such as decision-support tools or programs for automated drafting. Thinking about lawyers' engagement with AI has tended to focus, in recent years, on the use of ML. This is understandable, as it is possible for such systems to learn and act autonomously, giving rise to issues around control and responsibility. 27 To provide brief definitions: an ML system is one that, when trained with data, can build a model of patterns and correlations in that data and apply the model to new and not previously seen data. This allows for sophisticated statistical analysis of hundreds or even thousands of input variables. As noted, though, pre-programmed systems are also widely used for legal applications, and can assist in structuring advice or decision-making, or automating forms or documents.

19 Tranter, "Laws of Technology," 755.
20 Breakey, "Building Ethics Regimes."
21 Hamilton, "Assessing Professionalism," 488, 496.
22 Sommerlad, "Implementation of Quality Initiatives"; Alfieri, "Fall of Legal Ethics"; Campbell, "Salaried Lawyers."
23 Parker, "Ethical Infrastructure of Legal Practice."
24 While we do not develop an analysis of the tensions around non-lawyers' engagement (often through technology) in the legal services market, we note the complex issues generated: what it means to practise law, the meaning of professionalism and the workings of professional trust. See, for example, Remus, "Reconstructing Professionalism," 872; Wendel, "Promise and Limitations"; Beames, "Technology-based Legal Document Generation"; Bennett, Automated Legal Advice Tools, 19; Bell, "Artificial Intelligence and Lawyer Wellbeing."
25 Following Tranter, who asks us to avoid piecemeal approaches: "Laws of Technology," 754.
26 Bennett, Automated Legal Advice Tools.
27 Scherer, "Regulating Artificial Intelligence Systems," 362-363.
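The train-then-apply structure described in the definition above can be made concrete with a deliberately simplified sketch. The code below is a toy nearest-centroid classifier written for this article (it is not drawn from any actual legal product, and the features and labels are invented for illustration): the system builds a 'model' of patterns in labelled example data, then applies that model to new, previously unseen data.

```python
def train(examples):
    """Build a 'model': the average feature values seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(model, features):
    """Apply the model to unseen data: pick the nearest learned pattern."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Hypothetical data: documents described by two numeric features, labelled by
# whether a lawyer previously marked similar documents 'relevant' in discovery.
training_data = [
    ([1.0, 0.9], "relevant"), ([0.9, 0.8], "relevant"),
    ([0.1, 0.1], "irrelevant"), ([0.2, 0.0], "irrelevant"),
]
model = train(training_data)
print(predict(model, [0.95, 0.85]))  # classify a new, unseen document
```

Real legal ML tools use far richer models over hundreds or thousands of input variables, but the essential structure, fit a model to past data and extrapolate to new data, is the same; it is this extrapolation step that gives rise to the interpretability concerns discussed later in the article.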

Rest's Four-component Model
Professor James Rest was a late twentieth-century psychologist who, with a team of US researchers, theorised on moral behaviour and development. 28 His four-component model (or FCM) identified the 'semi-independent psychological processes' that must occur for moral behaviour to take place. 29 According to certain writers, this model better represents how an individual 'brings multiple systems to bear on moral situations'. 30 For our purposes, when lawyers are required by the 'law of lawyering' and other legislation (not to mention their own moral convictions) to use AI ethically, in fact several demands are being made of them, each of which must feature for regulation (including self-regulation) to be effective. Rest defined the components in the following way:
1. Moral sensitivity (or awareness) involves interpreting the situation and recognising that one's possible actions affect the welfare of others.
2. Moral judgement involves deciding which of the available courses of action is morally right.
3. Moral motivation implies that the person gives priority to the moral value above all other values and intends to fulfil it.
4. Implementation or Action combines ego strength with the social and psychological skills necessary to carry out the chosen action. 31 These processes might interact and influence each other, as evidence suggests they do, 32 but Rest argued that they still have 'distinctive functions'. 33 According to Rest and his team, the four components are 'the major units of analysis in tracing how a particular course of action was produced in the context of a particular situation'. 34 In a recent piece on how lawyers and their firms can enhance ethics capacity, Breakey extended the FCM to add achievement (or moral competence) and review (requiring moral reflectiveness to correct a course of action or improve it in future). 35 For Rest, individuals need 'skills and persevering character in the face of opposition' to their ethical behaviour. 36 The FCM was designed in a scholarly context in which theorists, according to Rest, focused too much on ethical judgement (Component II) and simply assumed that good thinking led to good behaviour. 37 Rest also viewed ethical sensitivity (Component I) as being distinct from judgement. As a corollary, training in ethics reasoning might not influence the interpretative process or how and whether ethics issues are detected in the first place. 38 While the categorisation of motivation as separate from judgement has been subject to intense debate in the psychology field, 39 in whichever way it occurs, motivation is critical: it is the point at which a person chooses whether to act on the demands of ethics in competition with other values and interests. 40 This decision to act is driven by both intrinsic and extrinsic motivations.
As Breakey says of lawyers' possible motivations, these include common morality (e.g., honesty, respect and dignity), desirable role identity (e.g., the status, dignity and honour of being a professional), excellence (successfully performing 'professional' activities), fair bargain (a sense that honouring professional obligations is a fair exchange for status rewards), constructed virtues (habituated responses to meet peer approval and functioning), social admiration, and avoidance of sanction or punishment. 41 The emphasis on motivation provides 'the bridge between knowing the right thing to do and doing it'. 42 That said, the FCM has been criticised, including for its suggestion that moral behaviour occurs in a stepwise fashion, as a series of distinct stages in a set order (something that Rest did not intend, as explained below). Current debate within social psychology focuses on the extent to which 'actual conscious reasoning and deliberation' influences moral decisions and actions, 43 with the weight of the literature strongly favouring models that emphasise the intuitive, innate and evolutionary foundations of morality. 44 Evolution has 'etched into our brains a set of psychological foundations' that underlie human virtues: these are adaptive mechanisms to help us rapidly solve problems, including care/harm, fairness/cheating, and liberty/oppression. 45 For these contemporary writers, scholars like Rest are too rationalist, privileging, for instance, the act of thoughtful moral deliberation (as part of Component II: judgement), even though in actuality it rarely occurs. 46 It is further argued that they have overplayed people's individual traits and abilities, and the extent to which these can change (grow, mature) across time and experience. 47 These more recent writers argue that people are primarily influenced by their social contexts and the people around them, and that these influences typically lead to moral disengagement. 48 This trend has seen the rise of behavioural approaches to legal ethics, a scholarship that emphasises the situational effects on lawyers' decision-making. 49 In response, Rest and his colleagues were clear that the FCM was intended as a means of analysing moral actions, not depicting a linear sequence in real time. 50 They were also explicit about the components' overlapping and often simultaneous nature, and demonstrated through empirical research how, for instance, concentrated effort in one component can diminish attention to another component or to a new ethics situation. 51 In addition, Rest demonstrated (and emphasised) that each component contains a combination of cognition and affect, comprising feelings of empathy and disgust, mood influences, attitudes and valuing. 52 Overall, 'the major point … is that moral behavior is an exceedingly complex phenomenon and no single variable (empathy, prosocial orientation, stages of moral reasoning, etc.) is sufficiently comprehensive to represent the psychology of morality'. 53 Moreover, Rest was himself often tentative about his scheme, and many scholars have adjusted its categorisations, subdividing them to allow for granularity 54 and layering them by abstraction, 55 adaptations he actively supported. 56 Indeed, in the law context, Hamilton accepted from the FCM research and Rest's writing that an ethical professional identity can be developed over a lifetime and, further, that ethical behaviour is more or less likely depending on both the individual's personality and character, and the social dynamics of their law practice. 57 To further incorporate current theory, Breakey's recent use of the FCM, also in the legal domain, connects it, personal capacities and motivations alike, with the social context in which lawyers work. 58 As he writes, a lawyer's organisation and wider profession may act as 'obstacles' to ethics capabilities and motivation, or to each of the FCM components. 59 Meanwhile, institutional initiatives, by the profession as a whole (e.g., codes, guidance and training) and the workplace (e.g., policies, incentives, training and mentoring), can support moral capacity and resolve, reduce the effects of any obstacles to moral capacity or resolve, or otherwise leave in place (or even multiply) those obstacles. 60 Noting its comprehensiveness and the decades of credible research findings that support its various elements, 61 the FCM is used in this discussion of AI lawyering because, as an essentially competence-based, motivational model, it is especially well suited to examining how ethics (and, therefore, regulation) is practised by individuals or, more specifically in this context, lawyers using AI. As Rest saw it, the FCM offers a set of criteria for testing the merits of ethics education, 62 asking how successfully an educational approach inculcates each component. 63 In contrast, we first use it for a somewhat different purpose: to look at what we are asking of lawyers as ethical users of AI, specifically their capacity and resolve when handling it. As Hamilton argues, the foundations of an individual's professionalism are these FCM capacities and motivations within their respective social context. In this way, we can identify the stressors testing the regulatory system. Then, turning back to Rest's own advocated use of the model, the FCM can also shed light on 'soft regulation'. Until any formal regulatory changes are introduced (if any are in fact needed), does a lawyer's education and training, by law schools, training courses, workplaces and professional bodies, support each element with respect to AI and, in turn, help ensure ethical practice?

Using AI Ethically In Legal Practice
Before applying the FCM to AI, we momentarily turn to the sociological literature to consider some of the wider conditions for lawyers' ethical practice. In the process, we emphasise, as Hamilton and Breakey have, the compatibility of the FCM with the social dimensions of ethics. These are the social factors that enable and constrain lawyers' ethics practice. Extensive research shows how lawyers' capacity and motivation to be ethical, including by enacting the duties of professionalism (and receiving its privileges), are under strain. 64 Four main factors (or 'obstacles') that have been documented are: size, specialisation and the move to in-house (e.g., fragmentation, loss of community and diminished independence); the loss of monopolies; competition and aggressive commercialism (supported by increasingly powerful clients and governmental economic reform agendas, offering real opportunities to large firms especially, but also adding to intense internal competition, elongated organisational hierarchies, delayed rewards and dramatic forms of adaptation, such as global expansion); and managerialism, or performance tracking and review focused on efficiencies (found across the profession). These obstacles reflect and reinforce the demands of the organisation, with which even entity regulation has so far failed to come to terms. 65 The workplace is now the site and source of professional norm-setting and control, wherein (as mentioned) a significant proportion of unethical behaviour is done with, or at the behest of, others or organisational systems. 66 Throughout our analysis, we emphasise that the proliferation of AI products in the legal arena is occurring against a backdrop of existing stressors that are already straining traditional (albeit imperfectly enacted) ethics practice. The incursion of AI into lawyers' work may inflame some of these existing trends and ethics risks.
For example, two core motivational drivers, the construction of a desirable role identity and the meaning of excellence in professional work, have already shifted through increasing profit orientation, and may alter further. Many larger law firms have taken up AI technologies more rapidly and to a greater extent than their medium and small counterparts. 67 This extends to acquiring tech companies, developing or extending in-house IT capabilities, or entering into partnerships or agreements with larger providers of legal AI. 68 Increasing automation may also exacerbate the negative elements of a large law firm environment, whose detrimental and unethical aspects have been documented. 69 Meanwhile, the loss of monopoly (previously thought of as primarily affecting lawyers working in legal aid, conveyancing and probate 70 ) is being more widely felt, with the proliferation of technologies such as automated document drafting challenging both 'big law' and smaller firms, as well as sole practitioners. 71 Another set of contextual variables concerns the lawyer-client relationship. This involves the type of client; how the relationship is characterised (as one of agency, contracts or trusts, although, in reality, all are encompassed); 72 the degree of control exerted by the lawyer; and how the client exercises choice. Generally, while the client directs the lawyer as to big-picture outcomes, the lawyer is responsible for how tasks and outcomes are achieved. 73 Yet, the salient ethical considerations vary depending on the degree of the client's knowledge and autonomy, including by influencing what and how much information the lawyer needs to give the client to enable the client to be properly informed. Further, different types and applications of AI require and allow for varied degrees of ethical capacity from lawyers. A system that is, essentially, a pre-programmed representation of existing legal knowledge is different to one that extrapolates from an analysis of data.
For example, a program that simply automates existing legal precedent documents 74 has controlled parameters, and lawyers are likely to be competent to review the system's outputs.
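A document-automation tool of this pre-programmed kind can be pictured, in deliberately simplified form, as a fixed template with fields to be filled. The sketch below is hypothetical (it uses Python's standard library `string.Template` and invented field names, not any vendor's product), but it illustrates why such a system's parameters are controlled: it can only ever emit wording a lawyer has already drafted and can review, with the supplied values slotted in.

```python
from string import Template

# A hypothetical precedent clause: the fixed wording was drafted (and can be
# reviewed) by a lawyer; the program only substitutes the $-prefixed fields.
PRECEDENT_CLAUSE = Template(
    "This agreement is made on $date between $party_a ('the Vendor') "
    "and $party_b ('the Purchaser') for the sale of $property."
)

def draft_clause(details: dict) -> str:
    # substitute() raises KeyError if a required field is missing, surfacing
    # the gap to the lawyer rather than silently producing an incomplete draft
    # (as safe_substitute() would).
    return PRECEDENT_CLAUSE.substitute(details)

print(draft_clause({
    "date": "1 March 2021",
    "party_a": "Acme Pty Ltd",
    "party_b": "B. Smith",
    "property": "12 Example Street",
}))
```

Because every possible output is a direct function of the approved template and the supplied inputs, reviewing the system's behaviour is tractable in a way that reviewing an ML system's extrapolations is not.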
Notwithstanding all these factors, if a lawyer uses an automated system to support or perform some elements of a legal task, they still retain professional responsibility and liability for the work they have been retained to do: an individualised responsibility.
We now apply the FCM to lawyers' use of AI, with Breakey's achievement (or moral competence) category merged, for brevity, with Component IV: action. We do not approach this analysis by looking at one ethical issue or case right through all the components. Rather, we use the framework to allow for a focused treatment of the implications of AI for the ethical lawyer in relation to the component that is most challenged by a certain feature or features of this technology, having accepted that each process needs to occur for the lawyer to be ethical in their use of automated systems. This means that every AI issue we examine could feature in any of the component sections, since good and effective ethical handling requires each element to have occurred. However, we have arranged the issues according to which component they seem to strain most dramatically, thus building up a picture of what we are demanding of the ethical AI lawyer.

Component I: Awareness
This first component requires lawyers to identify that their own use of automated systems can have 'morally salient features'. 75 Lawyers need to be sensitive to and curious about how the issues mentioned in the Introduction (including bias, opacity and absence of accountability) 76 intersect with their own professional responsibilities-particularly professional loyalty, competence and care, integrity and independence. This involves recognition not just that there may be general or big-picture issues with automated systems used in other areas, such as health or the criminal justice system, 77 but also that these same issues may affect the very systems that lawyers themselves use. It is also possible that there could be an ethical requirement to use AI software where it can perform a task more efficiently or effectively than a human. 78 This point appears to have been reached in relation to Technology Assisted Review (TAR) for large-scale discovery, as studies indicate that computer-assisted review of documents is both faster and more accurate than human review. 79 As explained below, the task of recognition is complicated by two additional factors: the perceived value neutrality and infallibility of AI systems; 80 and the social and organisational context in which automated systems are being used. We first look at professional competence and care.
When using AI, an issue arises as to whether the automated system (and, by extension, the lawyer) is 'competent' to do the work. Competence is found in the Australian Solicitors' Conduct Rules in the requirement to 'deliver legal services competently, diligently and as promptly as reasonably possible'. 81 To identify the possibility of ethical risk, or whether a system is indeed competent, lawyers require awareness of the shortcomings or limitations of the tools they are using. This presents a significant challenge, as such knowledge is not likely to be readily available (i.e., product developers are unlikely to readily explain a product's flaws). 82 Even if openly accessible, lawyers may lack the technical knowledge to make sense of the explanation; in the case of complex applications (e.g., where a prediction is being generated), it may be difficult for lawyers to evaluate outputs themselves. 84 Further, there is ongoing debate regarding whether the results of some ML systems, which are able to 'learn from' huge datasets and apply the resulting models to new data, are, indeed, interpretable at all. In some cases, even if transparent, it will not be possible to comprehend how a system arrived at its outputs. 85 Lawyers may possess little capacity, then, to identify problems in the software's operations and will therefore have to take its outputs at face value. The Organisation for Economic Co-operation and Development (OECD) Working Party on Competition and Regulation has referred to this as the creation of new information asymmetries, wherein those reliant upon technology are unable to assess its quality. 86 Notwithstanding this, lawyers are still required to be ethically and legally responsible for AI.
For example, in undertaking TAR, lawyers are dependent upon software providers to understand and correctly train an ML system to code documents; however, if outputs are incorrect (e.g., documents are mistakenly identified as non-privileged and disclosed), lawyers retain professional responsibility for the error. Accordingly, and bringing in another core value, professional independence, 87 if lawyers are relying on software outputs (e.g., concerning the most important contractual clauses, the most relevant precedent case or the likely outcome of proposed litigation), they may not be exercising independent judgement. 88 Conversely, a system that relies on pre-programmed rules is more straightforward, though issues may still arise if the system is not accurate to begin with or is not kept up to date with legal developments. These risks should be more readily detectable by lawyers who, to jump forward to Component IV (action and achievement), will be able to follow through with the requirements of professionalism. Having said this, delegation to a pre-programmed system effectively leaves no room to move outside the program. This is akin to our reliance on social 'scripts' in shared interactions, which concern the 'cognitive processes that map existing knowledge onto a template for understanding and response'. 89 As Breakey says, scripts are a potential obstacle to general awareness of moral issues. 90 Indeed, pre-programmed systems automate a series of steps, but if the script does not include reference to relevant ethical factors, it can hinder a person's capacity to 'see' them. Moreover, the very fact of codification into a technological system may give such 'scripts' the appearance of precision and completeness, a potential threat to professional competence. Finally, any ethical parameters included in an automated system may have been defined by those designing the system 91 rather than the lawyers whose role the system seeks to emulate.
A related problem is the typical (and comparatively narrow) demographic and educational background of these designers, known as the 'sea of dudes' problem. 92 Perceptions of technology as value neutral or incapable of error might result in dulled moral sensitivity and insufficient scrutiny being applied to AI systems, leading to over-reliance and a failure to question their operations. 93 Awareness that a system is neither value-free nor error-free entails at least some understanding of AI technology and represents a new dimension of professional competence and integrity. 94 An additional, confounding factor is the practice context. Parker and her co-authors observed that the structure of large law firms might already complicate individual lawyers' ability to identify ethical issues. 98 They give the example of how such firms' diffusion of work among many practitioners may impede lawyers from 'seeing' an ethical issue, as they lack sight of the bigger picture. 99 In the case of automated systems, this becomes even more likely. Susskind and Susskind have argued that lawyers' work is increasingly 'decomposed', or broken down into components. 100 With automated systems, a lawyer's tasks are not only broken down, but some are excised altogether, as they are performed by a machine. This may further obscure ethical issues.
Finally, Rest explained that emotions can both highlight ethical cues and hamper our interpretations of them. 101 Increasing use of AI systems in legal work may provoke emotional responses in lawyers, as such systems have been widely and popularly portrayed as superior to, and as replacing, lawyers. 102 Although this may increase lawyers' sensitivity to the problems of AI, heightened emotion might (to flag some of the issues explored in the following section) also result in a sense of futility or diminished motivation. Evidently, this has implications for education and training.

Component II: Judgement
In seeking the morally ideal course of action, an individual must try to integrate the various needs and expectations at stake. 103 In relation to law, professional moral reasoning is said to include reasoning in three layers. Here, we consider what existing general laws, the law of lawyering and ethical codes might be relevant for lawyers in understanding what ethics requires of them. The professional codes represent the most salient expression of lawyers' extensive obligations to the court, client and community. Yet, as discussed, these codes include and/or are supported by wider professional values, which are enforced by disciplinary and liability mechanisms, including the lawyer's paramount duty to the court, their fiduciary duty of loyalty to the client (summed up colloquially as no-conflict, no-profit) and their competence. They also include lawyers' duties to themselves as professionals and to the wider legal institutions, in independence and integrity, or the exercise of professional judgement. Working out what these demand of lawyers in an AI context is at present a highly complex task, as it is unclear how these various elements might intersect in relation to specific technology. For example, the Victorian Legal Services Commissioner announced the creation of a 'regulatory sandbox' (a concept borrowed from financial services regulation 105 ) for LawTech. 106 A regulatory sandbox allows interested parties to trial or attempt innovations without fear of regulatory sanction. This approach is being utilised by the Solicitors Regulation Authority for England and Wales, which describes the sandbox as a 'safe space' to test new ideas about legal service delivery. 107 These initiatives can be seen as an attempt by regulators to clarify when the rules do not apply, but uncertainty still shrouds how existing rules about the lawyer's retainer and obligations to the client interact with the use of AI, or what demands are presently being made of lawyers.
It is equally unclear whether a sandbox approach can protect lawyers from their present, individualised responsibility. 97 Another example concerns the new 'law of lawyering' contained in the US Model Rules of Professional Conduct, 108 adopted in many US states, which sees lawyers' duty of competence extend to staying up to date with relevant technology. Its parameters are, however, unclear. The change was said not to have imposed any new requirements on lawyers, but rather to represent a symbolic step in acknowledging the importance of technology. 109 Alternatively, one commentator has said that 'lawyers who fail to keep abreast of new developments [in technology] face a heightened risk of discipline or malpractice'. 110 It would be difficult for most lawyers to attain more than a basic understanding of AI without extended study. For reasons described above, such as opacity and lack of foreseeability, even with extended study it is likely not possible to know how an automated system arrived at its output. Other than reaffirming the individualised responsibility for technology, which lawyers must take on, the further explication of the duty of competence in the Model Rules does little to assist lawyers in adhering to their ethical duties. Arruda draws parallels with earlier technologies (e.g., noting that lawyers are now expected to use email rather than postal services) to argue that lawyers have a duty to use AI technologies. 111 However, the latter are arguably qualitatively different to 'digital uplift' projects, as they may be used to generate output that goes directly to lawyers' core work. Moreover, the scope of potential errors is both wider and carries more serious ramifications, as explained below in relation to action and achievement. 112 Hence, the following sections demonstrate some of the value conflicts that arise due to the indeterminacy of the rules.
Regarding AI systems generally, Gasser and Schmitt have noted the recent proliferation of ethical AI principles both from tech companies and other bodies, including interest groups and non-government organisations. 113 For example, in Australia there is interest in the regulation of AI, evidenced by current inquiries being conducted by the Australian Human Rights Commission 114 and the Commonwealth Scientific and Industrial Research Organisation's computer science arm. 115 Yet, despite this concern and a burgeoning of ethical codes, there is no legal or regulatory regime presently governing 'AI' specifically. 116 Automated systems are regulated though, in that they are subject to the same general law obligations as other products: tort law, consumer law and even criminal law may all be applicable. 117 Nonetheless, it remains difficult to attribute responsibility for AI products to designers or developers due to the likelihood of unforeseen or unintended results. If those involved in the creation of a complex AI system cannot know what its eventual outputs might be, it is difficult to demonstrate foreseeability of harm or intention to cause harm. 118 The number of people, both professionalised and not, who may be involved in engineering a piece of software adds to this complexity and potentially leaves a liability vacuum. 119 There is, as explored by Gasser and Schmitt, the possibility that the AI industry will self-regulate or self-impose some form of certification on its own products and services. 120 Yet, other authors are sceptical about the effectiveness of self-regulation in this area. 121 Gasser and Schmitt are more optimistic, but note that at present there is no 'coherent normative structure … [but] rather a patchwork' of norms and principles, both existing and emerging. 
122 Accordingly, lawyers must be aware of the difficulties involved in attempting to attribute or share liability with those who designed or built the system, and also that AI systems may, or may not, themselves have been designed and created in a way that adheres to ethical principles or codes of conduct.
108 American Bar Association, "Model Rules of Professional Conduct," r 1.1, cmt 8 (Competence). Reportedly, this has been adopted in 36 states: Ambrogi, "Tech Competence."
109 American Bar Association, Commission on Ethics.
110 Macauley, "Duty of Technology Competence?" (quoting Andrew Perlman).
111 Arruda, "An Ethical Obligation," 456. Note that Arruda is the CEO of Ross Intelligence, an AI-powered legal research tool.
112 For example, Sheppard has argued that even something as apparently straightforward as conducting legal research may have serious consequences if a key case is overlooked: "Machine-learning-powered Software."
113 Gasser, "The Role of Professional Norms," 5.
114 Australian Human Rights Commission, Artificial Intelligence.
115 Dawson, Australia's Ethics Framework.
116 Kroll, "Accountable Algorithms," 633.
117 Scherer, in sentiments echoed by other authors, has commented on the shortcomings of these traditional modes of regulation when applied to AI: "Regulating Artificial Intelligence Systems," 356; Millar, "Delegation, Relinquishment, and Responsibility," 123 (noting that product liability is not appropriate); Karnow, "Application of Traditional Tort Theory." A further suggestion is to attribute legal personhood to autonomous systems akin to that given to companies: Select Committee on Artificial Intelligence, AI in the UK, Chapter 8; Solum, "Legal Personhood"; Creely, "Neuroscience, Artificial Intelligence," 2323.
118 Millar, "Delegation, Relinquishment, and Responsibility"; Parker, "Ethical Infrastructure of Legal Practice," 124.
119 Scherer, "Regulating Artificial Intelligence Systems," 370-371; see also Gasser, "The Role of Professional Norms," 6-7.
120 Gasser, "The Role of Professional Norms."
121 Guihot, "Nudging Robots"; Calo, "Artificial Intelligence Policy," 408.
122 Gasser, "The Role of Professional Norms," 25.

Component III: Decision-making
According to Rest, 'research (and common sense) have clearly demonstrated that what people think they ought to do for moral reasons is not necessarily what they decide to do'. 123 Having made a judgement about what is required of him or her ethically, a lawyer must then have the moral motivation to follow through. While some writers have noted that we can be 'too rigid (with an exaggerated sense of moral responsibility)', 124 here we focus on complacency and, worse, on wilfully ignoring moral urges and sidestepping moral imperatives. Research has highlighted the various sources of values we hold that might conflict with ethics, including career and financial goals, important relationships, religion and aesthetic values, any of which might be preferred over one's ethical commitments. 125 Some of these values will be compatible with ethical action. For example, career and financial goals can support extrinsically motivated ethical decision-making, that is, following the 'law of lawyering' to avoid punishment, which can result in, inter alia, suspension or exclusion from practice. However, for Rest (and others), external motivations such as avoiding sanctions or the desire for peer approval are not truly ethical motivations, since they are not self-determined. 126 Rather, motivation to prioritise ethics ought to be intrinsic (to develop personal integrity and character), or at least the motivation should be more integrated: to, for example, adhere to standards of excellence and find work satisfaction 127 (with which the use of AI may better align). 128 This article posits two dimensions of motivation relevant for discussion: beyond the moral motivation to do something about a specific ethics problem detected in a certain context, there is first an overall motivation, closer to a sense of moral responsibility, for AI.
Before any engagement with the issues and the rules to which they give rise, the lawyer must have accepted (and for these rules to be legitimate, and achieve self-regulatory efficacy, ought to have accepted) that the technology is within their due responsibility. 129 However, lawyers may not see themselves as responsible for AI, which then affects each component of ethical decision-making. Indeed, as discussed in relation to Component IV (action), lawyers may then seek to deliberately excise their responsibility for AI from the scope of the retainer.
For several reasons, it is understandable that lawyers might not feel connected with, let alone accountable for, AI systems, in the sense of this broader concept of overall motivation. First, consider the character of such systems. Certain AI systems can act autonomously, may 'teach themselves' and can produce outputs without providing reasons. 130 Yet such a system cannot be legally liable for a decision or output, as machines are not legal persons. 131 While some commentators have called for greater accountability for those involved in creating autonomous systems, 132 as detailed above, attribution is complex. 133 These products are (in direct contrast to the highly regulated nature of legal practice) designed, developed, manufactured and implemented in a largely unregulated context.
Second, consider the workplace context in which lawyers are likely using automated systems. As noted, the legitimacy of professional accountability (i.e., an individual accountability) ultimately rests on the lawyer's exercise of independent judgement. Yet, lawyers are subject to greater managerialism in their work than ever before. The pursuit of organisational profit and client demands has already reduced their personal autonomy. 134 The ensuing siloing and hierarchies within large legal organisations may 'degrade individual lawyers' sense of professional autonomy and their capacity to take responsibility for their own work'. 135 As technology likely falls under the auspices of the firm, individual lawyers may have little choice in whether autonomous systems are used within the organisation, whether that lawyer must use them, and how the technology is chosen and scrutinised.
Third, and related to the second issue, the professional model has already shifted to a more consumer-focused orientation, away from a model in which the lawyer, as expert, was ascendant over the client. In many publicised instances, it seems that legal AI developments are being driven by clients' demands for efficiency and cost-effectiveness 136 and thus, 'in some areas, consumer protection might be experienced by lawyers as consumer demands. Within a wider competitive environment, clients, armed with information technology, can come to the legal practitioner with their own ideas, ready to test the lawyer's expertise'. 137 Rather than adopting AI technology for the sake of professional excellence or improving workflow, lawyers and firms may feel compelled or pressured to do so. A potential conflict also exists between the disciplinary rules and associated professional values, which require lawyers to take responsibility for any shortcomings or flaws in the operation of an AI system; and the demands of clients and/or workplace success, which may require use of AI tools for reasons of excellence and efficiency. 138 Finally, the use of automated systems in law is also marketed or upheld as a means of improving access to justice, through the creation of simpler and more accessible apps and online services. 139 Commentary here has focused on the failure of lawyers and traditional modes of pro bono work and legal aid schemes to ensure equitable access to justice. 140 Accordingly, the use of automated systems may also undermine lawyers' public-service-minded motivations, both by disparaging those working within the sector and by characterising access to justice as something now being primarily addressed by others outside legal practice.
141 In motivational terms, these factors all pose a threat to lawyers' 'fair bargain' motivation, 142 or their sense that it is acceptable and legitimate 143 for them to be subjected to the demands of ethics (given the privileges they enjoy) when it comes to AI. Indeed, as detailed here, these privileges are, in many areas of law, under strain or depreciating. Increasingly, lawyers might feel burdened with extensive responsibilities for technology that non-lawyers create, distribute and profit from with impunity. Moreover, the possibility of AI adding to 'decomposition', 144 and its ability to perform (at least in relation to discrete tasks) to a higher standard than a professional, is confronting to professional identity and to bespoke, 'trusted advisor' work. 145 Additional pressures relate to AI performing more quickly and more cost-effectively than lawyers, which may affect lawyers' sense of self-efficacy, or trust in their own abilities, when AI technology can take over aspects of their work.
Noting that legal responsibility for AI predictions will continue to rest with humans, Cabitza has asked (within the context of medicine) whether it will become professionally impossible to disregard the 'machine'. He observes that 'being against [a program's prediction] could seem a sign of obstinacy, arrogance, or presumption: after all that machine is right almost 98 times out of 100 … and no [professional] could seriously think to perform better'. 146 In these ways, AI might itself be eroding lawyers' identities and rewards, and, therefore, the drivers of ethical motivation, including judgements about when AI is suitable for use.

Component IV: Action and Achievement
Ethics action involves 'figuring out the sequence of concrete actions, working around impediments and unexpected difficulties, overcoming fatigue and frustration, resisting distractions and other allurements, and keeping sight of the original goal'. 147 As Breakey illustrates, these features of ethics action and achievement (his extension) require personal courage and perseverance, interpersonal skills, and strategic nous. 148 After deciding to exercise supervision over, and responsibility for, AI, lawyers must then follow through and do so effectively. However, in this context, lawyers' success in terms of moral action will depend on their technological understanding and experience. Importantly, though, there is a lack of consensus regarding the degree of technological competence that lawyers should possess, not least because such competence is beset by issues of autonomy and explainability. As we now illustrate, these issues are germane to the lawyer-client interaction, centred on the lawyer assisting the client to understand relevant legal issues and obtaining informed consent regarding decisions to be taken. 149 They extend to lawyers' overarching duty to the court and the administration of justice.
The issue of autonomy was discussed in relation to motivation, but it also has a practical element: the greater a program's autonomy, the less control any human has over its actions and outputs. As such, a lawyer cannot have any 'say' in the program's 'ethical' actions or implementation. Likewise, the less explainable, the less insight people have into how a program has generated its answers. These attributes are linked: a high degree of autonomy tends to correspond to a low degree of explainability. In this sense, 'explainability' differs from 'transparency', as it encapsulates the idea that while an ML system's workings may be made visible (e.g., through revealing source code), they may still be unintelligible even to an expert. 150 Thus, the most sophisticated or 'frontier' ML systems are able to act with the greatest autonomy, but this tends to make their actions all but indecipherable to humans.
If lawyers cannot themselves understand the reasons for a system's particular outputs, it will not be possible to relay that information to clients, heightening the challenge of informed decision-making. 151 A further issue is that ethical frameworks tend to prioritise the production of records (e.g., a lawyer's case files). 152 Yet 'contemporary AI systems often fall short of providing such records … either because it's not technically possible to do so, or because the system was not designed with [this] in mind'. 153 Both issues affect the lawyer-client relationship: strictly, the lawyer cannot obtain the client's informed consent to a course of action if the client is not truly 'informed'. Even appropriate record-keeping may be difficult.
One commentator has argued that 'AI can't be defended unless it's possible to explain why and how the AI system or tool reached the conclusion it reached … a law firm would need to find out which analytics and variables programmed into the technology sparked the conclusion that particular facts about a case are relevant'. 154 Yet, in the case of a sophisticated system, this may simply not be possible. Other writers have noted that in ML, accuracy and intelligibility often have an inverse relationship. 155 Thus, the 'best' or most accurate systems are likely the least intelligible. Moreover, Goodman has argued that 'it is one thing for [AI] to assist attorneys in making better, or fairer, or more efficient judgments, but it is a different situation where the human is simply ratifying what the computer has chosen to do'. 156 Even if the lawyer has used such software enough to 'trust' its outputs, this will not assist with providing an explanation. Further, if the lawyer does not or cannot place complete confidence in the results an automated system produces, then he or she may end up undertaking a review that nullifies any efficiency benefits of using the system in the first place. 157 This final section considers three possible avenues for moral action that lawyers may take at this point; these are neither mutually exclusive, nor is any one a complete answer. They are: to seek the client's informed consent to the use of the technology; to 'supervise' the technology; or to seek to excise responsibility for the technology altogether, via unbundling or limiting the scope of the lawyer's retainer.
It is not clear that lawyers' use of automated systems must in all circumstances be disclosed to clients. There are two elements to this: one involves client decision-making, and the other concerns fairness in costs disclosure. In terms of decision-making, while arguments are made that the client's informed consent is imperative, 158 if the lawyer is (in any event) legally responsible (to a high standard) for the advice, documents and so on provided to the client, then arguably how that advice was arrived at is irrelevant. The information that the lawyer gives should be directed to assisting the client in determining what their best interests are, so that the lawyer can advance them. 159 The information must therefore help the client assess the risks and drawbacks of the proposed course of action. 160 This all might hinge on whether the lawyer's decision to use AI relates to subject matter (which must be subject to the client's instructions) or to tactics and procedure (where the case law is less straightforward and, depending on the context, advocates' immunity may apply). 161 In some circumstances, the use of AI may not be apparent to clients, for example, if lawyers or firms are using automated drafting software to create 'first drafts' of documents or are undertaking review of documents using ML programs. The lawyer may not wish to disclose the use of AI software, as this may be viewed as diminishing their time and effort and, correspondingly, as not warranting the fees being charged. That said, lawyers' fees are always subject to fair disclosure and reasonableness requirements. Lawyers may also want to know the software's output first, particularly if the output is that the lawyer, and/or the lawyer's firm, is not best placed to act for the client due to a poor previous success rate with similar types of cases 162 (or whatever parameters are being measured).
A lawyer's duties of supervision may also intersect with the use of automated systems. Parallels can be drawn with the outsourcing of legal work to third parties: the ABA Model Rules indicate that lawyers remain responsible for work outsourced and must be competent for its consequent review. 163 Lawyers must also understand when it is appropriate to outsource work. 164 Outsourcing and the general 'commoditisation' of legal services are not new phenomena. Rostain has noted, for example, that: the processes underlying the provision of legal services, once centralized in law firms, have been disaggregated and outsourced. In litigation, for example, law firms have developed supply chains that rely on outside consultants, contract lawyers, and nonlawyer service providers to standardize tasks that were at one time performed by associates. 165 Medianik has also suggested that the ABA rules around outsourcing could be used as a guide that informs how AI should be managed under the professional conduct rules. 166 She suggests that lawyers should treat AI 'like a junior associate' and carry out their usual supervisory role. 167 Yet, if lawyers cannot independently evaluate the functioning of the software, this is undeniably different to supervising a junior. 168 Medianik's proposal also relies on the use of technology that is 'qualified through … requisite programming', 169 but does not explain how this 'qualification' could be verified or standardised. This leaves unanswered more pertinent questions concerning the design of such systems, including how lawyers can trust their operation.
Finally, consider further the decomposition of legal work. Limited-scope representation or 'unbundling' is where the lawyer performs some tasks but not others, with the scope of work clearly delineated in the retainer. It is permitted, and indeed encouraged, in some jurisdictions 170 as a means of improving access to justice and reducing the cost of legal services, but is disallowed in others. In Australia, it is not clear that a retainer that limits the lawyer's scope of work can be effective to guard against breaches of the professional conduct rules. 171 If allowed in relation to AI, unbundling could permit the lawyer to excise responsibility for certain tasks (which would be performed by the AI program) from the scope of the retainer. This might work effectively for some legal work. For example, TAR requires lawyers to be involved in 'training' the ML system to identify documents correctly, where the bulk of 'review' is then performed by the machine, with checks or oversight subsequently completed by lawyers. 172 Here, the elements performed by humans and those undertaken by the automated system can be clearly delineated. In other cases, however, there may be blurring or overlap between tasks, particularly if, say, the lawyer relies on contract-review software to review a document and the software identifies only portions of the contract as other than standard. It is unclear whether the lawyer can shape the retainer so as not to review the other parts. 173 In Australia, courts have indicated that limiting the retainer remains subject to the lawyer independently evaluating the client's understanding and its business position: 174 [The solicitor's] retainer would have extended beyond the formal or mechanical tasks of preparing the loan agreements and mortgages.
[He] could not fulfil his duty without ascertaining the extent of the risk his client wished to assume in the transactions, evaluating the extent of the risks involved in the transactions and advising in that regard. 175 In the case of AI tools, a limited retainer may work for sophisticated clients who can evaluate the risk of not having lawyer review. Indeed, it seems likely that it is these clients (who are large and influential) who are primarily driving the uptake of AI-assisted tools among law firms. 176 In the case of less astute clients, however, it is not clear that a lawyer can proceed under a limited-scope retainer, complicating the argument for AI enhancing access to justice. Returning to the issues foreshadowed above in relation to Component II (judgement), it is equally unclear how a lawyer is able to 'evaluate the extent of the risks involved' if he or she is completing only part of the work.

Conclusion
Consideration of the legal professional ethics and regulatory implications of the increasing use of AI or automated systems in legal practice is in its early stages. This article has sought to take a different approach to this kaleidoscope of intersecting issues by focusing on the social psychological elements that underpin the regulation of professionals. It analysed lawyers' use of AI through Rest's FCM (extended by Breakey), which sets out the psychological components for ethical behaviour: awareness, judgement, decision-making, and action and achievement. These elements are fundamental to regulation, which relies upon, inter alia, its targets (lawyers) having both the motivation and the capacity to uphold their professional obligations. We suggest that only when these features are supported will regulation be legitimate and effective, in terms of both the rules and related education and training. To support rule of law values, the laws that govern legal practitioners in their use of AI must be clear, certain and adequately publicised 177 to ensure that lawyers know what is required of them and how the disciplinary and malpractice regimes operate. Individuals can then conduct their practices with a satisfactory level of security, supporting professional efficacy and a 'greater good' orientation. 178 Of course, AI is entering and contributing to a complicated context for lawyers' professional identities and behaviour, and, therefore, for their effective regulation. Even before the arrival of AI tools in legal practice, the task of professional regulation, both in ensuring standards and in securing monopoly, was more difficult than ever. The profession's special promise of ethicality and competence is difficult to quantify and deploy as part of the regulative bargain, both to justify to the state that the profession deserves monopoly protection and to validate to clients that using a professional's services is better than using those of a non-professional.
Conceptions of desirable role identity (the meaning of being a professional), of achieving high standards in one's work, and of the 'fair bargain' (i.e., professional obligations in return for professional status) are all further challenged by the use of AI in law. As demonstrated, the environment in which lawyers work is also important. The large and global firms and in-house corporate legal departments, where much AI use is being promoted and developed, already complicate lawyers' ethical practice. Lawyers may not be able to choose whether they use automated systems in their work, nor have the opportunity to understand how these machines actually (or could) function.
Against this backdrop, the combination of professional rules, the general law and the context of AI's development and regulation may not be sufficient to incentivise and otherwise influence responsible use of such technologies by lawyers. Seemingly little clarification, education and regulatory guidance are being proffered to legal practitioners, increasing especially (as demonstrated) the complexity of the stages of awareness and judgement. Ensuring that lawyers are able to adhere to the set standards of ethics and competence requires both capacity and motivation for individual professionals and their workplaces. This includes the necessary skill and motivation to continue to strive for a professional identity, and to subject themselves to statutory and disciplinary regimes beyond those that apply to non-professionals.
173 It is also unclear that this is wise, given that contractual terms are generally interdependent and cannot be dissociated from the contract as a whole.
174 '.
178 While noting the ways in which the public interest has sometimes been deployed by the professions to maintain exclusivity, extensive empirical behavioural ethics research has shown how ethical behaviour diminishes under conditions of stress and uncertainty. For some of this research in the legal context, see Robbennolt, "Behavioral Legal Ethics," 1140-1143. Indeed, the self-regulatory (or monopoly protections) model maintains its supporters: those who argue that getting on with being a professional (exercising independent judgement and contributing to the advancement of professional knowledge) is incompatible with intense insecurity, competition and status-seeking. For a full discussion, see Rogers, "Large Professional Service Firm," Part II.
Rest resisted framing the FCM as a presentation of the ideal person. 179 Nevertheless, some writers, including in law, now see this reticence as a missed opportunity: the FCM represents not just an explanation of ethical failure, but a gold standard for the morally good person. 180 Indeed, the influx of AI into professionalised occupations such as law heightens the need for human skills, as at present AI cannot undertake moral reasoning. The FCM helps regulators, as well as lawyers, heads of legal practice and legal educators, clarify their 'moral ambitions as well as their images of the "successful" professional'. 181 Moreover, it can be used to evaluate ethical education and training, as well as changes to regulation, so that lawyers do not shoulder the entire burden of responsibility for AI alone. At present, this responsibility is neither straightforward, nor does it encourage high standards.