This article focuses on individual lawyers’ responsible use of artificial intelligence (AI) in their practice. More specifically, it examines the ways in which a lawyer’s ethical capabilities and motivations are tested by the rapid growth of automated systems, both to identify the ethical risks posed by AI tools in legal services and to uncover what is required of lawyers when they use this technology. To do so, we use psychologist James Rest’s Four-Component Model of Morality (FCM), which represents the elements necessary for lawyers to engage in professional conduct when utilising AI. We examine the issues associated with automation that most seriously challenge each component in context, as well as the skills and resolve lawyers need to adhere to their ethical duties. Importantly, this approach is grounded in social psychology. That is, by examining human ‘thinking and doing’ (i.e., lawyers’ motivations and capacities when using AI), it offers a perspective that differs from, and complements, the typical legislative approach, in which the law is analysed for regulatory gaps.
Law, Technology and Humans, 2019-11-27
The Ethical AI Lawyer: What is Required of Lawyers When They Use Automated Systems?
Issue: Volume 1 (2019)
Pages: 80–99
Section: Symposium: Automation, Innovation and Disruption in Legal Practice
How to Cite
Rogers, J., & Bell, F. (2019). The Ethical AI Lawyer: What is Required of Lawyers When They Use Automated Systems?. Law, Technology and Humans, 1, 80-99. https://doi.org/10.5204/lthj.v1i0.1324