When Art Becomes a Lemon: The Economics of Machine-Enabled Artworks and the Need for a Rule of Origin

In 2021, an artificial intelligence system wrote a law article. The results were far from perfect but raised the question of whether a human author will still be able to compete against artificial intelligence. Leaving aside the Luddite scenario, this paper starts with the premise that human-made art might be more valued than machine-enabled art. However, to be properly valued, machine-enabled and human-made art must be distinguishable—and they are not. Indistinguishability creates an asymmetry in information. This leads to a 'lemons problem'—that is, a market erosion of good-quality products (in this scenario, human-made products). Against that background, this paper proposes a solution in light of international law and rules of origin. It argues that the lemons problem calls for a rule of origin labelling a work as either human-made or machine-enabled. Determining human or machine authorship may be dauntingly complex when the artwork owes its existence to both humans and machines. One solution may be to review how the country of origin is identified whenever products are not created in a single location and then to apply, mutatis mutandis, to rules of authorship origin the solutions once identified in the context of geographical origins, that is, the so-called 'substantial transformation test'. In the context of machine-enabled artwork, this test asks whether a human edited the machine output and, if so, whether those edits constituted a substantial transformation of the work of art.


Introduction
The year 2021 marked a turning point in legal scholarship. A law journal published the first legal article written by artificial intelligence (AI).2 This specific AI was an (autoregressive) large language model (LLM)3 referred to as Generative Pre-Trained Transformer 3 (GPT-3) and was ironically tasked with arguing that humans will always be better than machines. Although that article did not meet academic standards and lacked 'citation[s] to supporting sources',4 it was still 'cogent and coherent'.5 GPT-3's human co-authors left open the question of whether law professors will one day 'be able to push a few buttons and generate a well-written and well-researched article'.6 If so, would the emergence of machine-enabled texts7 mean an obsolescence of
Volume 5 (1) 2023

De Cooman
The upshot is this. Some of GPT-3's outputs are misogynistic41 and racist.42 Similarly, GPT-3 replicates stereotypes against Muslims43 and Jews.44 GPT-3's 'perfidious' writings are due to a mindless reproduction of the 'built-in biases of the data that it mines to teach itself to write'.45 This is likely to be the case 'when the prompt it is fed strongly correlates with overtly sexist or racist language'.46 Any automated model can certainly be a 'very powerful tool when properly developed and implemented [but] if you put garbage in, you get garbage out'.47 In this case, the idiom becomes 'bias in, bias out'.48 GPT-3's extremely large training dataset is, therefore, both its main strength and 'its Achilles heel'.49 GPT-3 makes outrageous statements but 'does so with correct grammar'.50 This is a compelling example of 'the dark side' of AI.51

GPT-3's developers are aware of these limitations. In January 2022, they released a new version called InstructGPT.52 Its objective is to reduce mistakes and outrageous language. This upgrade improves GPT for better and for worse. When prompted to be respectful, InstructGPT generates 25% less toxic language than GPT-3. However, efficiency is a double-edged sword. InstructGPT produces far more offensive language when prompted to produce toxic language.53

A 'sibling model to InstructGPT', ChatGPT is the latest iteration of the GPT family.54 Like previous GPT models, ChatGPT generates texts, this time in a conversational way. OpenAI has solved some of the challenges raised by GPT-3 (and partially solved by InstructGPT). The dialogue format encourages user feedback that allows ChatGPT to 'admit its mistakes, challenge incorrect premises, and reject inappropriate requests'.55 However, it is still not error-proof. OpenAI acknowledges some limitations, namely that ChatGPT still provides inaccurate or incorrect answers in a somewhat verbose style. More critically, it still sometimes exhibits biased behaviour or answers inappropriate requests despite its content filter.56 The flaws of the GPT family demonstrate that a LLM has to be supervised57 and that its output must be edited.58 As smart as a LLM seems to be, it still 'needs a human babysitter at all times to tell it what kinds of things it shouldn't say'.59 LLM autonomy must not take precedence over human agency. A LLM is a 'sociotechnical system'—that is, a system that combines the AI system and the human who monitors and intervenes whenever it is appropriate.60

Algorithmic Massification: From Reproduction to Production
So far, so good. Given LLMs' nature and limitations, there is still a need for a human in the loop. Does this suffice to exorcise the Luddite reactions that LLMs may provoke? Probably not. It has also been argued that machine-enabled work may threaten human creativity through 'a massification of algorithmic creations and, as a result, a saturation of the range of possible creations … as the creative capacity of artificial intelligence is vastly greater than human activity'.61 The question is whether human authors may 'still be able to compete'.62

The massification of culture is not something new. Radio raised similar questions. In a somewhat controversial essay, Theodor Adorno63 argued that radio altered the ideal of music. Once a 'living force', wireless transmission transformed music into a 'museum piece'.64 He argued that the 'society of commodities' has led music to 'become a means instead of an end, a fetish'.65 Hence, music became an object of 'standardization and mass production' that ceased 'to be a human force and [was] consumed like other consumers' goods'.66 In a nutshell, music became a commodity. The production of music took place 'not primarily to satisfy human wants and needs, but for profit'.67 If human needs were to be satisfied, this would only happen 'incidentally'.68 Therefore, the commodification of music leads to 'commodity listening', whose aim is 'to dispense as far as possible with any effort on the part of the recipient' or to suspend 'all intellectual activity when dealing with music and its content'.69 He ultimately asked whether 'the mass distribution of music really means a rise of musical culture'.70 His answer was unambiguous. As a 'new technique of musical reproduction', radio has led to a 'retrogression of listening'.71 What is on air is not music that dares but music that entertains. Adorno finally concluded, 'entertainment may have its uses, but a recognition of radio music as such would shatter the listener's artificially fostered belief that they are dealing with the world's greatest music'.72

Justified or not, Adorno's fear sets the scene for what follows.73 Translated to LLMs, the question becomes whether machine-enabled artwork will transform art into a commodity that can be consumed like any standardised consumer good. This assumption is more concerning than Adorno's thesis. A LLM is not, in Adorno's words, a 'new technique of … reproduction'.74 On the contrary, a LLM creates something new so easily and at such a pace that it is hypothesised that the number 'of texts available will skyrocket'.75 This paves the way for mass production and, ultimately, raises the question of creativity.

In this regard, the work of Margaret Boden is illuminating. Boden distinguished 'psychological creativity' and 'historical creativity'.76 With psychological creativity, newness is evaluated 'with respect to the individual mind which has the idea'.77 With historical creativity, an idea is new if it is 'novel with respect to the whole of human history'.78 When GPT-3 was able to continue Jane Austen's unfinished Sanditon,79 it displayed historical creativity. In computer science terms, GPT-3 is creative because the output is not a mere replication of what composed its training dataset.80 That GPT-3's output is based on what it has learnt does not mean its subsequent output is not novel. Since the Renaissance, 'students were trained to work in the master's style and succeeded to such a degree that it is sometimes hard for today's art historians to distinguish the hand of a master from that of his [sic] most talented pupils'.81

However, novelty is only a proxy for creativity in Boden's work. She has distinguished combinational, exploratory and transformational creativity.82 Combinational creativity means the combination of 'familiar ideas … in unfamiliar ways'.83 A textbook example is Thomas Hobbes's Leviathan questioning 'what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole body'.84 Exploratory creativity 'exploits some culturally valued way of thinking'.85 It is the case of a Renaissance painter who explores the limits of this genre but remains within this 'familiar stylistic family'.86 On the contrary, transformational creativity is 'triggered by frustration at the limits of the existing style', which is then 'radically altered (dropped, negated, complemented, substituted, added …)'.87 Thus, the generated outputs are 'often initially unintelligible for they can't be fully understood in terms of the previously accepted way of thinking'.88 That rococo followed the baroque style is one example of transformational creativity.
According to Boden, AI can display combinational, exploratory and transformational creativity.89 But it is exploratory creativity that is best suited for AI. GPT-3 proves that. The generated output depends on the prompt. When GPT-3 was prompted with Dante Alighieri's Italian sonnet,90 the machine-enabled text was à la Dante.91 Further, GPT-3's output may well be confused with Dante's original sonnet. GPT-3 writes better than many people and passes a Turing test with flying colours.92

The Issue of Indistinguishability: The Lemons Problem
The indistinguishability of human-made and machine-enabled writings has a major downside. That indistinguishability creates what economists call a 'lemons problem'.93 The terms of the issue are as follows. Assume that there are two types of books on the book market—that is, human-made books and machine-enabled books. Indistinguishability implies an asymmetry in the available information. The publisher knows whether the writing sold is human-made or machine-enabled. The reader does not. Floridi and Chiriatti argued that 'readers and consumers of texts will have to get used to not knowing whether the source is artificial or human'.94 They believe readers 'will not notice, or even mind'.95 This assumption is controversial. The lemons problem explains why they should mind. Although the valuation of machine-enabled artworks is still terra incognita, preliminary studies show that, all other things being equal, humans' works 'are evaluated significantly more highly than those perceived as being made by AI'.96 This does not mean that machine-enabled works will be of lesser 'quality' than human-made ones. The above examples prove otherwise. On the contrary, what is hypothesised here is that the 'pecuniary value' of machine-enabled works will be lower than human-made ones and that the more machine-enabled art exists, the more human-made art will be valued.

81 Italian Renaissance Learning Resources, "Training and Practice," quoted in Brown, "Artificial Authors," 25.
82 Boden, Artificial Intelligence, 60.
83 Boden, Artificial Intelligence, 60.
84 Hobbes, Leviathan, 8 (emphasis omitted, spelling not corrected). For a discussion on Hobbes and this metaphor, see Kaplan, "Afraid of the Humanoid?"
85 Boden, Artificial Intelligence, 60.
86 Boden, Artificial Intelligence, 60.
87 Boden, Artificial Intelligence, 60.
88 Boden, Artificial Intelligence, 60.
89 Boden, Artificial Intelligence, 61.
90 Dante, Vita Nova, 44.
91 Other examples abound. Harold Cohen's painting computer program, labelled Aaron, painted in Cohen's style (Ginsburg, "Authors and Machines," 409). Patrick Tresset and Frederic Leymarie's AI system named Paul the Robot is 'a robotic installation that produces observational face drawings of people … mimicking drawing skills and technique[s]' based on the style of artist-scientist Tresset, Alberto Giacometti and Dryden Goodwin (Tresset, "Portrait Drawing," 361).
92 Elkins, "Can GPT-3 Pass," asking at 12 whether GPT-3 can 'pass a writer's Turing Test? Probably not, if all output considered. But with a judicious selection of its best writing? Absolutely'. See also Bridy, "Evolution of Authorship" (discussing at 399 a 'Turing test for creativity').
93 Akerlof, "Market for Lemons."
94 Floridi, "GPT-3," 691.
95 Floridi, "GPT-3," 691.
96 Ragot, "AI-Generated vs. Human Artworks," 1. It is true that Portrait of Edmond Belamy skyrocketed during its auction at Christie's when it was auctioned at USD 432,500—that is, approximately 45 times its high estimate (Cohn, "AI Art at Christie's"). However, it was a world premiere, excepting the private sale of Le Comte de Belamy to Paris-based collector Nicolas Laugero-Lasserre for EUR 10,000 (Nugent, "Painter Behind These Artworks"). The following auctions of machine-enabled artworks were far more disappointing. Memories of Passersby I was estimated at GBP 30,000 to GBP 40,000 (USD 40,000 to USD 53,000, using the average exchange rate for 2018 of GBP 1 = USD 1.3349) and auctioned at Sotheby's for USD 51,000 (Sotheby's, "Memories of Passersby I"). Shortly after, La Baronne de Belamy, estimated at USD 20,000 to USD 30,000, was auctioned, still at Sotheby's, for USD 25,000 (Sotheby's, "La Baronne de Belamy"). The announcement effect seems to be over.
Assuming this scenario is correct, a machine-enabled book would, therefore, be referred to as a 'lemon'—that is, a product of low value in United States (US) slang. With symmetric information, the price of a machine-enabled book (p1) should be lower than the price of a human-made book (p2). But information is asymmetrical. As a result, human-made and machine-enabled books 'must still sell at the same price—since it is impossible for a buyer to tell the difference'.97 Therefore, let p be the book price (where p = p1 = p2), q the probability the book is human-made and (1 − q) the probability the book is machine-enabled (where 0 ≤ q ≤ 1). Assuming the reader is risk neutral, she will price a particular book based on the probability that the book is human-made (i.e., given its expected quality). In turn, the reader will adapt her willingness to pay to internalise the risk of being sold 'low price' machine-enabled products rather than 'high price' human-made ones.98 This means the reader will only be willing to pay (p × q). Because q is a probability (0 ≤ q ≤ 1), the reader's willingness to pay will not exceed the book price ((p × q) ≤ p). Professor Nicolas Petit illustrated the problem.99 Assuming, on the one hand, that human-made books are worth USD 20 (p2) and machine-enabled ones USD 10 (p1) and, on the other hand, that a buyer believes there is a 50/50 chance that a book is human-made (q = 0.5), then that buyer will internalise half (q = 0.5) the difference between the price of a human-made book and a machine-enabled one (p2 − p1) in her willingness to pay. The market equilibrium price is USD 15 (p2 − (p2 − p1) × q = 20 − (20 − 10) × 0.5 = 15). As a result, while no publishers of human-made books will come to this market, suppliers of machine-enabled ones will 'make a killing'.100

Assuming a market 'in which goods are sold honestly or dishonestly'—that is, in which the reader's problem is to identify human-made writing despite asymmetrical information—'dishonest dealings tend to drive honest dealings out of the market'.101 This echoes one necessary condition for the emergence of a lemons problem. Besides asymmetry of information, an incentive must exist for the publisher to sell a machine-enabled product as human-made. Edgar Allan Poe used 'The Imp of the Perverse' to explain why people do a thing even if they should not.102 As long as machine-enabled and human-made books are indistinguishable, the Imp will be more and more convincing in selling machine-enabled books as human-made. Indeed, indistinguishability incentivises the adoption of misleading statements on the (lack of) use of an AI system during the writing's creation.
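Petit's arithmetic, and the market unravelling it can trigger, can be sketched in a few lines of code. The function name and the belief-adjustment step are hypothetical illustrations, not part of Akerlof's or Petit's models:

```python
# Illustrative sketch of the lemons pricing logic described above.

def expected_price(p_human: float, p_machine: float, q: float) -> float:
    """Risk-neutral reader's willingness to pay for a book that is
    human-made with probability q."""
    assert 0.0 <= q <= 1.0
    return q * p_human + (1 - q) * p_machine

# Petit's numbers: human-made worth USD 20, machine-enabled USD 10,
# and a believed 50/50 split.
print(expected_price(20.0, 10.0, 0.5))  # 15.0 — the pooled market price

# Unravelling: if a human-made book costs more to produce than the pooled
# price, human publishers exit, the belief q falls, and the price
# converges to the machine-enabled value.
q = 0.5
cost_human = 16.0  # hypothetical production cost of a human-made book
while q > 0 and expected_price(20.0, 10.0, q) < cost_human:
    q = max(0.0, q - 0.1)  # human supply shrinks, beliefs adjust downwards
print(round(expected_price(20.0, 10.0, q), 2))  # 10.0 — only 'lemons' remain
```

Note that the pooled price q·p2 + (1 − q)·p1 is algebraically the same as Petit's formulation p2 − (p2 − p1)·q.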
There are already illustrations of this. In music generation, a label that invested in a machine that composed music 'did not want to disclose that its songs had in fact been written by a machine and not by human musicians'.103 In addition, the decision of the US Copyright Office that parts of the graphic novel Zarya of the Dawn generated by an AI system (Midjourney) are not protected by copyright incentivises its user not to disclose such use.104 As hinted above, selling a machine-enabled book without specifying that it is not human-made (or, worse, dishonestly selling it as human-made) could drive out actual human-made writing. This will be the case if the reader's willingness to pay is lower than the production cost of a human-made book. If so, then the book market will not profit from human-made books. On the contrary, it should be borne in mind that the production cost of a machine-enabled book is 'negligible'.105 Therefore, it is fair to assume that the production cost of machine-enabled books will be lower than the production cost of human-made books. As such, the break-even point for a machine-enabled book would be lower than for a human-made book. The upshot is this. Human-made books will be long gone before machine-enabled books cease to be profitable. This is the real cost of indistinguishability whenever there is uncertainty about the origin of the product sold.

97 Akerlof, "Market for Lemons," 489 (although writing about new and old, good and bad (lemons) cars).
98 Petit, "Artificial Intelligence, Rules of Origins."
99 Petit, "Artificial Intelligence, Rules of Origins."
100 Petit, "Artificial Intelligence, Rules of Origins."
101 Akerlof, "Market for Lemons," 495 (adding that 'the cost of dishonesty, therefore, lies not only in the amount by which the purchaser is cheated; the cost also must include the loss incurred from driving legitimate business out of existence', and that 'the presence of people who wish to pawn bad wares as good wares tends to drive out the legitimate business').
102 Poe, "Imp of the Perverse." However, the Imp is initially a metaphor for self-destructive behaviours. In this case, the behaviour the Imp prescribes is in the interest of the publisher.
103 Bonadio, "Artificial Intelligence as Producer," 122.
104 Actually, the human user of Midjourney did not disclose the use of that AI system when she submitted an application to the US Copyright Office. It was only subsequently that the office became aware (through social media) of the use of Midjourney. US Copyright Office, "Zarya of the Dawn."
105 Floridi, "GPT-3," 692 (adding at 690 that GPT-3 is able to 'mass produce good and cheap semantic artefacts').

Substantial Transformation Test
A solution to the lemons problem lies in a rule of origin.106 As defined above, rules of origin concern the identification of the provenance of goods or services. Classically, rules of origin are related to trade agreements that grant members access to a domestic market at a preferential tariff.107 The origin of the good engages (or does not engage) a tariff cut. As such, it has been argued that rules of origin are 'barriers to trade'108 that constitute a 'hidden protection' of domestic markets.109 However, in the context of human-made versus machine-enabled products, what matters is not the geographical origin but the authorship origin.
Just like geographical origin, authorship is 'ultimately a question of fact'.110 However, determining human or machine authorship may be dauntingly complex when the product owes its existence to both humans and machines.111 One solution may be to review how the country of origin is identified whenever products 'are not created in a single location' and then to apply, mutatis mutandis, to rules of authorship origin the solutions once identified in the context of geographical origins.112 In this regard, European Union (EU) rules of (geographical) origin state that 'goods the production of which involves more than one country or territory shall be deemed to originate in the country or territory where they underwent their last, substantial, economically-justified processing or working … resulting in the manufacture of a new product or representing an important stage of manufacture'.113 It was up to the Court of Justice of the EU (CJEU)114 to establish this 'substantial transformation test'.115 In Gesellschaft für Überseehandel mbH v Handelskammer Hamburg, the CJEU held that a process or an operation is substantial if 'the product resulting therefrom has its own properties and a composition of its own, which it did not possess before that process or operation'.116 More concretely, the CJEU held in Yoshida Nederland BV v Kamer van Koophandel en Fabrieken voor Friesland that an assembly operation is substantial when it constitutes the decisive stage of production during which the purpose of the product is achieved and during which that product is given its specific qualitative properties.117 The CJEU later explained in Brother International GmbH v Hauptzollamt Gießen that 'in practice the substantial transformation criterion can be expressed by the ad valorem percentage rule, where either the percentage value of the materials utilized or the percentage of the value added reaches a specified level'.118 This means the added value is a legal, objective and clear criterion for qualifying a transformation as substantial.119

Returning to LLMs, the question becomes whether a human edited the machine output and, if so, whether those edits constitute a substantial transformation of the text. The occurrence of human editing is not enough to qualify the work as human-made. As 'computers today, and for proximate tomorrows, cannot themselves formulate creative plans or "conceptions" to inform their execution of expressive works', there will always be a human in the creative loop.120 In essence, an AI system is a mere 'piece of chattel'121 that, according to Ada Lovelace, 'has no pretensions whatever to originate anything' and that 'can do (only) whatever we know how to order it to perform'.122 Therefore, the human edits still need to be done. Based on the CJEU jurisprudence, one way to conclude this would be to compare the qualitative properties of the text before and after the editing process—that is, the added value of the editing. In this regard, it should be borne in mind that edits consisting of sorting, classifying or assembling a LLM's outputs are unlikely to be considered substantial.123

The analogy with the rules of geographical origin has limitations. Classically, rules of origin are enshrined in binary logic. Either a product originates from a country, or it does not.124 This may be inappropriate for machine-enabled work given the hybridisation of human creativity (or, at least, some degree of creativity) and machine computation. Therefore, the rules of origin might take the form of a ladder. On one side of the spectrum would be fully machine-enabled outputs. As hinted above, this does not yet exist. On the other side would be outputs that are fully human-made. This is the case, for instance, of Vincent van Gogh's Still Life with Lemons on a Plate.125 This is more generally the case of all artwork solely created by a human author. Between these two ends of the spectrum would lie grey cases—that is, outputs that owe their existence to both humans and machines. The level of granularity of this category will depend on the required degree of human intervention. Therefore, grey cases will be a threefold category—that is, one that distinguishes low, medium and high human inputs. One cannot treat a LLM's outputs equivalently when given either a one-sentence prompt or very concrete and plentiful instructions. The intermediate category of medium human input may then be subdivided again and again. Whatever the degree of granularity selected, the objective is to 'draw clear boundaries between what is what, e.g., in the same way as a restored, ancient vase shows clearly and explicitly where the intervention occurs'.126 Table 1 illustrates the argument.
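By way of illustration only, the ladder could be operationalised as a classification over the share of a work's value added by human intervention. The thresholds and the human-contribution metric below are hypothetical assumptions, chosen merely to mirror the ad valorem logic discussed above, and are not drawn from the EU rules or any cited source:

```python
# A minimal sketch of the proposed 'ladder' rule of authorship origin.

def authorship_label(human_share: float) -> str:
    """Classify a work by the share of its value added by human editing,
    by analogy with the ad valorem percentage rule for geographical origin."""
    assert 0.0 <= human_share <= 1.0
    if human_share == 0.0:
        return "machine-enabled"
    if human_share == 1.0:
        return "human-made"
    # Grey cases: thresholds are arbitrary illustrations of 'granularity'.
    if human_share < 1 / 3:
        return "grey: low human input"
    if human_share < 2 / 3:
        return "grey: medium human input"
    return "grey: high human input"

print(authorship_label(1.0))   # human-made (e.g., Van Gogh's still life)
print(authorship_label(0.05))  # grey: low human input (one-sentence prompt)
print(authorship_label(0.8))   # grey: high human input (plentiful instructions)
```

The hard legal question, of course, is how to measure `human_share` in the first place; the CJEU's qualitative-properties comparison suggests one route, but the sketch simply takes the number as given.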

Bottom-Up and Top-Down Rules of Origin
Such a rule of origin can be achieved by human authors themselves, at least for some artistic productions. This argument comes from two video game practices.127 First, video game players who want to establish a record must prove they hit a high score.
To do so, they record themselves playing the game to prove they were truly behind the joystick—and, incidentally, that they did not cheat.128 The second practice is known as speedrunning—that is, 'going through a game from beginning to end as fast as possible'.129 There are two types of speedrunning: finesse runs leave the narrative of the game intact, while deconstructive runs allow the reconfiguration of the game using glitches.130 In both cases, the performance is recorded to establish how quickly the player was able to complete the game and (in the case of finesse runs) to prove no glitches were used.131 Similarly, the artistic community could create its own rules of origin. Just as video gamers record themselves while playing to prove their achievements, artists could record themselves while creating their work and, thus, prove its human origin. Indeed, time-lapse videos of sculptors, painters and blacksmiths sculpting, painting or forging already abound on the internet. With the development of machine-enabled artwork, this practice should become more widespread. In addition, if the assumption that human-made art is more valuable than machine-enabled art is correct, then human artists would have a strong incentive to record themselves.
The parallel with video games goes further. During the 1980s, a 'video game aficionado'132 founded Twin Galaxies to provide a 'comprehensive authentication system that can evaluate any player's video game performance and verify legitimacy (elimination of cheating / manipulation / misrepresentation)'.133 This organisation standardised scorekeeping and high-score authentication.134 Just as it was a member of the video game community who created the platform for verifying game recordings, it is quite conceivable that a member of the art community could develop an equivalent platform for authenticating the origin of works of art.
However, this solution might not be suitable for book writers. Very little would be proved by filming them hunched over their keyboards. A second-best solution could be the one proposed by OpenAI itself, namely, to 'indicate that the content is AI-generated in a way no user could reasonably miss or misunderstand'.135 In the context of academic publishing, the editors-in-chief of Nature and Science, as well as the publisher Taylor & Francis, have decided that a LLM cannot be listed as an author, that its use should be duly noted in the acknowledgement section and that the 'use of AI-generated text without proper citation could be considered plagiarism'.136 What they are trying to achieve is a rule of origin.
This type of bottom-up, actor-based rule of origin could be easily strengthened by a top-down regulation. The good news is that there is already an embryonic rule of origin in EU law. Article 52(1) of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (AI Act)137 states that 'AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system'. In 2017, AI Professor Toby Walsh already argued for the introduction of such a rule, stating that 'an autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent'.138 The AI Act goes one step further than this 'law of identification'.139 It is not only the AI system interacting with a natural person that has to be labelled as such, but also the output this AI system produces. Pursuant to Article 52(3) of the AI Act, 'users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ("deep fake"), shall disclose that the content has been artificially generated or manipulated'.140 It would perhaps be useful to extend this provision to all machine-enabled artworks.
Given the incentive to cheat hinted at above, the enforcement of such a user-focused rule of origin will be arduous. One solution is to support that requirement with technical measures, such as designing the AI system in such a way that it watermarks the generated output.141 Watermarking the output means 'embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens'.142 Technologically savvy users might find a way to remove these watermarks, but average users are unlikely to be able to do so.143 The enforcement of a rule of origin should therefore be usefully complemented by algorithmically screening alleged human-made art to detect whether it has been machine-generated, mitigating the risk of users bypassing watermarks.144

Finally, the rule of origin has a major benefit. It does not ban LLMs from the market, nor does it subject them to a disproportionate regulatory burden.145 A rule of origin brings transparency and allows for the parallel development of human-made and machine-enabled books while ensuring they compete 'on the merits' (i.e., on their inherent value) by erasing asymmetrical information. A rule of origin is a proportionate response to those who, like Adorno, fear art commodification, without preventing the use of LLMs by those who, like Floridi and Chiriatti, do not care whether the text has a human provenance as long as it is of high quality.
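To make the watermark-and-screen idea discussed above concrete, the following toy sketch shows how a detector could count 'green' tokens without access to the model. It is a simplified, word-level illustration with a hypothetical shared key; production schemes operate on model tokens and bias the sampling step, and this is not the method of any particular vendor:

```python
# Toy sketch of statistical text watermarking: each token's 'green list'
# is derived by hashing the previous token with a secret key, so a
# detector holding the key can score text it did not generate.
import hashlib

KEY = b"secret-watermark-key"  # hypothetical shared key

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark roughly half of all candidate words as
    'green' given the previous word, via a keyed hash."""
    digest = hashlib.sha256(KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of word bigrams whose second word is green. Watermarked
    text should score well above the ~0.5 expected by chance."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

A watermarking generator would nudge its sampling toward green tokens, so that `green_fraction` on genuine model output sits far above the chance level while human-written text stays near it; the screening step then reduces to a significance test on that fraction.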

Conclusion
What does all the above lead to? First, given LLMs' nature and limitations, it is possible to answer Asimov. LLMs will not 'take over the original writing, the searching of the sources, the checking and cross-checking of passages, perhaps even the deduction of conclusions'.146 Nor will LLMs leave the scholar only 'the barren decisions concerning what orders to give the robot next'.147 A LLM is not an automated scholar who will retire esteemed professors but 'an indefatigable shadow-writer with the ability to access, comprehend and uniquely synthesise humanity's best thoughts in mere seconds'.148 However, a LLM does so blindly, replicating the biases it has learnt from its vast and extensive datasets.
Second, despite a LLM's potential for art standardisation, human authors may still be able to compete. This paper has hypothesised that human-made art is more valued than machine-enabled art. The more machine-enabled art there is, the more human-made art is valued. However, human-made and machine-enabled art are indistinguishable. This creates a lemons problem. Asymmetrical information threatens the profitability of human-made art. A rule of origin constitutes a simple but efficient solution to this issue. Only this will prevent art from becoming a lemon.