To App or Not to App? Understanding Public Resistance to COVID-19 Digital Contact Tracing and its Criminological Relevance

In the context of the COVID-19 pandemic, digital contact tracing has been developed and promoted in many countries as a valuable tool to help the fight against the virus, allowing health authorities to react quickly and limit contagion. Very often, however, these tracing apps have faced public resistance, making their use relatively sparse and ineffective. Our study relies on an interdisciplinary approach that brings together criminological and computational expertise to consider the key social dynamics underlying people's resistance to using the NHS contact-tracing app in England and Wales. The present study analyses a large Twitter dataset to investigate interactions between relevant user accounts and identify the main narrative frames (lack of trust and negative liberties) and mechanisms (polluted information, conspiratorial thinking and reactance) to explain resistance towards use of the NHS contact-tracing app. Our study builds on concepts of User eXperience (UX) and algorithm aversion and demonstrates the relevance of these elements to the key criminological problem of resistance to official technologies.


Introduction
In response to the COVID-19 pandemic, contact-tracing apps have been developed and released in several countries, including the UK, as a measure to combat the spread of COVID-19, speeding up the tracing of the contacts of individuals found to be infected. 1 At the core of this approach is the notion that, although the novel coronavirus spreads too rapidly to be contained by manual contact tracing, it can be controlled and contained through the use of automated contact tracing via apps, if used by a sufficient number of people. 2 Such apps are generally based on practical hardware technologies (e.g., Bluetooth low energy and possibly GPS data), meaning they can be used by virtually anyone with a smartphone. In practice, however, these apps lack sufficient real-life testing. This represents a problem because their effectiveness, irrespective of the technology used, depends on socio-behavioural factors, such as public confidence and trust in the protection of privacy. 3 As stated in a recent editorial in Nature (2020), despite the global nature of the pandemic, at present, there is no global standard for the development of COVID-19 tracing apps. This raises a series of concerns, particularly accuracy concerns (because incorrect information being sent could create severe harm) and privacy concerns (in terms of whether individuals can be identified from the aggregated datasets). 4 In light of this, it is arguably unsurprising that these apps, particularly in privacy-conscious countries, have faced strong public resistance, and the resulting low uptake has made them relatively ineffective.
1 Ada Lovelace Institute, Exit Through the App Store? 2 Ferretti, "Quantifying SARS-CoV-2 Transmission," 6491. 3 Sweeney, "Tracking the Debate on COVID-19 Surveillance Tools," 301-304; von Wyl, "A Research Agenda," 29. 4 Among others, see Farronato, "How to Get People to Actually Use Contact-tracing Apps"; Rowe, "Contact Tracing Apps and Value Dilemmas."
Volume 3 (2) 2021 Lavorgna et al.
The present study relies on an interdisciplinary approach that brings together criminological and computational expertise to investigate a large Twitter dataset as a means of unravelling the social dynamics underpinning people's resistance to the NHS contact-tracing app across England and Wales. This study focuses on the broader issue of resistance to governance strategies that, in most cases, may not or should not be explicitly defined as 'criminal' or 'deviant' acts but that have nevertheless attracted the attention of criminology, with cultural criminologists, for example, exploring public resistance to forms of power and authority that are perceived as harmful and unjust. 5 Our study focuses on a specific form of resistance: public resistance to the perceived harms of governance technologies (i.e., data-driven tools for surveillance and control). This crucial area of study has attracted limited criminological attention but has become increasingly topical. Our study brings criminology to the forefront of this fast-growing area by offering new insights into the issue of public resistance to the NHS tracing app, the underpinning mechanisms (particularly the perceived harms of surveillance through governance tools) and its implications. More broadly, our study builds on conceptual tools from tech-design studies and demonstrates their relevance to a key criminological problem: resistance to official technologies. We focus on the concepts of User eXperience (UX) and algorithm aversion, both of which draw attention to the real-world contexts in which technology is deployed, to enhance criminological understandings of the factors underpinning resistance to official technologies, such as digital-tracing apps and identity-remedial strategies. The rapid proliferation of such automated decision-making technologies across several western and non-western jurisdictions renders this enquiry essential.

Theoretical Framework
Data-driven technologies are increasingly used to automate key policy decisions in the public and private sectors, and research suggests that public buy-in, or acceptance of a technology, is crucial for adoption. This is particularly true for technologies that, unlike coercive systems (e.g., electronic-surveillance devices deployed in justice systems), rely on voluntary adoption. Examples include the plethora of surveillance technologies that have emerged with recent technological advances. This paper focuses on the COVID-19 Track and Trace app that was introduced across England and Wales in 2020 to trace individuals who tested positive for the virus and thereby contain the pandemic. Research, however, has pointed to growing public resistance to this and similar apps. 6 Despite this, there is limited criminological insight into the mechanisms underpinning this resistance.
Contact-tracing apps are, in a broad sense, surveillance technologies that seek to govern and control human conduct. They differ substantially in nature and scope from technologies used traditionally in criminal justice settings, such as biometric surveillance technologies 7 and electronic monitoring devices. 8 Unlike criminal-justice technologies, tracing apps are (or should be) designed to reduce the risk of their use for mass surveillance and should have a defined purpose of promoting public health outcomes. Contact-tracing apps also occupy a very different space in public discussions on surveillance and, so far, appear to have received limited media and scholarly attention. However, although the technologies we focus on in this contribution are substantively different, there is a common denominator connecting them all to the currently proliferating smart technologies: they are all data driven and give rise to similar concerns of data injustice, privacy violations, opacity and other harms that erode public trust. 9 Criminological studies of surveillance technologies focus mainly on coercive systems, such as electronic monitoring devices. 10 The paucity of criminological insight is surprising, not least because the problem of resistance to official, policy-driven technologies is of great relevance to the discipline, particularly to the fast-growing strand of criminology that focuses on the design and adoption of emerging data-driven technologies, including rapidly proliferating predictive algorithms. To address this dearth of criminological insight, the present study draws on sections of the artificial intelligence design literature that explore the UX of data-driven technologies. Originating in industry settings and initially used by organisations seeking to embed user feedback in tech designs, UX studies generate the information required to develop responsive, user-friendly systems.
UX studies map people's perceptual and behavioural responses to an anticipated or already-deployed system. 11 While UX is dynamic and evolves in tandem with technological advances, a key finding of UX research is that user endorsement of the functionality, utility, usability and efficiency of a system is necessary for tech adoption. Added to this, to encourage uptake, even in multi-stakeholder conditions, tech design should be responsive to broader concerns, such as the sociocultural contexts of use and users' entrenched beliefs and interests. 12 Tech design should also factor in the extent to which a system could influence daily routines or even generate or exacerbate stressful conditions for users. 13 Related factors, such as public trust in the technologies 14 and in the authority encouraging adoption (in this case, the government), are also relevant, not least because, as Devine and colleagues observed, 'high levels of trust are seen to be a necessary condition for the implementation of restrictive policies and for public compliance with them'. 15 Studies investigating barriers to tech uptake also suggest that ignoring or paying insufficient attention to UX can foment 'algorithm aversion', a term that refers to the general reluctance of target users to adopt technologies designed to fully or partly automate tasks, instead preferring human judgement, particularly after observed or reported failures of the technologies. 16 Dietvorst and colleagues, however, found that uptake can be improved if users can modify what they consider to be flawed algorithms, demonstrating the importance of user input in tech design. 17 Insights from research into UX and algorithm aversion draw attention to the importance of exploring users' discourses about their experiences of new technologies to uncover mechanisms of resistance. Our study explores these issues, focusing on the digital-tracing app, the NHS Test and Trace, introduced in England and Wales in 2020.
5 Ferrell, "In Defence of Resistance"; Smith, "Driving Politics." 6 See, for instance, Abeler, Support in the UK. 7 For instance, Fussey, "'Assisted' Facial Recognition." 8 Nellis, "Surveillance-based Compliance." 9 Denick, "Exploring Data Justice"; Lavorgna, "The Datafication Revolution." 10 Nellis, Standards and Ethics in Electronic Monitoring. 11 Hinderks, "Developing a UX KPI." 12 Ferreira, "Universal UX Design."

Tracing Apps, Public Concerns and Compliance
In the unfolding of the COVID-19 pandemic, contact-tracing apps have been developed and released in various countries worldwide, and their use has become a subject of public debate. In response to a degree of public resistance to these apps, researchers and media commentators (as exemplified below, generally relying on studies based on national surveys or expert interviews) have sought to explain the factors underpinning this resistance. 18 While the results of these contributions are difficult to generalise in the evolving COVID-19 situation (where people's opinions can easily change depending on the evolution of the pandemic and the health, social and economic crises it provokes) and across countries (as many factors behind people's resistance may be situational and culture specific), they nonetheless offer important insights into individual choices on this issue, allowing common patterns to be identified. From this literature, in line with Farronato and colleagues, 19 privacy concerns appear to be the main barrier to the adoption of tracing apps. Privacy, as it appears in those studies, seems to be broadly construed and is mainly associated with an ideological commitment to avoiding interference from governments or big tech companies; in any case, this concept is not discussed in detail. This could be symptomatic of the sociocultural contexts of use. According to UX research, users vary in their beliefs about the extent to which the authority encouraging use can be trusted to embed privacy protections in the system. Privacy concerns could also reflect the problem of algorithm aversion stemming from highly publicised data breaches in recent years. 20 Developers and proponents contend that the adoption of stricter privacy protections limits the effectiveness of the technology in controlling the spread of the virus. Nevertheless, the privacy concerns experienced by the public appear, to an extent, to outweigh the perceived benefits.
For example, the value of tracing apps is not as immediately clear as it may have been if such apps had initially been implemented in small-sized communities before national rollout. This reinforces what UX studies have revealed about the importance of considering users' views on the utility of a new technology.
Concerns about privacy and data security are, unsurprisingly, also at the core of the debate in contexts in which minority groups risk persecution or where segments of the population are concerned that data leakages may lead to increased risk for and stigma against individuals who test positive for COVID-19. More generally, individuals may be reluctant to provide personal information when there is insufficient information and transparency about how an app works and how data are collected, protected, stored and shared. 21 In these cases, the concept of privacy appears to align with the defence of positive rights-such as freedom of expression and the conditions necessary for human flourishing-as well as with data protection rights. However, similar concerns have also been expressed in countries with stronger data-protection regulations (including the UK), with potential users concerned about privacy violations and the possibility that their personal data will be used to fuel data-driven surveillance by private companies or the government after the pandemic is over. 22
13 Tromp, "Design for Socially Responsible Behaviour." 14 Consider, for instance, Shin, "User Perceptions of Algorithmic Decisions"; Shin, "Beyond User Experience." 15 Devine, "Trust and the Coronavirus Pandemic." 16 Berger, "Watch Me Improve"; Dietvorst, "Overcoming Algorithm Aversion." 17 Dietvorst, "Algorithm Aversion." 18 For instance, Farronato, "How to Get People to Actually Use Contact-tracing Apps." 19 Farronato, "How to Get People to Actually Use Contact-tracing Apps." 20 See, for instance, BBC News, "NHS Data Breach." 21 Fitriani, "COVID-19 Apps." 22 Farries, "Covid-tracing App may be Ineffective and Invasive of Privacy"; Garret, "A Representative Sample of Australian Participant's Attitudes"; O'Callaghan, "A National Survey of Attitudes"; Weaver, "Don't Coerce Public."
In line with the findings of UX studies that highlight perceived tech efficiency and trust in authorities as key public concerns, Panda Security's survey of nearly 2,000 members of the UK public found that the pandemic has brought issues of trust, and their link to perceived competence, to the fore. 23 In the survey, one-third of respondents had no trust at all in the government to successfully track and trace the virus through mobile apps. Similarly, using a 10-minute online survey administered in March 2020 (n = 1055) among British residents (asking hypothetical questions about future behaviour), Abeler and colleagues identified wide support for app-based contact tracing (with about three-quarters of respondents reporting that they would definitely or probably install the app, a figure higher than that reported for populations in other countries). However, respondents who lacked trust in the government were less favourable. Abeler and colleagues found that respondents' main reasons for not installing the app were a perceived increased risk of government surveillance after the epidemic, the fear that installing the app would lead to increased anxiety about the epidemic, and the fear of having their phone hacked. The same survey conducted in other western countries yielded very similar results. 24 A number of other social and practical barriers to tracing-app adoption have also been identified in UX research, including poverty, the inability to buy or use a smartphone, the inability to download the app, an unmet need for more information and support, and concerns about phone-battery usage. 25
However, in line with findings that perceived utility and efficiency can encourage user endorsement and adoption, 26 many individuals surveyed (with the precise proportion varying across countries, from approximately 50% to 60% in most studies to higher in Abeler and colleagues' UK study) have recognised some benefit in downloading and using tracing apps, in primis for their potential to help family and friends, to engage in collective responsibility to the wider community, and when the system is perceived as efficient, rigorous and reliable. 27 Lia and colleagues identified pro-socialness (i.e., voluntary actions a person undertakes to help, take care of, assist or comfort others), COVID-19 risk perception, general privacy concerns, technology readiness and demographic factors as more important predictors of an individual's willingness to use tracing apps than app-design choices (e.g., decentralised vs centralised design, location use and app providers) and the presentation of security risks. 28 Along with studies focusing on (potential) users' behaviours, conceptual, theoretical and systemic considerations surrounding contact-tracing apps have also been debated, with discussions on their system architecture, data management, privacy, security, proximity estimation and attack vulnerability. 29 Of particular interest are the concerns raised by Rowe, who, focusing on a tracing app launched in France after a heated public debate, took a critical stance. 30 Rowe stressed how the app, despite its short-term benefits, could create long-term concerns about potential encroachment on civil liberties if it induced significant risks to informational privacy, surveillance and habituation to security policies, potentially fomenting discrimination and public distrust.
Others have also cast doubt on the necessity of tracing apps; for example, commenting on the situation in Singapore, Woo argued that it was the fiscal, operational and political capacities built up after the SARS crisis, rather than tracing apps, that contributed to Singapore's relatively low fatality rate (despite its high infection rate) and its contact-tracing capabilities. 31 Indeed, a lack of technological literacy among some quarters of the population, and, more likely, concerns about data privacy and a lack of trust in the government's ability to safeguard individuals' personal data, probably explain why the local TraceTogether app was not widely downloaded by the Singaporean population.
While the studies discussed so far reveal the key themes and issues, in primis privacy, affecting individuals' willingness to comply with the use of contact-tracing systems, they have some limitations. Methodologically, most of the studies used surveys administered over a limited timeframe, though public opinion on the pandemic likely changes as the pandemic evolves. Simko and colleagues longitudinally measured evolving public opinion in the US on the tension between effective technology-based contact tracing and individuals' privacy using online surveys; however, the study's sample size of 100 participants per survey was very limited. 32 In addition, while UX studies suggest that exploring the willingness to comply with contact tracing (mostly in quantitative terms) can inform policy-making (as the effectiveness of an app largely depends on the public's willingness and ability to support this type of measure), it is important to qualitatively consider the sociocultural and practical dynamics underlying public resistance to tracing apps and the socio-technical impediments to using them.
23 Panda Security, "Apathy in the UK." 24 Abeler, Support in the UK. 25 Farries, "Covid-tracing App may be Ineffective and Invasive of Privacy"; Garret, "A Representative Sample of Australian Participant's Attitudes"; Megnin-Viggars, "Facilitators and Barriers to Engagement"; Tromp, "Design for Socially Responsible Behaviour." 26 Ferreira, "Universal UX Design." 27 O'Callaghan, "A National Survey of Attitudes"; Megnin-Viggars, "Facilitators and Barriers to Engagement"; Simko, "COVID-19 Contact Tracing and Privacy"; Walrave, "Adoption of a Contact Tracing App." 28 Lia, "What Makes People Install a COVID-19 Contact-Tracing App?" 29 Ahmed, "A Survey of COVID-19 Contact Tracing Apps"; Mbunge, "Integrating Emerging Technologies." 30 Rowe, "Contact Tracing Apps and Values Dilemmas." 31 Woo, "Policy Capacity and Singapore's Response to the COVID-19 Pandemic." 32 Simko, "COVID-19 Contact Tracing and Privacy."
In our study, we offer an empirical, methodological and conceptual contribution that combines computational capacity to investigate a large social media dataset covering 10 months with qualitative criminological expertise to reflect on emerging issues of public trust, governance and the use of personal data for the public good. These issues are at the basis of people's resistance to using tracing apps but are unlikely to peter out after the debate about COVID-19 contact-tracing apps is over. From a criminological standpoint, our study uncovers new insights that can expand current understandings of resistance to the new data-driven surveillance technologies that are currently transforming the landscape of decision-making across the private and public sectors, including the justice system. While existing criminological literature has, to date, focused on coercive surveillance systems, such as electronic tags, generating highly useful insights, 33 our study expands the field by investigating a surveillance technology that relies on public acceptance and voluntary adoption for effective deployment.

Research Design
Inspired by insights from UX and algorithm-aversion studies, as well as other studies on public reactions to digital-tracing apps, we analysed a relatively large dataset of tweets to examine public discourse on the NHS COVID Track and Trace app in England and Wales and identify mechanisms of resistance. We identified and analysed relevant accounts and the interactions between them to understand the drivers of this national conversation and to identify the main narrative frames and mechanisms explaining and enabling people's resistance to using the tracing app. Tweets were collected retrospectively, starting with the oldest relevant tweet, published on 6 March 2020, and continuing until 31 December 2020. Tweets were collected if they included any combination of the keywords and phrases (track and trace NHS; track and trace app; no to track and trace; track and trace I refuse; not use track and trace; against track and trace), which were chosen after preliminary manual searches to determine the most frequently occurring terms in this context. We identified relevant tweets post hoc from searches on the Twitter Web app (https://twitter.com/search-advanced) using the Web Data Research Assistant software developed by one of the present study's authors. 34 The search yielded a total of 54,941 tweets (including 4,269 hashtags) tweeted from 38,713 Twitter accounts over the 10 months considered. 35 We adopted a sociotechnical approach, developed through a sequence of five main iterative stages: (1) developing keyword and hashtag lists, (2) automated data collection through a computational tool, (3) identification of relevant hashtags and keywords in the dataset, (4) information extraction and qualitative analyses (through the use of keyword-in-context displays, sentiment analysis with n-grams, development and refinement of a qualitative conceptual map, and social network analyses) and (5) qualitative checks for bias minimisation.
The methodological process is described in detail in the project report. 36 Though we used computational tools for the analysis (often associated with positivistic research approaches), we mainly relied on a constructivist epistemology. As such, this research did not aim to identify ultimate laws but rather to offer meanings that are relevant through interpretation.
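To illustrate, the retrospective keyword filter described above can be sketched as a simple predicate; the function name and sample data are illustrative only (the actual collection used the Web Data Research Assistant browser extension, not custom code):

```python
from datetime import date

# Keyword phrases used for collection (listed in the text); a tweet is
# kept if its text contains any phrase and falls in the study window.
PHRASES = [
    "track and trace nhs",
    "track and trace app",
    "no to track and trace",
    "track and trace i refuse",
    "not use track and trace",
    "against track and trace",
]

START, END = date(2020, 3, 6), date(2020, 12, 31)

def is_relevant(text: str, posted: date) -> bool:
    """Return True if the tweet is in the window and matches a phrase."""
    if not (START <= posted <= END):
        return False
    lowered = text.lower()
    return any(phrase in lowered for phrase in PHRASES)
```

A filter of this kind is case-insensitive but matches only exact phrases, which is why the keyword list was refined through preliminary manual searches.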

Interactions Between Accounts and Conversation Drivers
First, we sought to understand the interactions between the accounts to identify the drivers behind relevant conversations. To do so, we mapped the conversational network obtained by connecting two accounts where one replied to or mentioned the other. We considered only those accounts that contributed a tweet in our collection, not the 18,351 other accounts that were mentioned but did not otherwise participate in the conversation by tweeting on the topic. The network was plotted in Gephi using the Force Atlas layout (see Figure 1). As Figure 1 shows, we identified (1) an outer ring consisting of 26,140 isolated individuals who tweeted but received no replies to their tweets; (2) a middle ring of 1,107 small disconnected groups (ranging from 2 to 99 accounts each) that replied to each other, accounting for 2,832 accounts in total; and (3) a strongly connected central core of 9,741 accounts. In other words, the outer ring consisted of just over two-thirds of the accounts, the central core of just over a quarter and the middle ring of the remaining 7%. 37 The network was coloured according to Gephi's modularity calculations, which identify the parts of a network that are highly modular in the sense that they are internally linked or well-connected clusters; these are located in the central core and correspond to the appearance of 'clumps' in the layout. Node sizes are related to the number of inlinks (that is, the number of times that other accounts have contacted that account).
33 Nellis, Standards and Ethics in Electronic Monitoring. 34 Available for download at http://bit.ly/WebDataRA. The software is a Chrome browser extension that monitors pages that the researcher browses (e.g., social media timelines and search results) and saves relevant data and metadata as a spreadsheet. 35 Of 38,713 accounts, 2,530 were considered 'dormant' (i.e., they were used to tweet on any subject less than once per week) and 1,437 were probably automated (as they tweeted more than 50 times per day). 36 See Lavorgna, Understanding Public Resistance. 37 The existence of the rings and their placement is an artefact of the Force Atlas network layout algorithm, determined by the balance of attractive forces configured between linked accounts and the repulsive forces between non-linked nodes.

Figure 1. Network of interactions (full)
The majority of users we observed outside the core cluster (isolated individuals) were, therefore, 'shouting into the void': they were not part of any joined-up conversation, but their voices, as is further explored in the next section, were still relevant to our analysis. Through tweeting, they raised a number of themes that offered valuable insight into their feelings on the topic of interest.
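For readers interested in how the three-part structure above (isolates, small disconnected groups, connected core) can be recovered from a reply/mention edge list, a minimal connected-components sketch follows; the function names, toy data and size threshold are illustrative only, not the study's actual Gephi workflow:

```python
from collections import defaultdict

def components(edges, nodes):
    """Connected components over an undirected reply/mention graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def classify(edges, nodes, core_min=100):
    """Split accounts into isolates, small groups and larger components,
    mirroring the outer ring / middle ring / core description above."""
    isolates, small, core = [], [], []
    for comp in components(edges, nodes):
        if len(comp) == 1:
            isolates.append(comp)
        elif len(comp) < core_min:  # the study's small groups: 2-99 accounts
            small.append(comp)
        else:
            core.append(comp)
    return isolates, small, core
```

Note that the visual 'rings' themselves are a layout artefact (see footnote 37); only the component sizes are a property of the network.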
Next, to understand the sociocultural and other key drivers at play, as identified in the UX and algorithm-aversion literature, 38 we sought to identify the conversation drivers (i.e., the types of social media actors setting the tone in the conversations observed). When focusing only on the connected core of the network (see Figure 2), in which most of the conversations occurred, we noticed that it was dominated by large clumps of nodes, with long threads emerging from the clumps. The clumps were centred around high-status public broadcasters (e.g., Sky and the BBC) and political institutions (e.g., Downing Street). Clumps formed when many individuals responded only to a single account (a network hub) but did not interact with each other. The threads that emerged from these clumps were chains of commentators responding to each other's contributions. Occasionally, the discussants took part in multiple chains and hence created the 'tangles' shown in Figure 2. This central interaction consisted mainly of individuals responding to journalists and prominent politicians, as well as to the official account for the NHS app (as detailed in Table 1 in the Appendix, which lists accounts with more than 150 replies [indegree > 150]), suggesting that much of the visible conversation was driven by broadcaster and political accounts whose tweets initiated responses.

Figure 2. At the core of the network
High-status Twitter accounts with many followers tended to generate more engagement in a conversation because a greater number of individuals saw their tweets. However, when considering the 25 highest-status organisations (with 1M+ followers) in our dataset (see Table 2 in the Appendix), we observed that many of them (e.g., the Economist) obtained almost no response to those of their tweets that were relevant to the scope of this study. Further, there was a notable absence of health organisations and professionals; 39 overall, it appeared that health organisations did not participate significantly in Twitter debates about the use of the NHS app.
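The indegree filtering behind Table 1 (accounts with more than 150 replies) amounts to counting inlinks over the directed reply/mention edges; a minimal sketch follows, with illustrative account names and an illustrative threshold:

```python
from collections import Counter

def reply_hubs(directed_edges, min_indegree=150):
    """Count inlinks (replies/mentions received) and keep the hubs.

    directed_edges: iterable of (source, target) pairs, where `source`
    replied to or mentioned `target`.
    """
    indegree = Counter(target for _, target in directed_edges)
    return {acct: n for acct, n in indegree.items() if n > min_indegree}
```

Applied to the full edge list, a count of this kind surfaces the broadcaster, political and official NHS-app accounts that anchored the clumps described above.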

Frames and Mechanisms of Resistance
Having clarified the structural aspects of our social network of interest and identified the conversation drivers, we then looked more in-depth and qualitatively at our dataset to unpack the key dynamics underlying people's resistance to using the NHS contact-tracing app.

Analytical Approach
To understand the prevailing themes underlying Twitter users' resistance to the use of the NHS tracing app, we first focused on their language and built a concept map. Frequency tables were created based on hashtags, keywords and n-grams (i.e., phrases of two or more words) present in the full dataset. Two researchers began by manually examining hashtags, starting with the most frequently used. Though only a minority of tweets (15%) used hashtags, we interpreted their use as a way of deliberately and explicitly entering the public discourse on our topic of interest on Twitter. The researchers then considered keywords and n-grams to expand and refine their conceptualisations until thematic saturation was reached (after approximately 800-1,000 words per table). Though overall frequency was a useful indicator of the 'value' of a keyword in our analysis, we decided against setting a predefined frequency threshold, as less frequently used words could still be valuable for illuminating relevant themes. We used the concordance tool to analyse the context in which useful words emerged, as it allowed us to highlight the keyword in its original context 40 (see Figure 3).

Figure 3. Example of the keyword-in-context display for 'privacy'
This approach was used initially for the complete dataset and again for the frequency tables that were built separately for each part of the network, as described above (isolated individuals, disconnected small groups and connected core) to identify whether there were differences among these parts.
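The frequency-table and keyword-in-context steps described above can be sketched minimally as follows; this is illustrative only (the study used a dedicated concordance tool rather than custom code), and the toy corpus is invented:

```python
from collections import Counter

def ngram_table(texts, n=2):
    """Frequency table of word n-grams across a corpus of tweets."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        counts.update(
            " ".join(words[i:i + n]) for i in range(len(words) - n + 1)
        )
    return counts

def kwic(texts, keyword, window=3):
    """Keyword-in-context lines: the keyword with `window` words around it."""
    lines = []
    for text in texts:
        words = text.lower().split()
        for i, w in enumerate(words):
            if w == keyword:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left} [{keyword}] {right}")
    return lines
```

Building one table per network part (isolates, small groups, core), as described above, is then a matter of calling `ngram_table` on each subset of tweets.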
To conceptualise the data, notes were individually taken and then shared, discussed and integrated, drawing on insights from the UX literature and algorithm-aversion studies, as well as their links to tech resistance, to produce the qualitative conceptual map shown in Figure 4. For clarity, this map shows only the main connections identified, highlighting the main themes and how they are connected.
40 Ross, "Discursive Deflection."

Figure 4. Conceptual map
Through this conceptual map, we identified two main narrative frames (lack of trust and negative liberties), reinforcing the findings of algorithm-aversion studies on public distrust or a 'data trust deficit' as being central to public resistance to data-driven technologies. 41 We also identified three main mechanisms (polluted information, conspiratorial thinking and reactance) underlying people's resistance to using the NHS contact-tracing app, which are discussed in the following subsections.
While all these frames and mechanisms were identified in all parts of the network, certain differences emerged from the frequency tables built separately for each part. For example, the isolated individuals were predominantly oppositional to the Conservative government and its members, indicating an entrenched lack of trust in the government and suggesting that, to understand UX, algorithm aversion and their links to tech resistance, consideration should be given to the sociocultural contexts of new tech deployment (e.g., the level of trust in the authority encouraging adoption). Lack of trust can manifest in many different ways, and future research should further examine its relationship with different sociocultural features to better understand its nuances and to devise ad hoc interventions to restore public trust, which is fundamental in public health contexts. 42 We also noted the presence of unsubstantiated, imprecise or misleading claims on the more scientific aspects of the pandemic (e.g., herd immunity).
Only in the disconnected small groups did the theme of 'suffering' (which included a broad range of sub-themes on harms suffered, ranging from suicide and domestic violence to trauma and injustice) emerge as prevalent. As previously noted, UX research has shown that practical concerns about the capacity of technologies to generate or exacerbate stressful conditions for users should be considered during design and deployment to avoid resistance.
In the connected core, discussions also appeared to pivot around issues of privacy and alleged or perceived corruption, reinforcing the findings of UX research on the importance of sociocultural contexts in tech design, as well as findings from algorithm-aversion research, which has cited privacy violations as a factor fuelling public resistance. The UX literature also suggests that users' beliefs and values can provoke resistance.
The finding of the link between alleged and perceived corruption and resistance also reflected a lack of trust in the government and formed part of the sociocultural milieu from which resistance emerged. This reinforces previous findings from the algorithm-aversion scholarship on the inextricable link between lack of trust in government and resistance.

Frames of Resistance
Though the themes identified in our manual analysis were varied and heterogeneous, we traced them back to two main narrative frames: lack of trust and negative liberties.
In the tweets examined, 'trust' was articulated in many different ways (e.g., lack of trust towards the Conservative government, towards a private company considered to be involved in the NHS app, towards the security and effectiveness of the app and towards societal trends of increasing datafication). These various articulations of trust were linked to diverse types of concerns (which are beyond the scope of this contribution) but nonetheless appeared interrelated, as suggested by previously cited studies. Prior research has emphasised how various factors intersect to fuel public distrust and resistance; future studies should further delineate the precise impact of each of these factors. Our study provides new insights into a range of factors and illuminates how the unique libertarian opportunities provided by Twitter and other social media platforms allow users of various sociocultural, socioeconomic and political backgrounds to broadcast their distrust of, and resistance to, the new smart technologies of surveillance and governance (in this case, COVID-19 digital-tracing apps). The diverse but possibly intersecting narratives of distrust are 'pushed through' via a large number of sites for engagement, hence being able to attract the interest of diverse populations of individuals. 44 At the heart of individuals' lack of trust is the perceived incompetence of the actors involved, who are seen as flawed, corrupt, hypocritical and unaccountable for their actions or inaction. This mistrust is pivotal to understanding why, in some of the tweets observed, users appeared to be unencumbered by the social norm of protecting themselves, those at risk and, consequently, society at large and the economy, with their beliefs and behaviours becoming dependent on situational factors.
This is in line with the 'drift' and 'digital drift' approaches in criminology, according to which the perceived lack of legitimacy or effectiveness of the criminal justice system can lead to delinquency, creating a 'sense of injustice' towards authorities; here, individuals feel freed from social norms, and their behaviours become dependent on transient opportunities and preferences. 45 Further, tech scepticism, including algorithm distrust, appeared to play a key role. Techno-scepticism and public distrust of algorithms, which typically manifest as claims about the purpose and effectiveness of technological solutionism and automated processes, as well as the decisions algorithms make, have been fuelled in part by highly publicised cases highlighting the harms of certain data-driven algorithms (e.g., biases in areas such as criminal justice decision-making 46 and the distribution of healthcare resources 47). Tech scepticism is also reinforced by the growing awareness of ethical issues, such as privacy violations, and the interrelated problems of poor explainability, transparency and accountability. 48 In the context of contact-tracing apps, concerns have been raised about the potential for widespread techno-surveillance, the outsourcing of expertise and sensitive (including health) data to tech giants and the consequent infringement of citizens' rights during times of emergency politics. 49 The propagation of polluted information (as discussed below) adds yet another vital dimension to the growing problem of tech scepticism. Researchers who have explored how social media was used to improve or reduce trust in scientific expertise during the COVID-19 pandemic, for example, have highlighted its capacity to be deployed as a mechanism of misinformation to undermine public trust in scientific expertise and accompanying systems, such as algorithms. 50
Such problems can trigger algorithm aversion which, as we have seen, refers to resistance to technologies designed to automate tasks and a preference for human judgement or intervention. 51 In line with recent reports in the literature on resistance to tracing apps, the value of privacy (broadly intended) and, more generally, the importance of protecting personal data from unwanted surveillance or control by the government or big tech companies appear to be important concerns. There is a desire to contest perceived unwelcome incursions and attacks that hinder the right to privacy, with dimensions of vertical (institutional) privacy being of greater concern in the tweets observed than dimensions of horizontal privacy (i.e., privacy between users of social media). 52 From this perspective, it is important to contextualise privacy issues, as well as other issues observed in the analysis, such as COVID-19 denialism and algorithmic distrust, in the broader frame of negative liberties (i.e., a specific type of individualistic freedom that manifests in the absence of constraints, as opposed to ideas of collective freedoms and liberties focused on the possibility of acting to realise one's fundamental purpose). This is in line with populist libertarian views and ideas of self-reliance (and, often, minimal government). 53 These systems of beliefs and worldviews have an important role in science denialism, 54 as scientific evidence is rejected when it is perceived as a threat to personal freedom, in line with the psychological mechanisms of reactance 55 discussed below; in the context of the pandemic, these systems have played a fundamental role in the opposition to preventive measures, such as lockdowns, limitations on travelling and gathering and the use of masks, which are seen as undue interferences with individual and group liberties. 56
44 For instance, Johnson, "The Online Competition"; Lavorgna, "To Wear or Not to Wear."
45 See Matza, Delinquency and Drift; Holt, "Digital Drift"; Lavorgna, "Information Pollution as Social Harm."
46 For instance, Angwin, "Bias in Criminal Risk Scores."
47 For instance, Price, "Hospital 'Risk Scores'."
48 Pasquale, The Black Box Society.
49 Csernatoni, "New States of Emergency."
50 Clayton, "Real Solutions for Fake News?"; Llewellyn, "COVID-19: How to be Careful with Trust."
51 Dietvorst, "Overcoming Algorithm Aversion."
52 In line with Lavorgna, Information Pollution as Social Harm.

Mechanisms of Resistance
Besides the narrative frames discussed above that inform individuals' resistance to using the NHS contact-tracing app, we identified from the conceptual map three main mechanisms of resistance (polluted information, conspiratorial thinking and reactance) that illuminate the factors breeding high levels of public distrust. These factors are primarily sociocultural in that they reflect the current social and cultural climate of app deployment. As the previously discussed UX studies suggest, these factors should be considered during design and subsequent deployment to enhance responsiveness and minimise resistance.
'Polluted information' is an umbrella term that encompasses misinformation (i.e., false information that is shared without the intent of harm), disinformation (i.e., false information that is knowingly shared to cause harm) and mal-information (i.e., genuine information that is shared to cause harm). 57 Polluted information began to be studied in cyberspace as a particularly devious variant of information warfare that can be propagated via countless platforms and cause great social harm by making individuals less knowledgeable, sharpening existing sociocultural divisions and fomenting scepticism towards legitimate news producers and accurate reporting. 58 In the context of public health, the phenomenon of polluted information has received criminological attention in recent times as an important enabler of the propagation and success of medical misinformation, causing major social harm. 59
This study revealed that polluted information has facilitated a wealth of misleading health-related information (e.g., enabling antimask and antivax views and questioning the importance of physical distancing) and, together with conspiratorial thinking, has also fostered COVID-19 denialism and foregrounded discourses that minimise the health risks of COVID-19 (meaning that fewer people download and use the app, as they do not believe there is a real or serious health problem). A strand of polluted information has also propagated false and misleading information on the role of public companies in the NHS app; notable in this sense are the tweets focusing on Serco linked to negative themes, such as corruption, conflict of interest, cronyism and a lack of trust. Serco is a private company contracted to provide a range of public services in the UK (including in the Test and Trace process, as it manages some facilities and call centres). However, the company played no role in the creation of the NHS Test and Trace app and is not processing its data. 60
The term 'conspiratorial thinking' conjures images of a group of agents working together in secret, often for a sinister purpose. 61 In the present study, conspiratorial thinking was mostly observed as the driving force behind COVID-19 denialism and appeared to underpin the idea that the NHS Test and Trace app is part of a clandestine plan for mass control. Similar to what was recently observed in ethnographic studies grounded in criminology that examined online communities during the pandemic, 62 while some concerns over the use of mechanisms of social and institutional control may be legitimate, conspiratorial thinking manifests in the assumption that a clear direction exists, with science-fiction elements ending up overshadowing realistic concerns about the potential for extreme dataveillance. It is not surprising to encounter conspiratorial thinking in the context of the pandemic: conspiracy theories are often adopted defensively, as they offer individuals a compensatory sense of control and help them feel a sense of power by rejecting official narratives, 63 particularly when there is a need to overcome feelings of alienation or anxiety in times of large-scale social change. 64 Importantly, conspiratorial thinking should not be dismissed as a weird or fringe belief, as such thinking can drive majorities to act in political, health and social decision-making. 65
'Reactance' refers to how individuals tend to be averse to having their freedom or ability to act in a particular way restricted. When this happens, they tend to reject evidence that is perceived as a threat to their ability to act (or not act) in a certain way. 66 The NHS app is, in a way (and similarly to other preventive and mitigative measures imposed or suggested during the pandemic, such as physical distancing, the use of masks and lockdowns), seen as undue interference with individual liberties, with official recommendations, at times, being disregarded or opposed.

Discussion and Conclusion
In our study, we combined criminological expertise and qualitative approaches with computational capacities to investigate people's resistance to using the NHS Test and Trace app. We identified three main parts of the network (isolated individuals, disconnected small groups and a connected core), with some differences in the type of accounts involved and themes discussed. The prevailing narrative frames (lack of trust and negative liberties) and mechanisms (polluted information, conspiratorial thinking and reactance) underlying people's resistance to using the app were also discussed. Our interdisciplinary research team adopted an exploratory and iterative process that aimed to make larger (and more complex) datasets more accessible for qualitative investigation and to untangle our research puzzle in a more comprehensive way. The interaction between the computer scientist, who managed the data collection and the quantitative aspects of the network analyses, and the social scientists, who provided subject-matter expertise and theoretical oversight, enabled us to observe general trends and to zoom in and analyse in more depth subsets of relevant data. 67 We aimed to uncover insights based on the language contained in the tweets, the context of the authors of the tweets and the interactions between those authors who contributed to a national conversation (the content of the tweets analysed was likely relevant to a broad segment of the population, as it related to a behaviour that the entire adult population of England and Wales was expected to engage in). While this discussion played out across other social media platforms and in offline spaces, not just on Twitter, focusing on Twitter allowed us to examine aspects of these wider conversational engagements. The conversations (and the lack thereof) that took place in our dataset suggested some avenues for further research and had practical implications.
For example, our results invite the question of whether health organisations (which, as we have seen, rarely entered Twitter debates about the use of the NHS app) could be more active online to encourage better-informed discussions. Further, as most conversations centred around tweets by broadcasters and other significant accounts, could a more informative and less provocative use of tweets by traditional media outlets and journalists help avoid the online polarisation of health-sensitive topics? From a strictly criminological perspective, our findings reinforce insights from the UX literature, highlighting key dynamics that should be integrated into frameworks for understanding public resistance to new digital technologies, particularly surveillance systems, such as digital-tracing apps. One such dynamic is the recognition of the sociocultural context (i.e., the accepted beliefs, norms and practices that prevail among target users) in which tech design and adoption take place. Indeed, criminological studies exploring how frontline criminal justice practitioners deploy data-driven technologies, such as risk-prediction algorithms, have found that sociocultural resistance can discourage deployment and even trigger algorithm aversion. Such resistance can be provoked by perceived conflicts between the technologies and practice cultures, doubts about the social utility and technical efficiency of the technologies and a lack of trust in their fairness. 68 Evidence also suggests that some police officers doubt the utility of predictive-policing algorithms and express concern and distrust about their fairness for socially marginal communities. 69 Interestingly, our study similarly uncovered sociocultural mechanisms of resistance, albeit in a different context of tech usage and characterised by different intersecting factors, such as polluted information, conspiratorial thinking and ontological insecurity.
This suggests that, even if their manifestations change across different contexts, the sociocultural dynamics prevailing at any one time and in any context should inform policy strategies that aim to address resistance to vital technologies, such as apps that can improve public health.
Unfortunately, the mechanisms of resistance we identified are difficult to counter and mitigate. Polluted information, for example, touches on the very delicate equilibria needed to promote and protect the right to freedom of opinion and expression; polluted information can be enjoyable (as it is more pleasant for consumers to read partisan news in line with their system of beliefs), cheap to obtain and difficult to identify. 70 Conspiratorial thinking finds very fertile ground in situations in which people's need to feel safe and secure and to exert control over their existence is threatened; conspiratorial thinking can, in such circumstances, be highly successful, as it helps promote individuals' feelings of agency and power. 71 The phenomenology of reactance, similar to other psychological mechanisms underlying science denialism, 72 reminds us why simply bombarding denialists with accurate scientific information does not lead to a change in attitudes. So far, interventions targeting these mechanisms have not been conducted in a coordinated way. 73 As recently discussed in the context of harmful polluted information online, 74 profound architectural changes have not occurred, and interventions from online intermediaries that target the source are proving relatively ineffective and have the potential to create serious tensions with individual rights. Further, debunking activities have often proved ineffective and potentially even counterproductive, increasing polarisation and facilitating displacement towards more protected social media.
As is the case with other online harms, there is no single best strategy; a sustained and multilayered effort between a wide range of institutions, individual actors and technologies is therefore needed to meet a fundamental social challenge that goes well beyond convincing individuals to use an app. Rather, this challenge has to do with improving public scientific literacy and critical thinking and restoring public trust. Such trust, however, must be earned, and this involves improving effectiveness and (institutional, political and algorithmic) transparency.
72 Prot, "Science Denial."
73 Kreko, "Countering Conspiracy Theories"; Larson, "Blocking Information."
74 Lavorgna, Information Pollution as Social Harm.