Updated: Jan 11
As digital technologies become increasingly embedded in the infrastructures of everyday life, concerns about the abuse of such technologies grow in tandem (Bellasio et al., 2020; Mantello, 2016; Moore & Rid, 2016; Shull, 2014). The democratic, accessible and borderless nature of the internet is both a blessing and a curse. Hyper-connectivity, anonymity and instant communication have widened the horizons of opportunity for anyone with access to the internet, including cybercriminals. After all, code does not discriminate - yet. It is widely accepted that developments in digital technology have created novel exploitative opportunities which have given rise to digital crime (Powell et al., 2018: 20), in line with routine activity theory (Felson, 2000: 208). Activity in the digital realm does not occur along a binary of innocent and guilty actors, however, especially considering the absence of universal laws or efficient international governing bodies to address digital crime (Shull, 2014: 3). A discussion of digital crime should therefore take into account the legal and moral grey areas that accompany attempts at its regulation and prosecution. Hence, the definition of digital crime operationalised in this essay hails from a more zemiological perspective, extending beyond the cold facts of law. A digital criminal act is understood as one which breaks a rule or law, carries with it wider-reaching social implications (SCCJR, 2015: 1), and is significantly facilitated by digital technology or occurs solely within cyberspace.
The interests of corporations, governments and individuals are also becoming increasingly fused in the neoliberal context (Mantello, 2016: 2). I therefore define the politics of technology as the (re)negotiation of the use and regulation of technology in view of newly emerging socio-political implications of developments in digitally mediated crime, according to the varying interests of individual, state and corporate actors. Because of the simultaneously democratic and centralising nature of the internet and the speed with which digital technologies have emerged and developed, stakeholders in said negotiation processes exist in many different forms and are all involved in the solidification of the rules and norms which govern the digital space.
To tease apart the inherently complex politics of technology, I examine ethically charged dynamics of power, civil liberties, security, transparency and domestic and international politics that are necessarily implicated in the establishment of legislation and policies of technology such as encryption policy, as well as in domestic and international state espionage, digital warfare, and explicitly criminal digital enterprises like online drug markets. I emphasise the softness of the boundaries around what can be considered digital crime and address the politics of technology on a larger scale, introducing the idea of a digital ‘Cold War’ in view of the increasingly common employment of state-sponsored digital crime for the pursuit of economic and political interests (Bellasio et al., 2020: 4; Caesar, 2021; Shull, 2014).
In order to understand the varying perspectives and interests of stakeholders in digital crime and its prosecution, we must first examine points of tension. The democratic and adaptable nature of developments in digital technology is one such issue. Advancements in artificial intelligence (AI) and machine learning (ML), for example, are not limited in their application. Whilst ML can be used to improve the efficiency of automated cybersecurity software, it can also be used to identify and address faults in malware code for improved efficiency of cyber-attacks (Bellasio et al., 2020: 5). The implications of this are twofold. Firstly, if law enforcement and perpetrators of digital crime are using the same technologies, or at least advancing in tandem, the power dynamics between the two are far more balanced than in traditional notions of state power and control, in which certain resources are reserved for use by state powers. In other words, because the use of sophisticated digital technology and the power that comes with it are limited primarily by proficiency rather than by physically regulated access, law enforcement runs the risk of losing a significant degree of authority and power in the digital realm. Regulating the production of and access to physical objects of power – like guns issued to police, for example – is far easier than regulating the reproduction of and access to informational objects of power like code. The potential for a power balance is offset by state access to funding for the development of new technologies, however, as is discussed later.
Secondly, taking the double-edged nature of technological advancements into account raises ethical issues around regulation and access. As Moore & Rid state, encryption policy is a litmus ‘test of the values of liberal democracy in the twenty-first century’ (2016: 7). Attempting to restrict the use of protective technologies like cryptography, for example – which preserve privacy on messaging services, protect political activists from persecution and secure sensitive information in online transactions (Moore & Rid, 2016: 8) – simply because they have the potential to be abused is akin to restricting the use of shutters on homes in view of the possibility of a crime occurring within those four walls. If we consider the internet a virtual extension of the physical world, it is reasonable to suggest that the same respect for rights to privacy be afforded in digital spaces as in physical ones. This issue is inherently linked to political ideology, however. Debates on the level of control and surveillance a government should have over its citizens lie at the heart of ideological conflicts beyond the digital realm.
An example of increasing state intervention in private affairs is the data-trailing and digital surveillance programme targeting Muslim communities in Japan as part of pre-emptive policing against terrorist activity (Bakkarly, 2016 cited in Mantello, 2016: 3). Documents leaked in 2010 showed evidence of the systematic surveillance of Muslim individuals, including the collection of personal bank data, domestic movements and friendship networks. Due to the relative novelty of the internet, such cases – which lie in a moral and legal grey zone suspended between priorities of civil liberties and national security – form precedents which set standards for what is deemed acceptable and unacceptable practice in the digital realm. An internal investigation into the case found the actions of the Metropolitan Police Department in Tokyo to be justified means of protecting national security, despite accusations of unlawful prejudicial profiling (ibid.). Another example is the $250 million-a-year NSA decryption program ‘Bullrun’, which works to decipher encrypted internet traffic for surveillance purposes (Ball et al., 2013; Moore & Rid, 2016: 7). In addition to breaching data security guarantees for internet users, ‘Bullrun’ involves the systematic insertion of ‘backdoors’, or deliberate security vulnerabilities, into popular digital communication services in order to facilitate state surveillance (Ball et al., 2013). Made possible through the significant investment of government funds, Bullrun is evidence that state power is likely upheld in the digital realm through access to state-of-the-art development programs and funding, contesting the aforementioned utopian idea of a power-balanced, democratic internet.
Such cases also serve to highlight a double standard in approaches to digital crime. Returning to our zemiological definition of digital crime, such acts of state surveillance – which undeniably involve large-scale public deception and infringements on the right to privacy under Article 12 of the Universal Declaration of Human Rights (United Nations, 1948) – should be considered digital crimes. Therein lies the importance of extending the definition of digital crime beyond rogue individual actors or exclusively criminal enterprises into the political arena. The ‘folk-devilling’ (Levi, 2009: 51) of the lone hacker archetype may distract from the very real harm caused by white-collar and state-sponsored digital crime, which is arguably just as socially damaging. Large-scale security breaches undermine collective trust in the internet as a democratic and securely functioning space (Levi, 2009: 50; Moore & Rid, 2016: 32; Shull, 2014: 2), which could have significantly destabilising effects. In order to avoid shifting into a quasi-Orwellian regime fed by ‘informational capital’ (Mantello, 2016: 1), clear and unambiguous legislation on specific technological developments and on uses of existing technology must be developed which applies to both state and subject. If this does not occur, the ideals of liberal democracy are likely to be undermined in the name of arbitrary state interests.
The increasing use of digital surveillance by both governments and corporations, sometimes in collaboration when interests converge (Mantello, 2016: 2), illustrates one aspect of the emergence of Massumi’s ‘ontopower’ (2015: 40) as a new tool under Foucauldian systems of ‘governmentality’ (Foucault, 2008 cited in Massumi, 2015: 22). ‘Ontopower’ is an adaptation of Foucault’s concept of biopower, which involves the governing of a state’s subjects through the direction of thought, habit and behaviour (Massumi, 2015: 24). Highly compatible with the more covert forms of governance exercised in neoliberal societies, ‘ontopower’ rests on an ethic of pre-emption that emerged as a strategy in the war on terror post-9/11 and functions on a spectrum from pre-emptive war – as in the US invasion of Iraq – to covert surveillance (ibid.: 53). Through the exercise of ‘ontopower’ in the absence of comprehensive legal frameworks around digital practices, which have yet to be solidified, what are essentially state-sponsored acts of digital crime are justified under the banner of protective action in the interests of national security. This legitimises the double standard in the use of technology in digital crime. Both surveillance without reasonable justification – whose very function depends on the ignorance of the surveilled – and the utilisation of informational capital to covertly direct and condition the behaviour of internet users (Sunstein, 2016) should be considered digital criminal acts, in that they violate the universal human rights to privacy and liberty (United Nations, 1948).
Perhaps what is important is that every state align its domestic laws on digital technology with its constitutional foundations, to avoid slipping into practices which break with the level of state intervention accepted by its citizens. If we conceptualise technologies like AI and cryptography as potentially dangerous and weaponisable, their regulation should be modelled on that of weapons. By this logic, regulation of the development and use of new technologies should be stricter in the United Kingdom than in the United States, where the constitutional focus on ‘liberty’ is reflected in relaxed gun laws, for example. On the other hand, allowing for varying levels of domestic regulation of digital technology in the absence of universal guidelines could create international imbalances in power in the case of digital warfare. This is reminiscent of the nuclear arms race of the Cold War. It is possible that, in order to avoid vulnerability, mutual espionage and investment in the development of digital ‘weaponry’ like Bullrun could result in a kind of digital arms race, creating a new normal for state surveillance and for the use of digital interventions in social and political spheres. It stands to reason that more international co-operation is required to set out clear, universal boundaries around acceptable uses of digital technology, as well as around what constitutes digital crime on micro and macro levels.
Ideological and ethical issues in defining digital crime are exacerbated by practical issues around policing and enforcement. Limitations in extraterritorial prosecution are a significant hindering factor in combating digital crimes such as fraud, malware attacks and the illicit drug trade. The borderless and centralised nature of the internet, coupled with the mainstream adoption of digital technologies such as cloud-based data storage by individuals, governments and corporations, simultaneously creates vulnerabilities in the security of said data, opening it up to theft and espionage (Bellasio et al., 2020: 5). Although it is unlikely that an amateur hacker could infiltrate the private servers of a secure corporation like the South African Standard Bank, for example, the virtual co-presence of offenders and victims created by the centralised nature of the internet enabled North Korean state-sponsored hackers to do just that in 2016 (Caesar, 2021).
Indeed, in view of the historically unstable and disjointed diplomatic relations between North Korea and most Western states, including the US and EU member states (Wertz, 2017), its government arguably has far less to lose from engaging in at times overt cybercriminal activity. Enabled by Chinese and Russian infrastructures, the DPRK consistently utilises hacking as part of sophisticated hybrid criminal strategies, alongside social engineering and impersonation, for purposes such as credit card fraud and the theft of foreign military information, in order to fund its arms programme and gather intelligence on foreign affairs (Caesar, 2021). Universities such as Kim Chaek University in Pyongyang offer programmes specifically designed to nurture new generations of hackers for government employment. Government-sponsored hackers may sometimes even embed identifying digital ‘tags’ in malicious code to build a reputation for their proficiency (ibid.), similar to traders on darknet markets (Yip et al., 2013: 520). The North Korean regime has also worked with organised crime groups such as the Yakuza in Japan to circumvent the restrictions of its isolationist ethic, making use of remote communication and encryption technologies to conceal the origin of orders and commit acts of fraud in territories far beyond its physical access (Caesar, 2021). If the perpetrator of a digital crime can be located in a state 8,000 miles away from their victim, policing is complicated significantly. Ideally, international co-operation between the victim’s and the perpetrator’s home states would occur – implicit in this, however, is the alignment of interests between both states, the availability of legal frameworks to address digital crime in both states, and the presence of a diplomatic, co-operative relationship between them (Shull, 2014: 13).
Such pre-requisites are further complicated by the interplay of licit and illicit technologies and practices in digital crime. One example is the facilitation of the international black-market drug trade by the use of legitimate platforms on the open web. The public availability of browsers such as Tor has facilitated the emergence of networks of hidden services which constitute the ‘darknet’, often associated with digital crime such as the trade in illicit goods (Moore & Rid, 2016: 15). Today, the most frequently used secure browsers, such as Tor, offer an anonymised experience by employing multi-layered encryption, or ‘onion routing’ (ibid.: 16). Indeed, Moore & Rid’s crawl of 205,000 hidden sites concluded that ‘the most common uses for websites on Tor hidden services are criminal’ (2016: 21), with illicit drugs and financial fraud being the most commonly traded goods and services, usually facilitated through Bitcoin transactions (ibid.: 22). Although digital crime appears to be disproportionately rooted in the dark web, enterprises such as the trade in illicit drugs or the dissemination of extremist propaganda (ibid.: 21) rely in part on branching out to more public arenas on the open web, such as Vimeo, Twitter and Facebook, to recruit customers (Feng, 2020). This suggests that issues around the regulation of technology to combat digital crime are more complex than questions around the regulation of cryptography alone, and involve the interplay of a variety of licit and illicit infrastructures which facilitate and uphold it.
The involvement of Chinese producers in the trafficking of the opioid fentanyl into the US is one such example. According to a report released by the Drug Enforcement Administration (DEA), China remains the main source of fentanyl trafficked into the US, as well as the primary source of all fentanyl and fentanyl-related substances shipped through international mail systems (2020: 1). In May 2019, China passed landmark laws on the production and distribution of fentanyl and fentanyl precursors under pressure from the US government (Feng, 2020; DEA, 2020: 1); before this point, however, there was little to no regulation of such substances. Online fentanyl vendors based in China often used disclaimers to state that their end of the transaction was permitted under Chinese law and that the legal responsibility lay in the hands of the buyer, whilst others operated under the name of registered corporations like Gaosheng Biotechnology Co. Ltd rather than trading as individual vendors (ibid.). The latter strategy is reminiscent of the smokescreen effect of white-collar crime committed through large corporations, which complicates the attribution of crimes to individual actors and significantly impedes policing (Levi, 2009: 53). Glasbeek’s analysis of corporations as fundamentally conducive to the concealment of ‘disproportionate ownership’ of power – and, in this case, of guilt in the eyes of the law – through the attribution of acts performed in the name of the company to its ‘brand’ as a kind of autonomous entity, concisely illustrates this process (Glasbeek, 2007 cited in Levi, 2009: 53). In a similar vein, this can be extended to some forms of state-sponsored digital crime which are committed in the name of national security, in accordance with a political ‘brand’ of neoliberalism.
This case study also illustrates the need for measures which go beyond domestic laws when combating digital crime. In view of the changes in China’s fentanyl laws, the chains of production and distribution have adapted. Production is now frequently offshored to Mexico in collaboration with cartels, mediated by encrypted digital communication (Feng, 2020). Secure shipping is facilitated through private couriers like DHL and FedEx, and products are continuously advertised on both open web and deep web sites, including Twitter and Alibaba (ibid.). These also facilitate contact-building between cybercriminals (see fig. 1), and their use for digital criminal purposes persists despite China’s (somewhat vague) resolution to control websites advertising fentanyl more strictly (DEA, 2020: 3). The vast array of stakeholders and intermediaries, combined with the hybrid digital and physical transaction chains involved in the international fentanyl trade, is a prime example of the wide range of logistical, legal and political impediments standing in the way of the forensic investigation and eventual prosecution of digitally mediated crime.
The prosecution of a perpetrator of a digital crime based in China by a law enforcement agency based in the US, for example, is incredibly complicated. Even in view of a clear violation of established US laws, the prosecution would still need to gather evidence attributing the crime to the perpetrator, may require a warrant for investigation, and cannot realistically carry out punitive measures within another jurisdiction without the voluntary co-operation of the offender’s home state (Shull, 2014: 8). In the case of a digital or digitally mediated crime like the trafficking of fentanyl, informational evidence may be limited due to transactions taking place on hidden services or using a cryptocurrency like Bitcoin. This, once again, cycles back to issues around cryptography. Additionally, both the victim’s and the perpetrator’s home states must have converging interests in jurisdictional co-operation, as under international law states are not required to co-operate in attempts at extraterritorial prosecution if that process risks endangering their sovereignty or national security (ibid.: 14). This is especially relevant in cases of state-sponsored digital crime committed in the name of national security, as the offending state is effectively given immunity against co-operation.
Beyond state and subject
Perhaps the only way to effectively police digital crime is through the co-operation of multiple stakeholders in investigations. As evidenced above, digital crime often involves a bricolage of licit and illicit services and middlemen, especially in large-scale international operations. Without the co-operation of internet service providers (Donahue, 2019 cited in Feng, 2020), cross-border law enforcement agencies and other stakeholders, policing digital crime is only going to become more difficult as criminal strategies evolve and adapt. In any case, it is unlikely that the third condition for crime under routine activity theory, ‘the absence of a capable guardian against the offense’ (Felson, 2000: 208), will ever be remedied in digital spheres, barring the development of some kind of centralised internet police force. In view of the trend towards the convergence of corporate, national and individual interests as a marker of neoliberalism (Peters, 2012), in a world where the revenues of multinational conglomerates surpass the GDPs of nation-states (Moynihan & Belinchón, 2018), interdisciplinary collaboration is essential – especially in the regulation of technology.
One such example is the pharmaceutical company Pfizer’s commissioning of an international drug taskforce, the Global Security Program, which collaborated with the National Crime Agency and the Metropolitan Police in their attempts to disrupt the counterfeit Xanax trade (Bryant, 2018). Counterfeit pills imitating the Pfizer-produced anti-anxiety medication Xanax were introduced and monopolised by a vendor named ‘Hulked Benzo Boss’ (HBB) on the dark web marketplace AlphaBay around 2015, flooding the UK drug market with alprazolam powder-based pills, the precursors for which had been imported from the aforementioned Chinese pharmaceutical labs (ibid.). Although it is reasonable to assume that Pfizer and the UK law enforcement agencies had varying motivations for collaborating in the investigation, the ultimate goal of disrupting the trade in counterfeit drugs was one and the same. In the case of HBB, arrests were made but no individuals were charged – presumably due to a lack of evidence and complications in attribution – although production ceased after the account was taken over by another vendor (Bryant, 2018).
Due to the anonymous nature of the hidden services on which digital crime often occurs, law enforcement’s attempts at monitoring and disrupting digital crime (EUROPOL, 2017) as an exercise of ‘ontopower’ (Mantello, 2016: 3) may be misplaced. The structure of hidden services allows for the interchangeability of anonymous vendors, the flexible movement of vendors to other marketplaces in the event of a disruption, and the rapid adaptation of operations in the face of prosecution. Perhaps restricting the development and use of certain kinds of technology which are more conducive to digital crime than others – such as hidden services – is a more efficient way of preventing digital crime (Moore & Rid, 2016: 9). Rather than throwing the baby out with the bathwater and undermining core digital infrastructures like public-key encryption through mass covert surveillance, governments should tackle digital crime at the root: by working with internet service providers to limit or prohibit potentially dangerous, non-essential structures like hidden services, and by intercepting supply chains in illicit goods before they reach digital marketplaces.
The increasingly convergent interests of governments, corporations and individuals in the neoliberal context, coupled with the centralised nature of the internet, have rendered the politics of technology in digital crime a highly complex issue. In the absence of universal laws governing the digital realm, digital crime is no longer the preserve of rogue actors or organised crime groups. Technologies like cryptography, AI and ML have been adopted by a vast range of stakeholders for both licit and illicit purposes, including at state level. This renders digital crime an inherently political issue, calling for the development of clear, universal laws on the use of technology in the digital realm in view of the wide-reaching social and political implications of its misuse. This is especially true given the increasing use of digital technology for international and domestic state espionage, which should not lie beyond the law: it threatens the integrity of the internet as a democratic space and undermines the universal human rights to liberty and privacy. Rather than reserving technologies like cryptography – which can be used for both protection and harm – for the few and not the many, focus should be shifted towards regulating those digital structures which are especially conducive to digital crime, such as hidden services. Additional attention should be paid to more closely regulating the development of new technologies such as AI, ML and autonomous systems (Bellasio et al., 2020: 2). Developing technology comes with responsibility (Moore & Rid, 2016: 7), and weaponisable lines of code are much harder to destroy than guns.
Ball, J. et al. (2013) Revealed: how US and UK spy agencies defeat internet privacy and security. The Guardian. 6 September. [online]. Available from: http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security (Accessed 28 April 2021).
Bellasio, J. et al. (2020) The Future of Cybercrime in Light of Technology Developments.
Bryant, B. (2018) Fake Xanax: the UK’s largest ever dark net drugs bust. BBC News. 10 March. [online]. Available from: https://www.bbc.co.uk/news/resources/idt-sh/fake_xanax_the_uks_largest_ever_dark_net_drugs_bust (Accessed 24 April 2021).
Caesar, E. (2021) The Incredible Rise of North Korea’s Hacking Army. The New Yorker [online]. Available from: https://www.newyorker.com/magazine/2021/04/26/the-incredible-rise-of-north-koreas-hacking-army (Accessed 24 April 2021).
DEA (2020) Fentanyl Flow to the United States. p.1–4. [online]. Available from: https://www.dea.gov/sites/default/files/2020-03/DEA_GOV_DIR-008-20%20Fentanyl%20Flow%20in%20the%20United%20States_0.pdf.
EUROPOL (2017) Crime in the age of technology – Europol’s Serious and Organised Crime Threat Assessment 2017. p.1–60. [online]. Available from: https://www.europol.europa.eu/newsroom/news/crime-in-age-of-technology-%E2%80%93-europol%E2%80%99s-serious-and-organised-crime-threat-assessment-2017 (Accessed 29 April 2021).
Felson, M. (2000) ‘The Routine Activity Approach as a General Crime Theory’, in Of Crime & Criminality: The Use of Theory in Everyday Life. [Online]. Thousand Oaks: SAGE Publications, Inc. pp. 205–216. [online]. Available from: http://sk.sagepub.com/books/of-crime-and-criminality/n11.xml (Accessed 28 April 2021).
Feng, E. (2020) ‘We Are Shipping To The U.S.’: Inside China’s Online Synthetic Drug Networks. NPR.org. 17 November. [online]. Available from: https://www.npr.org/2020/11/17/916890880/we-are-shipping-to-the-u-s-china-s-fentanyl-sellers-find-new-routes-to-drug-user (Accessed 24 April 2021).
Levi, M. (2009) Suite Revenge? The Shaping of Folk Devils and Moral Panics about White-Collar Crimes. British Journal of Criminology. [Online] 49 (1), 48–67.
Mantello, P. (2016) The machine that ate bad people: The ontopolitics of the precrime assemblage. Big Data & Society. [Online] 3 (2). [online]. Available from: http://journals.sagepub.com/doi/epub/10.1177/2053951716682538 (Accessed 21 April 2021).
Massumi, B. (2015) ‘National Enterprise Emergency: Steps toward an Ecology of Powers’, in Ontopower: War, Powers, and the State of Perception. Duke University Press. [online]. Available from: http://read.dukeupress.edu/books/book/134/OntopowerWar-Powers-and-the-State-of-Perception (Accessed 29 April 2021).
Moore, D. & Rid, T. (2016) Cryptopolitik and the Darknet. Survival. [Online] 58 (1), 7–38.
Moynihan, F. & Belinchón, Q. (2018) 25 giant companies that are bigger than entire countries. Business Insider [online]. Available from: https://www.businessinsider.com/25-giant-companies-that-earn-more-than-entire-countries-2018-7 (Accessed 29 April 2021).
Peters, J. (2012) Neoliberal convergence in North America and Western Europe: Fiscal austerity, privatization, and public sector reform. Review of International Political Economy. [Online] 19 (2), 208–235.
Powell, A. et al. (2018) ‘At the Crossroad’, in Digital Criminology: Crime and Justice in Digital Society. 1st edition [Online]. Milton: Routledge. pp. 17–34.
Rai, A. S. (2019) ‘The Affect of Jugaad: “Frugal Innovation” and the Workaround Ecologies of Postcolonial Practice’, in Jugaad Time. [Online]. Duke University Press. pp. 45–67. [online]. Available from: http://read.dukeupress.edu/books/book/2541/chapter/1291591/The-Affect-of-JugaadFrugal-Innovation-and-the (Accessed 22 April 2021).
SCCJR (2015) What is crime? [online]. Available from: http://www.sccjr.ac.uk/wp-content/uploads/2015/10/SCCJR-What-is-crime.pdf.
Shull, A. (2014) Global Cybercrime: The Interplay of Politics and Law. Centre for International Governance Innovation. 1–18.
Sunstein, C. R. (2016) Fifty Shades of Manipulation. Journal of Marketing Behavior. [Online] 1 (3–4), 213–244.
United Nations (2021) Treaty on the Non-Proliferation of Nuclear Weapons (NPT) – UNODA [online]. Available from: https://www.un.org/disarmament/wmd/nuclear/npt/ (Accessed 28 April 2021).
United Nations (1948) Universal Declaration of Human Rights. [online]. Available from: https://www.un.org/en/about-us/universal-declaration-of-human-rights.
Wertz, D. (2017) DPRK Diplomatic Relations [online]. Available from: https://www.ncnk.org/resources/briefing-papers/all-briefing-papers/dprk-diplomatic-relations (Accessed 29 April 2021).
Yip, M. et al. (2013) Trust among cybercriminals? Carding forums, uncertainty and implications for policing. Policing and Society. [Online] 23 (4), 516–539.