In the humanities, a reflexive and often critical use of vocabulary has long been one of the field's key features. With the rise of digital media and information systems, new technical forms of processing content have emerged. From the perspective of the humanities, this has given rise to divergent interests and methodologies: On the one hand, developing digital tools for humanistic research allows one to look at content differently (e.g. distant reading). On the other hand, looking critically at technology in use allows one to deliver a cultural explanation of our by now ubiquitous digital techniques, as demonstrated by software studies. The Critical Keywords for the Digital Humanities seek to complement these new approaches to the digital. To do so, the project takes up one of the humanities' traditional approaches anew: the use of words.


a

Augmented intelligence is an umbrella term used in media theory, the cognitive sciences, the neurosciences, the philosophy of mind and political philosophy to cover the complex relation between human intelligence, on one side, and mnemo-techniques and computational machines, on the other, both understood as an expansion (including in a social and political sense) of human cognitive faculties.

Definition

Main Synonyms

Synonyms include: augmented human intellect, machine augmented intelligence and intelligence amplification. Relatedly, extended mind, extended cognition, externalism, distributed cognition and the social brain are concepts from the cognitive sciences and philosophy of mind that do not necessarily involve technology (Clark and Chalmers 1998). Augmented reality, virtual reality and teleoperation can, moreover, be framed as forms of augmented intelligence for their novel influence on cognition. Brain-computer interfaces directly record electromagnetic impulses from neural substrates to control, for instance, external devices like a robotic arm, and raise issues of the exo-self and the exo-body. Augmented intelligence must be distinguished from artificial intelligence, which implies a complete autonomy of machine intelligence from human intelligence despite sharing a logical and technological ground; and from swarm intelligence, which describes decentralized and spontaneous forms of organization in animals, humans and algorithmic bots (Beni and Wang 1989). In the field of neuropharmacology, nootropics refers to drugs that improve mental functions such as memory, motivation and attention. Like artificial intelligence, the idea of augmented intelligence has bred (especially in science fiction) a family of visionary terms that it is not possible to summarize here (cf. Wikipedia 2014).

History: Engelbart and Bootstrapping

The relation between cognitive faculties, labour and computation was already present in the pioneering work of Charles Babbage (1832). The "division of mental labour" was the managerial notion at the basis of his famous calculating engines, which aimed to improve industrial production. The concept of augmented intelligence itself was first introduced in cybernetics by Engelbart (1962), who was influenced by the works of Bush (1945) on the Memex, Ashby (1956) on intelligence amplification, Licklider (1960) on man-computer symbiosis, and Ramo (1961) on intellectronics, among others. In his seminal paper, Augmenting Human Intellect: A Conceptual Framework, Engelbart (1962) provides a definition of augmented intelligence specifically oriented to problem solving,

By 'augmenting human intellect' we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by 'complex situations' we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. (Engelbart 1962, p. 1)

Engelbart was a pioneer of graphical user interfaces and network technologies, inventor of the computer mouse and founder of the Augmentation Research Center at the Stanford Research Institute. The methodology called bootstrapping was the guiding principle of his research laboratory and aimed to establish a recursive improvement in the interaction between human intelligence and computer design (the term has also been adopted by artificial intelligence to describe a hypothetical system that learns how to improve itself recursively, that is, by observing itself learning, although no such system has yet been successfully designed). Engelbart’s vision was eminently political and progressive: any form of augmentation of individual intelligence would immediately result in an augmentation of the collective and political intelligence of humankind. Although Engelbart does not account for possible risks, social frictions and cognitive traumas due to the introduction of augmented intelligence technologies, his combined technological and political definition can be useful for drawing a conceptual map of augmented intelligence.

Conceptual Axes of Augmentation

The conceptual field of augmented intelligence can be illustrated along two main axes: a technological axis (which describes the degree of complexity from traditional mnemo-techniques to the most sophisticated knowledge machines) and a political axis (which describes the scale of intellectual augmentation from the individual to the social dimension).

  • Technological axis. Any technique of external memory (such as the alphabet or numbers) has always represented an extension of human cognition. McLuhan (1962) underlined how innovations such as the printing press and electronic media caused a further expansion of our senses on a global scale, affecting cognitive organization and, therefore, social organization. After McLuhan, it is possible to periodize the history of augmented intelligence into four epistemic periods according to the medium of cognitive augmentation: sign (alphabet, numbers, symbolic forms), information (radio, TV, communication networks), algorithm (data mining, computer modelling, simulation and forecasting), and artificial intelligence (expert systems and self-learning agents: as a hypothetical limit). The interaction between the human mind and techniques of augmentation is recursive (as Engelbart would register), as humankind has always continued improving them. Turing’s essay 'Computing Machinery and Intelligence' (1950) strongly advocated the investigation of a machine that could one day "think by itself". The hypothesis of artificial intelligence is, trivially, that of an autonomy of the machine from the human and, more interestingly, that of a new kind of alliance between the two forms of cognition. Across the history of recent media culture, the use of the expressions information explosion and knowledge explosion is notable for denoting a paradigm break towards new forms of civilization. Without providing scientific evidence, Vinge (1993) has defined the 'technological singularity' as the hypothetical moment at which intelligent machines will show emergent properties and produce an "intelligence explosion" (Chalmers 2010) beyond human control.
  • Political axis. The political consequences of augmented intelligence are manifested as soon as a large scale of interaction and computation is reached. Indeed, Engelbart’s project was conceived to help problem solving on a global scale of complexity: the collective scale cannot be severed from any definition of augmented intelligence. A vast tradition of thought has already underlined the collective intellect as an autonomous agent not necessarily embodied in technological apparatuses (Wolfe 2010). See the notions of: general intellect (Marx), noosphere (Teilhard de Chardin), extra-cortical organisation (Vygotsky), world brain (Wells), cultural capital (Bourdieu), mass intellectuality (Virno), collective intelligence (Levy). Across this tradition, "the autonomy of the general intellect" (Virno 1996) has been proposed by autonomist Marxism as the novel political composition emerging out of post-Fordism. The project of such a political singularity is a perfect mirror image of the apolitical model of the Technological Singularity.

The combination (and antagonism) of the technological and political axes describes a trajectory towards augmented social intelligence. Under this definition, however, neither political conflicts, on one side, nor computational aporias, on the other, are resolved. Deleuze and Guattari’s notion of the machinic (1972, 1980), inspired also by Simondon’s idea of mechanology (1958), was a similar attempt to describe, in conjunction, the technological and political composition of society without falling into either fatalism or utopianism. Among the notions of augmentation, moreover, it is worth recalling their concepts of machinic surplus value and code surplus value (Deleuze and Guattari 1972).

Criticism and Limits

Any optimistic endorsement of new technologies for human augmentation regularly encounters different forms of criticism. 'Artificial intelligence winters', for instance, are periods of reduced funding and declining institutional interest, due in part to public scepticism. A first example of popular criticism of augmented intelligence in the modern age would be the Venetian editor Hieronimo Squarciafico. After working for years with Aldus Manutius’s pioneering press, he complained in an aphorism that an "abundance of books makes men less studious" (Lowry 1979: 31). The essay 'The Question Concerning Technology' by Heidegger (1954) is considered a main reference for technological critique in continental philosophy. Heidegger influenced a specific tradition of techno-scepticism: Stiegler (2010), for instance, has developed the idea that any external mnemo-technique produces a general grammatization and, therefore, a proletarization of the collective mind, with a consequent loss of knowledge and savoir-vivre. Berardi (2009) has repeatedly remarked upon the de-erotization of the collective body produced by digital technologies and the regime of contemporary semio-capitalism. The physical and temporal limits of human cognition when interacting with a pervasive mediascape are generally addressed by the debate on the attention economy (Davenport and Beck 2001). The discipline of neuropedagogy has been proposed as a response to widespread techniques of cognitive enhancement and a pervasive mediascape (Metzinger 2009). Specifically dedicated to the impact of the internet on the quality of reading, learning and memory, the controversial essay 'Is Google Making Us Stupid?' by Carr (2008) is also relevant in this context. The thesis of the nefarious effect of digital technologies on the human brain has been contested by neuroscientists. Carr’s political analysis, interestingly, aligns him with the continental philosophers just mentioned: "what Taylor did for the work of the hand, Google is doing for the work of the mind" (Carr 2008). A more consistent and less fatalistic critique of the relation between digital technologies and human knowledge addresses the primacy of sensation and embodiment (Hansen 2006) and the role of the 'nonconscious' in distributed cognition (Hayles 2014). In neomaterialist philosophy, it is feminism, in particular, that has underlined how the extended or augmented mind is always embodied and situated (Braidotti, Grosz, Haraway).

Augmented Futures

Along the lineage of French techno-vitalism, yet turned into a neo-reactionary vision, Land (2011) has propagated the idea of capitalism itself as a form of alien and autonomous intelligence. The recent 'Manifesto for an Accelerationist Politics' (Srnicek and Williams 2013) has responded to this fatalist scenario by proposing to challenge such a level of complexity and abstraction: the idea is to repurpose capitalism’s infrastructures of computation (usually controlled by corporations and oligopolies) to augment collective political intelligence. The Cybersyn project, sponsored by the Chilean government in 1971 to manage the national economy via a central mainframe computer, is usually mentioned as a first rudimentary example of such a revolutionary cybernetics (Dyer-Witheford 2013). More recently, Negarestani (2014) has advocated a functional linearity between the philosophy of reason, the political project of social intelligence and the design of the next computational machine, in which the logical distinction between augmented intelligence and artificial intelligence would no longer make any sense. The definition of augmented intelligence, however, will always be bound to an empirical ground that is useful for sounding out the consistency of any political or technological dream to come.

References

Ashby, W.R. (1956): An Introduction to Cybernetics, London: Chapman & Hall.

Babbage, C. (1832): 'On the Division of Mental Labour', in: On the Economy of Machinery and Manufactures, London: Charles Knight.

Beni, G. and Wang, J. (1989): 'Swarm Intelligence in Cellular Robotic Systems', Proceedings of the NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, 26-30 June 1989.

Berardi, F. (2009): The Soul at Work: From Alienation to Autonomy, trans. Francesca Cadel and Giuseppina Mecchia, Los Angeles: Semiotext(e).

Bush, V. (1945): 'As We May Think', in: The Atlantic, July.

Carr, N. (2008): 'Is Google Making Us Stupid?', in: The Atlantic, July.

Chalmers, D. (2010): 'The Singularity: A Philosophical Analysis', in: Journal of Consciousness Studies 17 (9-10), pp. 7-65.

Clark, A. and Chalmers, D. (1998): 'The Extended Mind', in: Analysis 58 (1), pp. 7-19.

Davenport, T. H. and Beck, J. C. (2001): The Attention Economy: Understanding the New Currency of Business, Boston: Harvard Business School Press.

Dyer-Witheford, N. (2013): 'Red Plenty Platforms', in: Culture Machine 14, available at: http://www.culturemachine.net/index.php/cm/issue/view/25

Engelbart, D. (1962): Augmenting Human Intellect: A Conceptual Framework, Summary Report AFOSR-3233, Stanford Research Institute, Menlo Park, California.

Hansen, M. B. N. (2006): Bodies in Code: Interfaces with New Media, New York: Routledge.

Hayles, N. K. (2014): 'Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness', in: New Literary History 45, pp. 199-220.

Heidegger, M. (1954): 'The Question Concerning Technology', in: The Question Concerning Technology and Other Essays, trans. William Lovitt, New York: Harper and Row, 1977.

Land, N. (2011): Fanged Noumena: Collected Writings 1987-2007, Falmouth, UK: Urbanomic.

Licklider, J.C.R. (1960): 'Man-Computer Symbiosis', in: IRE Transactions on Human Factors in Electronics 1, pp. 4-11.

McLuhan, M. (1962): The Gutenberg Galaxy: The Making of Typographic Man, Toronto: University of Toronto Press.

Metzinger, T. (2009): The Ego Tunnel: The Science of the Mind and the Myth of the Self, New York: Basic Books.

Negarestani, R. (2014): 'The Revolution is Back', paper presented at the Incredible Machines conference, 7-8 March, Vancouver, Canada.

Ramo, S. (1961): 'The Scientific Extension of the Human Intellect', in: Computers and Automation, February.

Simondon, G. (1980 [1958]): On the Mode of Existence of Technical Objects, Ontario: University of Western Ontario.

Srnicek, N. and Williams, A. (2013): 'Manifesto for an Accelerationist Politics', in: Joshua Johnson (ed.): Dark Trajectories: Politics of the Outside, Miami: Name.

Turing, A. (1950): 'Computing Machinery and Intelligence', in: Mind 59, pp. 433-460.

Vinge, V. (1993): 'The Coming Technological Singularity: How to Survive in the Post-Human Era', in: Whole Earth Review, Winter.

Virno, P. (1996): 'Notes on the General Intellect', in: Saree Makdisi et al. (eds): Marxism beyond Marxism, New York: Routledge.

Wikipedia (2014): 'Group Mind (Science Fiction)', available at: http://en.wikipedia.org/wiki/Group_mind_(science_fiction)

Wolfe, C. (2010): 'From Spinoza to the Socialist Cortex: Steps Toward the Social Brain', in: Hauptman, D. and Neidich, W. (eds.): Cognitive Architecture: From Biopolitics to Noopolitics, Rotterdam: 010 Publishers.

c

Copyfight is a portmanteau word combining copyright and fight. It refers to the conflict between the holders and protectors of copyrights, trademarks, patents and related rights (e.g. broadcasting), and those who are associated with Creative Commons, the pro-piracy and peer-to-peer file sharing communities, as well as the movements for open access, open data and free software. The fight is over the right to copy, use, distribute and sell artistic, literary, cultural and academic research works and other materials, including pharmaceuticals.

Introduction

Not so very long ago, large-scale political protest seemed to be a thing of the past. It looked as if this form of political activism had more or less come to an end with the anti-capitalist globalisation movements of the pre-9/11 world; and if not then, certainly with the anti-war marches of 2003 and their failure to prevent the subsequent invasion of Iraq. Yet recent years have seen the Occupy, Arab Spring, anti-austerity and student protests usher in a new age of mass mobilisation. As highlighted by the recent critique of UK mainstream politics by film star and comedian Russell Brand (http://www.youtube.com/watch?v=3YR4CseY9pk), we now live in an era characterised by a widespread rejection of the principle of political representation and individual fame, and by the development of non-hierarchical forms of political organisation and co-ordination instead. Similar characteristics are a feature of many of the related struggles around intellectual property, internet piracy and copyright, as the activities of the international hacktivist networks Anonymous and LulzSec bear witness.

Victories and Defeats

Without doubt some battles with the current Euro-American intellectual property regime have been won. The service blackout coordinated by Wikipedia and others in January 2012 resulted in the SOPA (Stop Online Piracy Act) and PIPA (Protect IP Act) bills being postponed in the US. The 'Academic Spring' of the same year, in which over 12,000 academics signed a public petition protesting against the business practices of the publisher Elsevier – reported to make €725 million annual profits on its journals alone – had a similar effect on the Research Works Act. Still, the overall victors in the copyright wars are the multinational conglomerates of the cultural industries who, with the backing of governments worldwide, continue to control the production, distribution and marketing of the majority of our knowledge and culture. Witness the federal charges brought by the US Department of Justice in July 2011 against the self-declared open access guerrilla Aaron Swartz for his large-scale unauthorised downloading of files from the JSTOR academic database. Swartz was a founder of the online activist group Demand Progress, which launched the campaign against SOPA and PIPA. He committed suicide in January 2013 before his case could come to trial.

From Open to Closed

In fact, for many political activists and theorists the situation over the right to copy is, if anything, getting worse. They cite as evidence a profound shift that is taking place in the digital world. It is a shift toward the closed, centralised systems of mobile media and the cloud, as represented by the non-configurable iDevices and single-purpose apps of Apple (at one point the most valuable company of all time in terms of market capitalisation, thanks to the launch of the iPhone 5); and away from the open, distributed networks and physical infrastructure of the Web, which allow users to understand how such digital products are made, and to continually copy, share, change, update, improve and re-imagine them. Coupled with the fast-emerging online media monopolies of a small number of powerful international corporations, including Amazon, Facebook and Google, it is a shift that has led some to predict the death of the 'open' Web.

The Question of the Commons

Radical theorists of media and culture are thus confronted by some key questions. How might we turn from IP laws and infrastructure designed for the benefit of 'the 1%' to find ways of openly sharing art, education, knowledge and culture, whilst at the same time ensuring creative workers are adequately and justly compensated for their labour? Is this primarily a cultural issue? Or does it require the development of new laws, new forms of political organisation, new economies – even new ways of organising post-industrial society?

As critical media theorist Felix Stalder writes with regard to Anonymous, many oppositional movements associated with struggles over copyright, intellectual property and internet piracy appear to find it extremely difficult to "engage in the institutional world in any other than destructive ways". Anonymous "cannot, and does not aim at, building alternative institutions", he emphasizes. Stalder does, however, see this informal grouping as being capable of "contributing to the forging of a common, oppositional horizon that could make it easier to coordinate future action" (Stalder 2012, p.7). This raises the further question of what possible forms the forging of such a common, oppositional horizon might take. It is not difficult to envisage any coordinated future copyfight action as endeavouring to include Creative Commons and the pro-piracy, peer-to-peer file sharing, open access, open data and free software movements, on the basis that they all offer a challenge of one kind or another to the current intellectual property regime. Yet how much do such movements and initiatives actually have in common? And how significant is it that they do not even share a common idea of the commons?

Creative Commons (CC)

Creative Commons is a non-profit organisation that offers a range of easy-to-use copyright licenses that authors and artists can choose from in order to grant others permission to share their work and use it creatively. Rather than the default copyright position of all rights reserved, CC licences range from some rights reserved to a public domain CC-0 licence that waives all rights. Creative Commons thus provides a means of protecting the rights of creators from the extremes of IP law, including the length to which copyright has been extended as a result of lobbying from companies such as Disney.
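Read schematically, the CC spectrum amounts to a small table of permissions. The sketch below, in Python, renders that spectrum as a data structure; the licence names and flags follow Creative Commons' published licence elements, though the tabular encoding itself is only an illustrative assumption, not anything Creative Commons ships.

```python
# The Creative Commons licence spectrum rendered as a permissions table.
# Flags per licence: (attribution required, share-alike required,
#                     commercial use allowed, derivative works allowed).
CC_SPECTRUM = {
    "CC0":         (False, False, True,  True),   # public domain waiver
    "CC BY":       (True,  False, True,  True),
    "CC BY-SA":    (True,  True,  True,  True),
    "CC BY-NC":    (True,  False, False, True),
    "CC BY-ND":    (True,  False, True,  False),
    "CC BY-NC-SA": (True,  True,  False, True),
    "CC BY-NC-ND": (True,  False, False, False),  # 'some rights reserved' at its strictest
}

def allows_derivatives(licence):
    """True if the licence permits making derivative works."""
    return CC_SPECTRUM[licence][3]

print(allows_derivatives("CC BY"))        # True: such works may be freely built upon
print(allows_derivatives("CC BY-NC-ND"))  # False: readable and sharable, but not modifiable
```

The point of the table is simply that 'some rights reserved' is not one position but a lattice of them, which is what makes the choice of licence an individual rather than a collective act.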

Open Access

Open access is concerned with making academic research openly available online. Many texts published on an open access basis are covered by a CC license that permits them to be openly read, copied and distributed, but not built upon, developed, altered and improved by others in the way that, for example, free and open source software is. A substantial number of open access initiatives have undergone a change in licensing policy in recent years, however. More and more have adopted a CC-BY licence that insists only on author attribution, thus giving others permission to copy and reuse texts, and to make derivative works from them. To a large extent this change has been motivated by a concern to grant users open access not merely to the research but also to the associated data. This includes the right to mine texts and data, since text and data mining can otherwise be blocked by permission barriers. Yet there are some open access advocates who view this shift in policy to CC-BY licensing as going too far. They argue that opening up access to research has to be the priority, and that any insistence that academics do so on a basis that allows others to modify their work as well will only succeed in alienating the majority of the research community from publishing open access in the first place (e.g. because support for the CC-BY licence risks being seen as giving a licence to plagiarise).
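To see why mining rights matter here, it helps to recall that text and data mining means programmatic analysis of a corpus rather than human reading. A minimal, hypothetical sketch in Python: the URL is a placeholder for an openly licensed plain-text article, and a real mining pipeline would check the machine-readable licence before fetching anything.

```python
import re
import urllib.request
from collections import Counter

# Hypothetical URL standing in for an openly licensed (e.g. CC-BY) article;
# a real pipeline would first check the machine-readable licence metadata.
ARTICLE_URL = "https://example.org/open-access-article.txt"

def term_frequencies(url, top_n=20):
    """Fetch a plain-text article and count its most frequent terms."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8", errors="replace")
    # Crude tokenisation: lowercase alphabetic words of three or more letters.
    words = re.findall(r"[a-z]{3,}", text.lower())
    return Counter(words).most_common(top_n)

if __name__ == "__main__":
    for word, count in term_frequencies(ARTICLE_URL):
        print(word, count)
```

Scaled across tens of thousands of articles, exactly this kind of automated reuse is what restrictive licences can block and what CC-BY permits without per-publisher negotiation.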

Critiques of Creative Commons

Both of these positions – for and against CC-BY – are held to be profoundly misinformed by many theorists in certain areas of critical media studies, software studies and cultural studies. Here the very notion of the commons in Creative Commons is placed under attack on the grounds that,

  • The concern of Creative Commons is with preserving the rights of copyright owners rather than granting them to users;
  • Creative Commons is extremely liberal and individualistic, offering authors a range of licences from which they can individually choose (even in the case of the public domain CC-0 licence that waives all rights) rather than promoting a collective agreement, policy or philosophy;
  • What Creative Commons actually offers is a reform of IP law, not a fundamental critique of it, or challenge to it (Cramer, 2006).

Indeed, Creative Commons is not advocating a common stock of non-privately owned works that everyone jointly manages, shares and is free to access and use – which is how the commons is frequently understood – at all. Instead, Creative Commons presumes that everything created by an author or artist is their property. If anything, Creative Commons is concerned with helping the law adapt to the new conditions created by digital culture by supporting a smarter, more open, flexible and pluralistic model of individual ownership.

In this respect, the emphasis of Creative Commons on the rights of copyright owners can be seen to function strategically, as it holds strong appeal to what cultural studies academic and intellectual Andrew Ross describes as "the thwarted class fraction of high-skilled and self-directed individuals in the creative and knowledge sectors whose entrepreneurial prospects are increasingly blocked by corporate monopolies" (Ross 2009, p.168). Proponents of this view of IP have thus been able to form a "coalition of experts with the legal access and resources" to mount a powerful campaign that frequently overshadows other, often more interesting and radical approaches (Ross 2009, p.161). This explains why it is with the likes of Lawrence Lessig, James Boyle and Cory Doctorow, and their reformist lobbying for better IP law (i.e. IP law that does not put business, competition and innovation at risk), rather than no IP law or a radically different IP law, that the term 'copyfight' is most closely associated. It also clarifies why CC licenses are so widely used in open access. The result, though, is that this aspect of the debate over 'free culture' risks being, in Ross’s words, "simply an elite copyfight between capital-owner monopolists and the labor aristocracy of the digitariat… struggling to preserve and extend their high-skill interests" (Ross 2009, p.169).

Free Software and Copyleft

Many in the Free Software community, including Richard Stallman, founder of the Free Software Foundation and inventor of the General Public License (GPL), the most common free software copyright licence, lobby for what is called copyleft. Like Creative Commons, this still entails a use of IP law – only one that is designed to serve the opposite ends to those to which such a license is usually put. Rather than supporting the ownership of private property, copyleft defends the freedom of everyone to copy, distribute, develop and improve software or any other work covered by such a licence. The only permission barrier is that which upholds this right by insisting all such copies and derivatives must be shared under the same terms and conditions, thereby ensuring that the freedom of everyone to do likewise continues into the future.
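In practice, copyleft attaches to a work through a per-file notice distributed alongside the licence text. Below is a sketch of the conventional GNU notice at the head of a Python source file; the program name and author are placeholders, and the SPDX identifier line is a common modern shorthand rather than part of the GPL's own instructions.

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# exampletool - a placeholder program, used here purely for illustration.
# Copyright (C) 2014 A. N. Author
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

def main():
    # The licence, not the code, carries the share-alike condition:
    # any distributed copy or derivative must remain under the GPL.
    print("This file may be copied and modified under the GPL.")

if __name__ == "__main__":
    main()
```

The 'same terms and conditions' requirement described above lives in the licence text itself; the header merely binds the file to it, which is also why copyleft still depends on copyright law to be enforceable.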

The free software community likes to position itself as a movement, and as being more politically engaged than those who argue for either Creative Commons or open source. Whereas the free software movement encourages co-operative working, Creative Commons is held to be quite individualistic: not just in the way a particular CC license is applied, but in how CC licensed works tend to be used too. Similarly, in their concern to determine the best way to develop and promote a product in an open manner – and not alienate the world of corporate capital by using terms such as 'free' that could all too easily be ascribed to a radical left approach to property – those associated with open source are perceived from within the movement for free software as following the logic of the market too pragmatically. Yet many activists and theorists question just how politically left copyleft actually is. Free software is not necessarily anti-commercial or anti-capitalist. As long as the copies are covered by the same license, there is nothing to prevent a corporation from selling copies of 'free' software it has developed from the original source code. (To provide an example, while making the source code available for free, thus respecting the terms of the copyleft licence, it can sell the executable application a user needs to actually run the software, and which they may have neither the time nor the skills to produce for themselves.) There is also the problem that, in contrast to Creative Commons, the philosophy of free software cannot easily be applied to other areas of culture to create a larger commons – for the simple reason that this philosophy does not scale. As software developer and founder of the Telekommunisten Collective Dmytri Kleiner points out,

Companies for whom software is a necessary capital input are happy to support [the production and development of] free software, because doing so is most often more beneficial to them than either paying for proprietary software, or developing their own systems from scratch. They make their profit from the goods and services which they produce, not from the software they employ in their production.

Cultural Works, especially popular ones, such as book, movies, music, etc, are not usually producer's goods. In a capitalism economy these are generally Consumer's goods, and thus the publishers of such works must capture profit on their circulation. Thus capital will not finance free culture in the same way it has financed free software. (Kleiner, 2012)

Copyfarleft

Rather than simply preventing access to cultural works and source code from being restricted, those on the political left tend to be more concerned with developing a free, common culture jointly managed and shared by all, and with promoting the equal and just distribution of wealth among the creative workers who produce it. To this end Kleiner insists copyleft must be transformed into copyfarleft. Under the latter, creative workers themselves own the means of production, and only those uses of their works that are not based in the commons are prevented. This last point is especially important with regard to how creative workers can be compensated for their labour in the context of a free common culture. It means that creative workers can “earn remuneration by applying their own labour to mutual property”, but those who exploit wage labour and private property in production cannot (Kleiner, 2010, p.42).

Anti-Copyright and Pro-Piracy

For copyfarleft to be able to generate such a worker-controlled economy, however, and thus itself succeed in having an impact on anything even approaching a significant scale, it would need to be part of a much larger emergent economy of this kind, one capable of taking in not just the production of art, culture and software, but material items such as food and housing too. Since the prospect of such an economy emerging any time soon looks unlikely, Kleiner acknowledges that complete anti-copyright, as a radical gesture that "refuses pragmatic compromises and seeks to abolish intellectual property in its entirety", has significant appeal for many (Kleiner, 2010, p.42). This is particularly true of those in the peer-to-peer file and text sharing communities, where distinctions between producer and consumer are difficult to maintain. Some anti-intellectual property advocates in the pro-piracy movement even argue against copyright and the use of licenses altogether, regarding them as remnants from a previous age, and as inappropriate for an era in which cultural works can be copied and shared at very little expense, without depriving the original 'owners' of their versions, due to the non-rivalrous nature of digital objects. Instead of Creative Commons, they argue for a 'grey commons', the adjective 'grey' being used to signal the legal ambiguity of much of the content of this commons (i.e. that it is not a black and white issue). The grey commons thus connects to the 'pirate' desire to avoid the formation of the kind of organisational centres and hierarchies of authority and leadership that would inevitably ensue if anyone (e.g. platform managers, administrators, curators) were to be placed in a position requiring them to make decisions about what the commons should and should not include, be it pirated music, Hollywood films, videos of beheadings or child pornography.

The difficulty with the anti-copyright stance, in turn, is that it may only be really effective from a position outside the capitalist legal system – or after its demise. Certainly, when it comes to academic publishing, gestures of this kind risk playing into the hands of the neoliberal philosophy that states universities should carry out the basic research the private sector does not have the time, money or inclination to conduct for itself, while granting the private sector easy access to that research and the associated data so it can be commercially applied and exploited. (This is another explanation for the shift in licensing policy within the open access movement toward CC-BY: it is designed to serve the neoliberal policy of enabling that which is publicly available to be enclosed by private interests.)

Conclusion

All of the problems, blindspots and critiques discussed above offer a rather neat illustration of a paradox the Italian theorist Roberto Esposito locates in the idea of the common. The paradox concerns the way in which,

The “common” is defined exactly through its most obvious antonym: what is common is that which unites the ethnic, territorial, and spiritual property of every one of its members. They have in common what is most properly their own; they are the owners of what is common to them all.

(Esposito 2010, p.3)

To be sure, the commons is a place where the interests regarding the right to copy of a large number of diverse groups, movements, organisations, initiatives and constituencies – including artists, activists, academics, educators and programmers – come together but also exist in a state of tension and conflict, and are often in fact demonstrably incompatible and incommensurable. This is not to suggest a co-ordinated community of artists, activists and programmers is impossible to achieve. As the French philosopher Jean-Luc Nancy writes: “Being with, being together, and even being ‘united’ are precisely not a matter of being ‘one’. Of communities that are at one with themselves, there are only dead ones” (Nancy 2003, p.285). It is merely to acknowledge that a certain amount of conflict is what makes both a community and the common possible; and that if we do want to forge a common, oppositional horizon that could make it easier to coordinate any future action over the right to copy, then we need to think the nature of community, of being together and holding something in common, differently.

References

Cramer, F. (2006): ‘The Creative Common Misunderstanding’, <nettime-l> mailing list, 9 October, available at: http://www.mail-archive.com/rohrpost@mikrolisten.de/msg00798.html; republished in Cramer, F. (2013): Anti-Media: Ephemera on Speculative Arts. Amsterdam: Institute of Network Cultures, pp. 82-90.

Esposito, R. (2010): Communitas: The Origin and Destiny of Community, Stanford, California: Stanford University Press.

Kleiner, D. (2010): The Telekommunist Manifesto, Amsterdam: Institute of Network Cultures, available at: telekommunisten.net/the-telekommunist-manifesto/.

Kleiner, D. (2012): ‘OSW: Open Source Writing in the Network’, empyre mailing list, 13 January, available at: www.mail-archive.com/empyre@lists.cofa.unsw.edu.au/msg03634.html.

Nancy, J.-L. (2003): A Finite Thinking, Stanford, California: Stanford University Press.

Ross, A. (2009): Nice Work If You Can Get It: Life and Labor in Precarious Times, New York and London: New York University Press.

Stalder, F. (2012): ‘Enter the Swarm: Anonymous and the Global Protest Movements’, Neural 42, Summer, pp. 6-9.

d

Digital native is a term attributed to Marc Prensky (2001), who coined it to make a generational distinction between users of digital technologies. Prensky proposed that the generations born after the 1980s, who did not transition from analogue to digital technologies but grew up with digital technologies, develop new forms of interaction, engagement and participation which are significantly different from those of the preceding generations. He suggested that the older users of digital technologies might be further identified as ‘digital settlers’ – people who transitioned from the analogue to the digital and were responsible for building the first digital infrastructure, thus naturalizing them to the new environments – and ‘digital immigrants’, people who migrated to these structures but are unable to take to them like ‘fish do to water’ (Salkowitz 2008, p. 100).

Prensky’s nomenclature has been joined by competing descriptions, emerging around the same time, that account for the rise and scope of digital technologies. Don Tapscott’s ‘Grown Up Digital’ (2008), John Palfrey and Urs Gasser’s ‘Born Digital’ (2008) and Michael Stanat’s ‘China’s Generation Y’ (2006) have also been used as related and interchangeable ways of referring to this new set of users and their relationship with digital technologies. All of these concepts have been used to emphasise particular trends and activities in different fields. In the world of academia, digital natives have been at the centre of discourse around new modes of learning, distributed and collaborative forms of knowledge production, and the form and function of education systems. Within the field of development, digital natives have been the beneficiaries of new development practices enshrined in Information and Communication Technologies for Development (ICT4D) missions like the United Nations Youth Development Programme and the Millennium Development Goals (Isaacs 2011, p. 21). In social and political movements, many new kinds of protest, ranging from human rights interventions to resistance against authoritarian regimes, have been attributed to digital natives who have organized using viral and memetic means of communication and connection. The culture industries have seen new forms of production and circuits of distribution, founded in user-generated sites, that have challenged the corporate cultural production centres of older media forms, along with crowd-sourced and crowd-funded means of sustainability. The discipline and practice of law have struggled to cope with burgeoning new practices online that challenge existing forms of governance and social interaction through new media phenomena like content sharing, sexting, identity management and cyber-bullying.

Blindspots

However, digital natives is not a term that is easily accepted or used. It propagates certain problematic approaches to understanding identity, citizenship and social change in information societies. A few blind-spots persist across this discourse, which is otherwise often divided and contradictory.

Change without Difference

The digital native identity imagines that all users of digital technologies are universally the same. By locating them in the digital universe as having similar origin points and practices, it suggests that they have a singular identity in an unchanging world. The digital native thus remains a category or identity that is understood only in its difference, so that it can be integrated into a world vision that precedes it. The difference is invoked only to emphasise the need for continuity from one generation to another; and thus there is a call to ‘rehabilitate’ this new generation into earlier moulds of being.

The Social Construction of Loss

Each new technology has been accompanied by a nostalgia industry that immediately valorises a pre-technological, innocent world that was simpler, better, fairer and easier to live in. The digital native identity is premised on multiple losses – loss of childhood, loss of innocence, loss of control, loss of privacy, etc. – which together imply the loss of political participation and social transformation; the loss of youth as the political capital of our digital futures.

Trivialising the Realm of the Cultural

The existing approaches bind digital natives in narratives of simultaneous celebration and fear, reducing their engagement with the digital to content production and consumption. They paint digital natives as being without agency, not reading their everyday practices as politically and socially significant. The location of digital natives almost entirely within this narrow definition of culture further allows for the control and ownership of digital technologies and infrastructure by the centralised power structures of the state and market.

Overemphasis on Access and Usage

The defining characteristics of digital natives are access and usage. Both are treated as politically neutral categories, with a simple prewired response to all critiques of digital inequity: more access, more usage, better societies [See keyword entry “open access”]. However, this often fails to account for different intersections of inequity and tries to contain the question of the digital within the digital. Little effort is put into recognising the different intersections of race, gender, sexuality, ethnicity and class as the larger axes of injustice that get amplified, exacerbated and re-formed by digital technologies in new ways.

Critique

While there is a recognition that the digital turn has changed how young people in different information and network societies are developing new modes of thinking about themselves, how they connect with each other, and their relationship with their immediate environments, there is also a growing critique of the term digital natives from different academic and practice-based disciplines.

Post-Colonial Studies

Critics (Sandford 2006; Philip et al 2010; Nakamura & Chow-White 2012) have pointed to the colonial overtones of the native-immigrant binary and objected to perpetuating such dichotomies within globalising networks. They suggest that using the very vocabulary of exploitation and dominance that marked the colonial enterprise forces similar hierarchies of privilege and exclusion onto the digital world, without problematising the terms. The use of digital native also inverts the original logic of the colonial enterprise: the initial builders of cyberspace are removed from it as outsiders, allowing new users to claim dominion over its virtual geographies. Moreover, terms like digital natives are used more as a celebration, without delving into the problem of using words like native and immigrant, thereby depoliticizing them and naturalizing them in our everyday attitudes towards race, ethnicity, cultural imperialism, etc.

Media Theory

Media practitioners and theorists (Livingstone 2009; Nayar 2010; Thomas 2011; Jenkins et al. 2013; Boyd 2014) have argued that this sort of technologically determinist separation of users is erroneous and often produces skewed accounts of reality. Media practices are collaborative, inter-generational and convergent, with abilities and capacities developed in one media form carrying over into others. The idea that digital natives engage with forms that are completely new denies the historicity of media practices and also produces forced interruptions in an intertwined and entangled media landscape.

Sociology and Anthropology

Within sociology and anthropology, where new modes of fieldwork, ethnography and participant observation in the virtual world are quickly emerging as new sites of inquiry, there is a growing concern about treating digital natives’ online practices as separate from their physical environments and locations (Orton-Johnson 2013; Davidson 2010; Losh 2014). There is a growing recognition that mere descriptions of the online as separated from the offline (sometimes coded in the RL-IVR binary) can limit our understanding of human and social connections.

Social and Political Movements

Interventions and research in the field of social and political movements, especially those focusing on youth, technology and change, have pointed out that the emphasis on digital natives as new kinds of users, removed from their history or context, promotes the idea that digital technologies are completely disconnected from older media and technology practices (Joyce 2010; Shah 2011; Franklin 2013; Biekart 2013). This increases the gap between traditional forms of activism and new media activism, as if they have no causes and ambitions in common, when the change is often one of tactics and strategies rather than of the core values of change towards equity and justice.

Management and Innovation Studies

Much of the attention on digital natives comes infused with a politics of hope – an expectation that the young, in their interactions with digital technologies, will produce innovative ideas and solutions to some of the ‘wicked’ problems of our times (Lessig 2008; Olopade 2014; Shirky 2009; Brown et al 2010). However, this expectation leads to greater investment in tools for social change than in social change itself. It becomes a way by which massive infrastructure is built without any proportional impact on society, and the development of tools and platforms, innovative processes and the management of resources becomes the end-point of the endeavour.

New Approaches

Given this persistent critique of the name and the processes surrounding digital natives, there have been recent calls to drop the name and find a less problematic replacement for it. However, a solution that only changes the nomenclature is not a fruitful shift, because it fixes the surface rather than the systemic problems inherent in the term (Shah 2010). Within the new wave of digital cultures, there are a few approaches and interventions which have offered new ways of engaging with digital natives as a concept, a framework and an identity.

Conceptually, it has been suggested that instead of looking for superstar narratives, which are generally the exception rather than the rule, we need to provide an account of an Everyday Digital Native – not somebody who is defined through age or usage, but through a transformation of their lives by the presence of digital technologies. This is to recognise that the spread of the digital, geographically and vertically, is uneven. Different people, based on their location and context, have different relationships with technologies. But one does not have to be a power-user to be a digital native. The digital native, as a strategic and discontinuous identity, can be imagined as one of the many engagements that intersect with everyday practices and negotiations of survival and living, rather than as an external catalyst of sorts.

Experiments that have tried to involve digital natives in defining their technologically mediated identities have also been fruitful in laying out the tensions of talking about the techno-social and its politics. They allow us to think of the digital as learned behaviour, and also lay bare the processes by which the digital produces our default conditions of life and living at the level of governance, policy, social regulation, control and so on, thus tying digital native politics to crucial and fundamental questions of our times.

Approaches to digital natives research and intervention have shown that older forms of representational engagement, which often replay power hierarchies by either making digital natives into exemplars or by treating them as juvenile, are not very effective. New modes of intervention have been proposed in which the role of the researcher and interventionist is to create infrastructures of knowledge and change production, rather than to be the producer and analyst of these knowledges.

Digital natives is not a prescriptive identity. It is a lens which allows us to look at the role of technology in our everyday lives and at how varied engagements produce different kinds of social, political, cultural, economic and discursive transformations. Instead of beginning with a definition of what a digital native is, and then going in search of people who would fit that definition, or finding people who need to be rehabilitated into that description, we might want to begin by questioning the form, function and role of this name and what it enables (or does not enable) in our usage. Digital natives can be used with irony, to break the gentrification of politics that is implied in its usage, or it can be used as an empty signifier, inviting local, specific and historically rooted meanings which provide a new way of engaging with youth, technology and visions of change.

References

Biekart, K. and Fowler, A. (2013): ‘Transforming Activism 2010+: Exploring Ways and Means’, Development and Change 44(3): pp. 527-546.

Boyd, D. (2014): It’s Complicated: the Social Lives of Networked Teens, New Haven: Yale University Press.

Brown, V. et al (eds.) (2010): Tackling Wicked Problems Through Transdisciplinary Imagination, London: EarthScan.

Davidson, C. (2010): The Future of Thinking: Learning Institutions in a Digital Age, Cambridge, MA: MIT Press.

Franklin, M. I. (2013): Digital Dilemmas: Power, Resistance, and the Internet, New York: Oxford University Press.

Isaacs, S. (2011): ‘Shift Happens: A Digital Native Perspective on Policy Practice’ in N. Shah and F. Jansen (eds.): Digital AlterNatives with a Cause?: Book 1 – To Be, Bangalore: Hivos Knowledge Programme, pp. 21-32.

Jenkins, H. et al. (2013): Spreadable Media: Creating Value and Meaning in a Networked Culture, New York: New York University Press.

Joyce, M. (ed.) (2010): Digital Activism Decoded: The New Mechanics of Change, New York: iDebate Press.

Lessig, L. (2008): Remix: Making Art and Commerce Thrive in the Hybrid Economy, London: The Penguin Press.

Livingstone, S. (2009): Children and the Internet, Cambridge: Polity Press.

Losh, E. (2014): The War on Learning: Gaining Ground in the Digital University, Cambridge, MA: MIT Press.

Nakamura, L. and Chow-White, P. (2012): ‘Introduction - Race and Digital Technology: Code, the Color Line and the Information Society’ in: Nakamura, L. and Chow-White, P. (eds.): Race After the Internet, New York: Routledge, pp. 1-18.

Nayar, P. (2010): An Introduction to New Media and Cybercultures, Oxford: Blackwell Publishing.

Olopade, D. (2014): The Bright Continent: Breaking Rules and Making Change in Modern Africa, New York: Houghton Mifflin Harcourt.

Orton-Johnson, K. and Prior, N. (eds.) (2013): Digital Sociology: Critical Perspectives, New York: Palgrave Macmillan.

Palfrey, J. and Gasser, U. (2008): Born Digital, New York: Basic Books.

Philip, K. et al. (2010): ‘Postcolonial Computing: A Tactical Survey’, Science, Technology & Human Values, 37(1), pp. 1-27.

Prensky, M. (2001): ‘Digital Natives, Digital Immigrants’ in On the Horizon 9(5), pp. 1–6.

Salkowitz, R. (2008): Generation Blend: Managing Across the Technology Age Gap, New York: John Wiley & Sons.

Sandford, R. (2006): ‘Digital post-colonialism’ in Flux (14 December), available at flux.futurelab.org.uk/2006/12/14/digital-post-colonialism/ [accessed 8th August 2014]

Shah, N. (2010): ‘Knowing a Name: Methodologies and Challenges’ in N. Shah, F. Jansen and J. Stremmelaar (eds.): Digital Natives with a Cause? Position Papers, The Hague: Hivos Knowledge Programme.

Shah, N. and Jansen, F. (eds.) (2011): Digital AlterNatives with a Cause?, Bangalore & Den Haag: Hivos Knowledge Programme.

Shirky, C. (2009): Here Comes Everybody: How Change Happens When People Come Together, New York: Penguin Books.

Stanat, M. (2006): China’s Generation Y: Understanding the Future Leaders of the World’s Next Superpower, New Jersey: Homa and Sekey Books.

Tapscott, D. (2008): Grown Up Digital: How the Net Generation is Changing Your World, New York: McGraw-Hill.

Thomas, M. (ed.) (2011): Deconstructing Digital Natives: Young People, Technology, and the New Literacies, New York: Routledge.

h

Hybrid space designates a single, unified concept of space that is characterised by the simultaneous presence (co-presence) of different, heterogeneous, and at times contradictory (operational) spatial logics. The concept proceeds from the assumption that different spatial logics are superimposed in any ‘lived’ space. Physical structures, whether natural or constructed, are superimposed with processual flows that operate according to a different and mostly incommensurable spatial logic. Such flows can be flows of communication, trade, goods and service provision, transportation and data, and even face-to-face exchanges and public gatherings of different kinds. While the concept of hybrid space is thus not necessarily defined by the superimposition of technological infrastructures onto the ‘natural’ or built environment, the density and spatial heterogeneity of space are greatly increased by electronic communication media, especially by the increasing presence of electronic signals, carrier waves and wireless communication and data networks in lived environments.

Definition

The concept of hybrid space was first proposed by architects Frans Vogelaar and Elisabeth Sikiaridi in their text ‘Idensifying™ Translocalities’ (Vogelaar & Sikiaridi, 1999). In their essay, Vogelaar and Sikiaridi include a citation from Flusser’s essay 'The City as Wave-Trough in the Image-Flood' that provides a remarkably prophetic image of the variable densities of contemporary hybridised urban spaces, permeated by wireless media and information flows, and of the 'webs of interhuman relations' that unfold in them,

The new image of humanity as a knotting together of relationships doesn't go down easily, and neither does the image of the city that rests upon this anthropology. It looks roughly like this: We must imagine a net of relations among human beings, an 'intersubjective field of relations.' The threads of this net should be seen as channels through which information like representations, feelings, intentions, or knowledge flows. The threads knot themselves together provisionally and develop into what we call human subjects. The totality of the threads constitutes the concrete lifeworld, and the knots therein are abstract extrapolations […] It can be imagined roughly in this way: the relations among human beings are spun of differing densities on different places on the net. The denser they are, the more concrete they are. These dense places develop into wave-troughs in the field […] The wave-troughs exert an attraction on the surrounding field (including the gravitational field); ever more intersubjective relationships are drawn into them […] Such wave-troughs are called cities.

(Flusser 2005: 325-326)

The knotting of dense webs of interhuman relations identified by Flusser is intensified exponentially by the proliferation of networked and especially mobile wireless media. Adrian Mackenzie, for instance, in his book Wirelessness (Mackenzie, 2010, p. 213), speaks of ‘overflows’ (between spatialities, things, bodies, divisions of private and public) that redraw boundaries and reorganise the time/space of action. Crucially, though, Flusser recognizes that these dense webs of interhuman relationships constitute the concrete lifeworld of contemporary urban subjects, implying that both urban space and subjective experience are transformed simultaneously by these topological 'densifications.'

Grasping the dynamics of hybrid space as a topological media concept, however, poses distinct challenges. The notion of a superimposition of different spatial logics, infrastructures and processual flows within the same space promotes a sedentary image of space, perhaps most vividly expressed by wireless spectrum allocation charts that show how different frequency bands are distributed and allocated for specific functions within each legal jurisdiction (usually the territory of a country). Most striking about these charts are their sheer complexity and the relatively narrow bands that are used for Wi-Fi networking and mobile phone communication: much of the electromagnetic layer of hybrid space is publicly invisible, unknown and inaccessible, reserved for professional elites and specialist uses. Despite these issues, however, a number of key features can be identified as defining characteristics of the concept and the phenomena to which it refers:

  • Discontinuity. Hybrid space is discontinuous. The presence/absence of signals and communication modalities means that the interaction between physical and communicative spaces is continuously subject to interruption and reorganization.
  • Variable Densities. Hybrid space is characterised by ever-varying densities – from place to place and moment to moment. This second aspect is highly significant as it introduces an important temporal dimension to the concept. The density of hybrid space increases with the number of actual, possible and potential interactions of different flows across time. In the case of wireless communication and data transmission, the density increases as more signals become available and the data transmission rates of signal carrier waves are increased (in part by advances in router and network client technology – for instance, faster and more versatile mobile phones, tablets, portable computers and other network-enabled devices). The introduction of new wireless network protocols and signals most obviously increases the density of the environment, as evidenced by the successive introduction of GSM networks, GPRS, 3G and 4G telecommunication networks and Wi-Fi networking protocols with ever higher data-transmission rates.
  • Volatility. However, densities can also decrease, sometimes instantly and intermittently. This is revealed most clearly in the context of many popular protests, where mobile communication networks are quite often either overloaded or simply switched off by authorities under siege. In many of the public square protests in 2011, images, reports and videos were often recorded with mobile devices (smart phones and consumer photo/video cameras) and then uploaded via 'wired' internet network connections. In other cases, satellite relays of data traffic were used to transmit messages from a particular locale where mobile phone and data networks had been switched off.

Critical Intersections

The distinctness of hybrid space as a concept can be clarified further by comparison with a number of related influential concepts and ideas, especially as a mode of reflecting critically on the intersections of digital, networked and mobile technologies and space from the perspective of the humanities and social sciences.

Places and Flows

The Catalan sociologist and urban theorist Manuel Castells proposes a reading of the relationship between physical and networked communication spaces that is entirely contrary to the operational heterogeneity yet spatial unity of hybrid space, characterized instead as a spatial dichotomy of 'places' and 'flows'. In his book The Rise of the Network Society (Castells, 1996), the first part of his trilogy on the information age, he describes the rise of flexible social network connections which resulted from economic and social transformations in late industrial societies and were strengthened by the introduction and wide application of new technology, primarily communication and information technology. Castells postulates that the network has become the dominant form in a new type of society that he calls the network society. He treats the influence of the network form as a social organisation in physical and social space, and establishes a new kind of dichotomy. According to Castells, there are two opposing types of spatial logic, the logic of material places and locations (the 'space of place') and the logic of intangible flows of information, communication, services and capital (the 'space of flows'). The particularly striking thing about Castells' theory is the strict separation between the two kinds of spatial logic. Whereas the space of places and locations is clearly localised and associated with local history, tradition and memory, Castells sees the space of flows as essentially ahistorical, location-free and continuous. The latter mainly because it moves across every time zone and so, in some sense, is not only location-free but also timeless.

Castells believes there is a fundamental asymmetry between the two kinds of space: while the vast majority of the world's inhabitants live, dwell and work in the space of places and locations, the dominant economic, political, social and ultimately also cultural functions are increasingly shifting to the space of flows, where they make possible location-free ahistorical network connections, international trends, power complexes and capital movements. Only a very small part of the world's population is represented in the latter, which takes decisions about the organisation and use of new location-free spatial connections. But increasingly the decisions made within such self-contained systems determine the living conditions in those places and locations where the vast majority of the world's population attempts to survive and where their knowledge, experience and memory are localised. Castells feels it is not surprising, then, that political, social and cultural bridges need to be deliberately built between the two spatial dynamics, to avoid society's collapse into insoluble schizophrenia. Such a strict division between physical, embodied and built environments and the processual spaces of flows of electronic and digital communications networks is curiously repeated in the imaginaries of 'virtual realities' that abound in popular culture. The Matrix trilogy of the Wachowski brothers is perhaps the most famous and most fully articulated case in point. The narrative suggests a disembodied 'neural interactive simulation' that immerses mute and motionless humans in the illusion of a 'real' live experience that is entirely technologically projected and fictionalised.

Hybrid space rejects both these versions of a spatial dichotomy. Instead, the concept of hybrid space indicates the simultaneous presence of such different, heterogeneous and potentially incommensurable spatial logics in every possible 'place', and the ever varying densities and volatile interactions that result from this spatial variability. Given that human life still unfolds on a planetary scale, or in immediate proximity to the planet (orbital space stations) – well within reach of earthly transmissions – there is hardly a place conceivable where no wireless signal transmission is present or can be intercepted. Short wave radio signals, for instance, literally have a planetary reach. This is what prompted the Stalinist regime of Enver Hoxha in Albania to install one of the most powerful short wave radio transmitters in the capital, 'Radio Tirana', assuring the global presence of the regime's radio propaganda, while simultaneously setting up transmitter fields across the entire country that blocked out all other transmission frequencies.

Hybrid space proceeds from the assumption that all spaces are constituted through assemblages of embodied, lived and electronically mediated operational elements that despite their heterogeneity unfold in the same space – it emphatically rejects spatial dichotomies.

Locative Media

Hybrid space has become more tangible in the experience of everyday life with the deployment of location-sensitive wearable devices. While this began in technological research labs, moved into logistics (specifically ship navigation using GPS), and then went through a cycle of locative arts experiments, it has now firmly arrived with ordinary citizens and consumers. GPS positioning and other triangulation techniques have been built into virtually all recent generations of smart phones and no longer require a separate device. These locative systems are present in a wide variety of apps offering geo-specific media and communications services, thus accentuating the hybridity of contemporary lived spaces.

Artistic engagements with these forms of remote technological presence in the landscape (invisible and otherwise insensible coordinate systems), in particular, tend to highlight the discrepancies between technologically transmitted data and direct experience. A good example is the series of technologically enhanced urban walks organized by the artist collective Constant, Routes + Routines (Constant, 2006), which explored mistakes, missing roads and uncharted terrains in popular digital mapping systems in residential districts of Brussels. Such artistic enquiries emphasize the discontinuous nature of hybrid space and question the ideology of seamless technological surfaces that often dominates mainstream digital cultures.

Internet of Things (IoT)

The much maligned concept of the Internet of Things (IoT) designates a condition where virtually all objects become network enabled or connected using massively distributed networked sensor technologies, tagged objects that can be read by these sensors, and networked databases to process, store, and respond to these sensor readings. In particular, the use of low-cost RFID tags – little chips that either transmit or respond to radio signals, and which are envisioned eventually to replace bar codes on ordinary daily products – will create dense networks of exchange primarily between 'non-humans' that densify hybrid space further. Against the organised innocence of the smart fridge that knows not only how much milk it holds but also its expiry dates (and where you bought it last time), dystopian visions of an ever more finely distributed matrix of control over every daily object we interact with, and with that over every aspect of our quotidian practices of everyday life, are spelled out (for a critical overview, see Kranenburg, 2007). Different terms have circulated – IoT, ubiquitous computing, pervasive and ambient computing – but they all signify the ever closer proximity between physical objects and digital electronic networks in a densifying hybrid space.
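To make this architecture concrete, the following is a minimal sketch, in Python, of the basic IoT loop of tagged objects, fixed readers and a networked store of readings; every name in it (TagRead, ReadStore, the 'fridge-shelf' reader) is a hypothetical illustration rather than any vendor's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TagRead:
    tag_id: str       # identifier stored on the RFID tag
    reader_id: str    # which fixed reader (shelf, fridge, dock door) saw it
    timestamp: datetime

class ReadStore:
    """Toy stand-in for the networked database that collects sensor readings."""

    def __init__(self) -> None:
        self.reads: list[TagRead] = []

    def record(self, tag_id: str, reader_id: str) -> None:
        """Log one read event: an object, a place, and a time."""
        self.reads.append(TagRead(tag_id, reader_id, datetime.now(timezone.utc)))

    def last_seen(self, tag_id: str) -> TagRead | None:
        """The canonical IoT query: where was this object last observed?"""
        matches = [r for r in self.reads if r.tag_id == tag_id]
        return max(matches, key=lambda r: r.timestamp) if matches else None

store = ReadStore()
store.record("milk-0042", "fridge-shelf-2")  # a tagged carton passes a reader
print(store.last_seen("milk-0042"))
```

Even this toy version makes the political stakes legible: every read couples an object, a place and a time, which is precisely the data that densifies hybrid space and feeds the matrix of control described above.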

It might be almost a cliché to state that 'objects have agency too,' but this is clearly evident from the massive proliferation of so-called 'smart devices' and the technological trends converging in IoT. However, in so far as the densification of the webs of interhuman (and inter-object) relations is intensified further by IoT, while simultaneously the boundaries between physical and networked (media-) objects are blurred even more, the constitution of the concrete lifeworld that Flusser associates with these webs of relationships becomes more ambiguous, as human subjects and technological objects become entangled ever further. The concept of hybrid space heightens our sensitivity to these evolving changes and allows for a more diversified analysis of these entanglements, rather than attempts to keep seeing them as distinct.

Futures: The Right to Disconnect

With the ever closer proximity between physical bodies, objects and digital electronic networks through the pervasiveness of sensor technologies and distance-readable tags, the question of how it is still conceivable to switch off or to disconnect from hybrid space becomes all the more pertinent. At the level of individual agency, this is indeed increasingly unthinkable. Even disregarding the threats of pervasive computing and sensing technologies, or the IoT conception, too many of our daily interactions simply depend on networked systems that are deployed throughout private and public lived spaces. If any opportunity for disconnectivity is still to survive in the future, it will require a deliberate intervention at the collective level. It requires, in short, a politics of disconnectivity. For the 2006 issue 'Hybrid Space' of Open, the journal for art and the public domain, Howard Rheingold and I jointly wrote a consideration of the 'right to disconnect', and how this could be envisioned in an ever more tightly 'connected' world. We proposed a 'mindful' approach to disconnection, but also emphasized that a personal practice of mindful or selective disconnectivity should be backed up by a legally enshrined right to disconnect. Given what the ongoing disclosures through WikiLeaks and the NSA Files dossier have revealed, this seems at best a very distant promise.

References

Castells, Manuel (1996): The Rise of the Network Society, Blackwell Publishers, Oxford.

Constant, VZW (2006): Routes + Routines, locative art project, Brussels, www.constantvzw.org/site/-Routes-Routines,5-.html

Flusser, Vilém (2005): 'The City as Wave-Trough in the Image-Flood', translated by Phil Gochenour, in: Critical Inquiry 31, pp. 320-328.

Kranenburg, Rob van (2007): The Internet of Things. A Critique of Ambient Technology and the All-Seeing Network of RFID, Network Notebooks 02, Institute of Network Cultures, Amsterdam, http://networkcultures.org/wpmu/portal/publications/network-notebooks/the-internet-of-things/

Mackenzie, Adrian (2010): Wirelessness: Radical Empiricism in Network Cultures, Cambridge, MA: MIT Press.

Seijdel, Jorinde & Kluitenberg, Eric (eds.) (2006): 'Hybrid Space: How Wireless Media Mobilize Public Space', OPEN: Journal for Art and the Public Domain, Amsterdam: SKOR / NAi Publishers, http://www.skor.nl/eng/publications/item/open-11-hybrid-space-how-wireless-media-are-mobilizing-public-space?single=1

Vogelaar, Frans & Sikiaridi, Elisabeth (1999): 'Idensifying™ Translocalities', in: Logbook NRW.NL (catalogue), De Balie, Amsterdam.

k

Knowledge is processed information. As specified by Oxford Dictionaries, it is traditionally understood to be gained by “experience or by education” (cf. Oxford Dictionaries 2013). Today, however, it is also provided by digital technology and described as “information held on a computer system” (ibid.). Despite not being a digital term in the strict sense, this new meaning fundamentally affects our concept of knowledge, which provides the reason for addressing this word in the Critical Keywords for the Digital Humanities.

Digitalization has shifted our perspective on knowledge as well as its historic relation to technology. In our industrial past, the main focus was on the discussion of technology as a form of human knowledge, while in our digital present the technical constitution of knowledge itself has become increasingly emphasized. This new perspective has been triggered by the widespread use of digital devices and the internet. Ever since a critical mass of information and books became available in digital form, knowledge has not only been learned or studied, but also searched for. As digitalization makes knowledge addressable in a new form, the order of knowledge is transformed from categorization to a new messiness, answered by the rise of metadata as a tool to find a way through this disorder (cf. Foucault 1966, Weinberger 2008). Accordingly, performing a search – a technical scan of information to retrieve specific knowledge or to evaluate a knowledge field – has become a knowledge technique as regularly used as making notes. This being the case, new skills, indicated by the term digital literacy, have become as necessary as reading and writing.

Critique

The transformation of gathering and processing knowledge results in a number of debates concerning both its quantity and quality. Interestingly, in most of these debates, the availability of more information is not addressed as a benefit but perceived as a problem.

Information Overload (Brain Capacity Argument)

Historically, the term for describing a loss of orientation on account of too much information was popularized in the 1970s by Alvin Toffler’s bestselling book Future Shock. The issue of overload, however, can be traced back further to the concept of ‘sensory overload.’ In the early 1900s, this term was used to describe the constant stimuli to our nervous system in an industrialized urban environment, as in, for instance, Simmel’s ‘The Metropolis and Mental Life’ (1903). It was reapplied when the digitalization of communication allowed for easier publishing, distribution and research of knowledge, and thus created a new abundance of information. Before digitalization, remote access to detailed information about an issue or topic was only available for the past. With the internet, an archive of the present came into being. This new condition is described as ‘information overload,’ a field perceived as unstructured, and it calls for an attention economy. A loss of quality is also often associated with this sense of an unstructured and, therefore, chaotic quantity of information, as the gatekeepers who once verified and controlled information – among them journalists, doctors, and academics – are now bypassed, so that rumours, lies, and opinions prevail. The central problem in this argument is that it applies the rules of old media to the digital public while not recognising the new rules that take their place (see below). This leads to the reception of digitally processed knowledge as a deranged information environment, in which knowledge loses its traditional link to power. From an economic perspective, this seems clearly to be the case: there is an abundance of information and the supply exceeds the demand. Knowledge, however, cannot necessarily be assessed according to an economic logic. There can be too little knowledge, but never too much.

Making Us Stupid (Neural Brain Connections Argument)

Compared to the fear of information overload, arguments surrounding cognitive plasticity are based on a much more precise observation of how we handle digital knowledge. From this perspective, the new quantity of instantaneous and ubiquitous information does not result in a reduced quality of knowledge, but in a diminished capacity for knowledge reception. This argument gained popularity in 2008 with Nicholas Carr’s internationally famous essay ‘Is Google Making Us Stupid?’, which was followed by the book The Shallows (2010). Both subtly raise concerns on two levels: a) constantly incoming information has changed information reception into a new mode of scanning, and this is chipping away at our only recently acquired skill of concentration, which in the past enabled us to stay focused; b) this further affects the process of thought in a more material way: by not using our brains for concentration, we are remapping our neuronal connections. Hunting or being hunted by information, the cultivated literate mind regresses to a primordial state of distraction. Interestingly, this argument can be traced back to Plato, who directed it against the technique of writing as it “will produce forgetfulness in the souls of those who have learned it, through lack of practice using their memory” (Phaedrus 275a-b). Plato also anticipated today’s concerns that new technology makes knowledge shallow as it allows us to acquire “the appearance of wisdom instead of wisdom itself” (ibid.).

This neuronal argument, however, has been challenged in two ways. From a postcolonial perspective, evaluating the reading of a book as the deepest form of understanding and a superior way of knowing is questionable. Against favouring a certain way of mediating knowledge, N. Katherine Hayles (2012) argues that print literacy, compared to digital literacy, simply provides a different, but not a deeper, perception. Historically, methods of perceiving texts change over time in different information environments, from close reading to symptomatic reading, and finally to the hyper or cursory reading necessary for digital literacy. From this perspective, digital literacy does not diminish, but on the contrary enhances, our brain capacities: studies have shown that compared to reading a single text, hypertext reading increases the demands of decision making, whereby a large working memory and prior knowledge clearly give an advantage (DeStefano et al. 2007). In short, the internet is challenging our brains.

Not Knowledge (Hermeneutic Argument)

Hermeneutic arguments usually make a distinction between information and knowledge, with the former being digitally processed and the latter requiring human skills. Historically, two concepts resonate in this approach. One of them is to be found in the differentiation of text/interpretation (biblical exegesis) or later distinctions between sign/meaning (modern hermeneutics), which are transformed in the digital context into information/knowledge. In this case, information is treated as content while knowledge is defined as understanding this content. Oxford Dictionaries online, for example, defines information as ‘facts,’ while knowledge is information or facts that were further processed “through experience or education (…) or practical understanding of a subject.” Clearly, this widespread claim emphasizes the difference between the terms, whereby ‘knowledge’ is addressed as superior and of more value. The other historic line of the argument that digitalized information cannot be addressed as knowledge is found in the definition of knowledge (episteme) as something other than technology (techne). Following Plato, who described episteme as a theoretical compound of techne, it has often been argued that science aims at enlarging our knowledge, while technology is simply applied science. However, Ryle (1949, pp. 14-47), amongst others, has shown that the knowledge to do something (knowing how) does not derive from the knowledge about something (knowing that). From there it has been argued that technology is its own form of knowledge. Recently, the human geographer Nigel Thrift even described technology and knowledge as a set “being almost impossible to separate” (2004, p. 186). Constructing technology as the ‘other’ could also be challenged from a feminist perspective (Haraway 1991, Braidotti 2013). Then the following point comes into view: if the evolution of technology is related to knowledge, and knowledge is power, it surely affects the social body (Leroi-Gourhan 1964).

Conclusion

In general, a more productive attempt to define knowledge needs to take into account today’s profound media change, as it “is simply not practicable to differentiate the question of what a fact is from the question of what the facts are” (Düttmann 2007, pp. 76-77). With new media forms, new and different relations between knowledge and truths emerge. The old logic of the printing press, which served us very well for so long, cannot necessarily be applied to today’s perspective on knowledge. Facts in the age of the printing press claimed truthfulness by being durable. This emphasis on durability is not compatible with the constantly changing knowledge landscape of digitalization. Today, algorithms frequently update facts so that content is altered endlessly. While the digital fact has never been more accurate, it also has never been less durable.

However, a number of emerging approaches do not negate, but evaluate or even embrace the digitalization of knowledge. New concepts like big data, open access, and open science have given the term Digital Humanities a unique popularity, which traditions of critical thinking are currently struggling to navigate (Liu 2012). Unlike in our past, knowledge need not necessarily be an end in itself, an ideal behind which certain interests could hide very well. But this does not mean it needs to become a commodity either. If a world with more information fears the deterioration of knowledge, clearly the values of that world should be called into question.

References

Braidotti, R. (2013): The Posthuman, Cambridge, Polity Press.

Carr, N. (2008): ‘Is Google Making Us Stupid?’, in: The Atlantic Monthly, July 2008. Also available at: http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/

_____. (2010): The Shallows: How the Internet is Changing the Way We Think, Read and Remember, London: Atlantic Books.

DeStefano, D.; LeFevre, J.-A. (2007): ‘Cognitive Load in Hypertext Reading: A Review,’ in: Computers in Human Behavior, 23, pp. 1616-1641.

Düttmann, A. (2007): Philosophy of Exaggeration, London, New York: Continuum.

Foucault M. (1998): ‘The Order of Discourse,’ in: Robert Young (ed.): Untying The Text: A Post-Structuralist Reader. London, Routledge, pp. 51-78.

Hayles, N. K. (2012): How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press.

Haraway, D. (1991): ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century,’ in: Simians, Cyborgs and Women: The Reinvention of Nature, New York: Routledge, pp. 149-181.

Leroi-Gourhan, A. (1964): Gesture and Speech. Cambridge, MA: MIT Press (1993).

Liu, A. (2012): ‘Where is Cultural Criticism in the Digital Humanities?,’ in: M.K. Gold (ed.): Debates in the Digital Humanities. Minneapolis: University of Minnesota Press, pp. 490-509. Also available at: http://dhdebates.gc.cuny.edu/debates/text/20

Oxford Dictionaries (2013): ‘Knowledge’ available at: http://www.oxforddictionaries.com/definition/english/knowledge?q=knowledge

Plato (2005): Phaedrus, London, Penguin Classics.

Ryle, G. (1949): The Concept of Mind, Oxford: Routledge (2009).

Simmel, G. (1903): ‘The Metropolis and Mental Life,’ in: G. Bridge and S. Watson (eds.): The Blackwell City Reader, Oxford and Malden, MA: Wiley-Blackwell (2002), pp. 11-19.

Toffler, A. (1970): Future Shock, New York: Bantam (1990).

Thrift, N. (2004), ‘Remembering the Technological Unconscious by Foregrounding Knowledges of Position,’ in: Environment and Planning D: Society and Space, 22(1), pp. 175-190.

Weinberger, D. (2008): Everything Is Miscellaneous: The Power of the New Digital Disorder. New York: Henry Holt.

l

'Everyone uses lists,' Francis Spufford (1989, p. 2) tells us. Lists are all pervasive; they are part-and-parcel of how we experience and make sense of the world. According to Umberto Eco (2009), the whole history of creative production can be seen as one that is characterised by an ‘infinity of lists’ comprising, to name a few, visual lists (sixteenth century religious paintings, Dutch still life paintings), pragmatic or utilitarian lists (shopping lists, library catalogues, assets in a will), poetic or literary lists, lists of places, lists of things (like the great list of ships in the Iliad), and so on, ad infinitum.

In accordance with such variation in form comes great variation in purpose, with lists used to ‘enumerate, account, remind, memorialize, order,’ and so on (Belknap 2004, p. 6). List making, Geoffrey Bowker and Susan Leigh Star (2000, p. 137) point out, ‘has frequently been seen as one of the foundational activities of advanced human society’: to cite three examples, list making is argued to be crucial to our understanding of orality and the development of literacy (Goody 1977, pp. 74-111), and to the connection between these and later forms and techniques of information management (Hobart and Schiffman 1998), as well as to our appreciation of the functioning and value of narrativity (White 1981). In this way, Robert Belknap (2004, p. 8) perhaps has a point in proposing that ‘the list form is the predominant mode of organizing data relevant to human functioning in the world’.

In this sense, lists feature as a functional organisational tool in a digital age brimming with the volume, variety and velocity of big data sets and the endless production of bits of information. With those vast historical precedents in mind, we can now also think about listing as the algorithmically inflected method for making visible the parts within the whole that renders information useful. For instance, web search delivers a manageable and algorithmically organised, if partial and individualised, list of items from a massive database. Lists form the usable interface, a method and tool for managing information within digitised and computational environments.

Definition

Defined simply, a list is a block of information organised formally and composed of a set of members (Belknap, 2004, p. 15). What is significant about a list is that it is ‘simultaneously the sum of its parts and the individual parts themselves’ (p. 15). It relies on discontinuity in that it requires a set of elements, but can be understood as a whole – as in a shopping list. That is to say, like links in a chain, ‘the list joins and separates at the same time’ (p. 15). In addition to these features, Jack Goody (1977) also suggests that, across their various manifestations, lists have a number of basic characteristics or conventions concerning how they are constructed and read. For instance, in contrast to a narrative, lists can be read in different directions; however, as distinct from a database, a list ‘has a clear-cut beginning and a precise end, that is, a boundary, an edge, like a piece of cloth’ (Goody, 1977, p. 81). This distinction is significant in terms of what we might refer to, after Lev Manovich (2001, p. 37), as the era of the database.

A database shares with a list a functionality or the effect of structuring experience, objects and thus the world in particular ways. However, a database can be defined differently as ‘a structured collection of data’ with no necessary beginning or end, with each item carrying equal value and being equally accessible for functional use. Manovich stresses the incomplete, modular, scalable form of a database (2001, p. 218); whereas, a list ‘encourages the ordering of the items, by number, by initial sound, by category, etc. And the existence of boundaries, external and internal, brings greater visibility to categories, at the same time as making them more abstract’ (Goody 1977, p. 81). As Goody’s definition suggests, the simple idea of a list takes on great complexity in practice. This complexity becomes more apparent and more to the point in the context of digital production and processing of the infinite bits of information within the contemporary big data society.  

Critical Concerns

Just as boundaries are a significant attribute (Goody 1977, p. 80) of the list and how each is compiled, so, too, are semantic boundary disputes for how we conceive of the list vis-à-vis other forms of enumeration. Gass (1985, p. 117) attempted a general taxonomy of lists consisting of three basic families: first, ‘lists built without a formal organizing principle that take shape as elements emerge’; second, ‘everything arranged by a particular principle’; and, third, ‘lists that are built through an externally imposed system’. But the forms that might be captured within these ‘families’ are far from settled. The frequent opacity of a list’s organising principle signals the larger challenges of using lists and listing as a critical device. In addition, the selective nature of lists introduces an often problematic process of inclusion and exclusion, or boundary setting and gatekeeping, that remains inherent and opaque. These issues will continue to pose pressing concerns within humanities and digital humanities scholarship.

If one were to compose a list of lists, Belknap (2004, p. 2) suggests, it ‘would include the catalogue, the inventory, the itinerary, and the lexicon’. This is, however, a problematic typology insofar as each item can be seen to hold subtle differences in form and purpose, as Belknap is quick to point out: ‘The catalogue is more comprehensive, conveys more information, and is more amenable to digression than the list. In the inventory, words representing names or things are collected by a conceptual principle.’ (pp. 2-3). In his discussion of lists in literature, Spufford (1989) extends the first of these distinctions by drawing a qualitative distinction between the list and catalogue, where a list ‘seems inevitably to exclude’ all that renders writing interesting, while the catalogue takes ‘a step closer to the complex intentions and complex effects of literature proper’ (pp. 1, 3). Elsewhere, the close associations between the list and the classification system, and the difficulty of differentiating them, have also been noted (Bowker and Star 2000, pp. 137-61).

One (deceptively simple) distinction is that which Belknap (2004, pp. 3-5) draws between literary lists, or we could say creative lists, on the one hand, and pragmatic or utilitarian lists on the other hand. According to Belknap, literary lists are ‘complex in precisely the way a pragmatic list must not be’ (p. 5). Belknap, like Spufford before him, takes up and explores these ‘complexities’ of literary lists in great detail through the work of Emerson, Whitman, Melville and Thoreau. Central to this is an attention to the different functions of the individual units and the whole. Taking an even more expansive view, Umberto Eco (2009) moves beyond the literary by emphasising the aesthetic character of visual lists in addition to their literary forms and to their function or utility.

For a digital humanities concerned with the tools and techniques associated with expansive digital data sets and computational methods, lists and listing thus have a long pre-digital history. While we might explore with Belknap, Eco and others the myriad uses and forms of listing in literature and art, this prehistory also contains examples of the kinds of list-oriented mathematical calculations that are echoed in some of the contemporary methods of digital humanities. As Eco (2009, p. 366) notes, in the seventeenth century Pierre Gulian (1622 – Problema arithmaticum de rerum combinationibus), Marin Mersenne (1636 – Harmonie universelle) and Gottfried Wilhelm von Leibnitz (1693 – Horizon de la doctrine humane) experimented with mathematical calculations involving combinations of words, letters, musical notation and other forms – in other words, large data sets that might produce an ‘infinite list of elements’ in order to obtain philosophical and mathematical truths. These experimentations vastly predate contemporary work with large corpuses and big data sets, which in their own way seek to make sense of the proliferation of cultural production and texts beyond the canon or the case study. For example, there is resonance with methods devised in linguistics to investigate word frequency and keyword extraction within corpuses such as the British National Corpus or the International Corpus of English (e.g. Archer 2009).
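To indicate how simple the computational core of such list-producing methods can be, here is a minimal sketch of word-frequency counting over a toy corpus – the elementary operation underlying keyword extraction; the function name, tokenising regex and sample sentences are illustrative assumptions, not the apparatus of any particular corpus project.

```python
from collections import Counter
import re

def word_frequencies(texts, top_n=10):
    """Return the top_n most frequent word forms across a small corpus."""
    counts = Counter()
    for text in texts:
        # crude tokenisation: lowercase runs of letters and apostrophes
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(top_n)

corpus = [
    "The list joins and separates at the same time.",
    "A list has a clear-cut beginning and a precise end.",
]
print(word_frequencies(corpus, top_n=5))
# [('a', 3), ('the', 2), ('list', 2), ('and', 2), ('joins', 1)]
```

The output is itself a list – an ordered, bounded extraction from an unordered mass of text, which is exactly the move the surrounding discussion describes.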

It is often noted that in the digital and internet era, lists and listing applications have proliferated. But in the face of large corpuses and data sets, a more pressing question for digital humanities becomes how to manage the sheer distance between the macro and the micro in any body of information or data, how to move between the item and the corpus or dynamic totality of the database as a whole. Algorithmic extraction then becomes the tool that determines what list of features from the corpus or the data stream to make visible (Hochman 2014). Scale, and even time, then becomes the focus of the methodological shifts that are a key preoccupation of digital humanities, enfolding technology and technique. Lists feature as a method ‘in the pursuit of informational concision and compactness’, an ‘art of data compression and of performance’ that takes the form of enhanced, even computational modes of curation (Burdick, et al. 2012, p. 32).

Likewise, lists become a crucial part of the standard interface environment, and a programmable element and tool for managing massive and dynamic databases (Monterio 2014). The infinite list acts as a standard piece of code allowing a large or infinite database to become accessible on an iterative basis without having to account for the whole at any point in the operation of a program. This both removes the need to process the whole in order to display the elements, and allows a user to navigate expansive data sets through an expandable list.  
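A minimal sketch of this iterative pattern, under the assumption of a hypothetical fetch_page accessor standing in for whatever interface a real database or API exposes, might look as follows:

```python
from typing import Callable, Iterator, Sequence

def infinite_list(fetch_page: Callable[[int, int], Sequence],
                  page_size: int = 50) -> Iterator:
    """Lazily yield items from an arbitrarily large backing store.

    fetch_page(offset, limit) is the assumed accessor; no call ever
    touches the whole collection, only the next page of it.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:  # an empty page marks the list's provisional edge
            return
        yield from page
        offset += len(page)

# Usage: a fake backing store of a million items, consumed item by item.
data = range(1_000_000)
items = infinite_list(lambda off, lim: data[off:off + lim], page_size=3)
print(next(items), next(items), next(items))  # 0 1 2 -- no full scan occurs
```

The design point is that the generator holds only an offset, never the collection: the 'edge' of the list is discovered, provisionally, only when a page comes back empty.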

If there is a central line running through these accounts of lists in history and literature, and as an operative tool of contemporary digital humanities, it would highlight the dual role of listing as method for navigating and making visible the parts and the whole in any data environment. While definitions and taxonomies may vary, the usefulness of thinking through and putting lists to work will continue to unfold as a factor of digital humanities.  

References

Archer, Dawn (ed.) (2009) What’s in a Word-list? Investigating Word Frequency and Keyword Extraction, Farnham, Surrey: Ashgate.

Belknap, Robert E. (2004) The List: The Uses and Pleasures of Cataloguing, New Haven: Yale University Press.

Bowker, Geoffrey C., and Star, Susan Leigh (2000) Sorting Things Out: Classification and Its Consequences, Cambridge, MA: MIT Press.

Burdick, Anne, Drucker, Johanna, Lunenfeld, Peter, Presner, Todd, and Schnapp, Jeffrey (2012) Digital_Humanities, Cambridge, MA: MIT Press.

Eco, Umberto (2009) The Infinity of Lists, trans. Alastair McEwen, London: MacLehose Press.

Gass, William (1985) ‘And’, in Allen Wier and Don Hendrie (eds.), Voicelust: Eight Contemporary Fiction Writers on Style, Lincoln: University of Nebraska Press, pp. 101-125.

Goody, Jack (1977) ‘What’s in a List?’ in The Domestication of the Savage Mind. Cambridge: Cambridge University Press, pp. 74-111.

Hobart, Michael E., and Schiffman, Zachary S. (1998) Information Ages: Literacy, Numeracy, and the Computer Revolution, Baltimore: The Johns Hopkins University Press.

Hochman, Nadav (2014) ‘The Social Media Image’, Big Data and Society, 1(2), pp. 1-15.

Manovich, Lev. (2001) The Language of New Media, Cambridge, MA: The MIT Press.

Monterio, Stephen (2014) ‘Mapping Moving-Image Culture: Topographical Interface and YouTube’, Fibreculture, 169: twentythree.fibreculturejournal.org/fcj-169-mapping-moving-image-culture-topographical-interface-and-youtube/.

Spufford, Francis (1989) ‘Introduction’, in Francis Spufford (ed.), The Chatto Book of Cabbages and Kings: Lists in Literature, London: Chatto & Windus, pp. 1-23.

White, Hayden (1981) ‘The Value of Narrativity in the Representation of Reality’, in W. J. T. Mitchell (ed.), On Narrative, Chicago: The University of Chicago Press, 1981, pp. 1-23.

Logistical media determine our situation. Consisting of locational devices such as Voice Picking technology, GPS tracking, RFID (Radio Frequency Identification) tags and biometric monitoring technologies, logistical media calibrate labour and life, objects and atmospheres. The spatial and temporal properties of these information and communication technologies are a determinate force in the production of subjectivity and economy. Their primary function is to extract value by optimising the efficiency of living labour and supply chain operations. Logistical media – as technologies, infrastructure and software – coordinate, capture and control the movement of people, finance and things.

Anticipated in the work on 'logistical modernities' (Bratton 2006) by urban theorist and military historian Paul Virilio (2006), and elaborated to some extent in the study on gameplay and war simulations by media philosopher Patrick Crogan (2011), the term 'logistical media' is named as such by communication historian and social theorist John Durham Peters (2012, 2013). For Peters, the concept of logistical media "stresses the infrastructural role of media" (2012, p. 43). Infrastructure makes worlds. Logistics governs them. The combinatory force of logistical media has a substantive effect on the composition of labour and production of subjectivity. The flexibility of global supply chains and just-in-time modes of production shape who gets employed, where they work and what sort of work they do (see Cowen 2014). Logistical systems, in other words, govern labour. Logistical labour emerges at the interface between infrastructure, software protocols and design. Labour time is real-time. The formation of logistical media theory, therefore, requires an analysis of how labour is organised and governed through software interfaces and media technologies that manage what anthropologist Anna Tsing (2009) identifies as 'supply chain capitalism'. In addition to storage, transmission and processing systems, the study of logistical media attends to the aesthetic qualities peculiar to the banality of spreadsheets, ERP (enterprise resource planning) systems and software applications that have arisen from particular histories in military theatres, cybernetics, infrastructural design, transport and communications. Logistical media theory is interested in how logistical infrastructure is made soft through ERP systems designed to govern the global movement of people, finance and things. Questions of securitisation, control, coordination, algorithmic architectures, protocols and parameters are among those relevant to a theory of logistical media.

Theoretical Debates and Critique

With its attention to flexibility, contingency, control and coordination, logistical media critique opens the relation between economies of data and the remodelling of labour and life. In terms of disciplinary orientation, however, logistical media theory does not yet exist. It is a theory whose status has yet to coalesce into a sustained analytical and methodological body of research and knowledge. For the purpose of sketching some contours of influence, a disciplinary set of relations for logistical media theory can be drawn across the fields of network cultures, software studies, critical organization studies, Canadian communications research and German media theory in addition to anthropological and historical research on infrastructure.

Limits of Locative Media

At the conceptual and empirical levels, research on locative media has next to nothing to say about logistical media and supply chain operations, even though the spatial-temporal operations of the latter are frequently overseen by locative media – GPS, RFID, voice picking technology, ERP systems, social media software, etc. The deployment of these technologies across logistical supply chains produces what Anja Kanngieser (2013, p. 598) calls 'microtechnologies of surveillance' designed "to track and trace workers by constantly tying them to territorial and temporal location[s]". From the embedding of RFID microchips under the skin of employees to the automated instructions on picking lists for workers in warehouses and distribution centres, the use of locational devices within logistical industries results in the extraction and relay of data that holds high commercial value.

While geodata may be used in positive ways, as in managing delivery fleets for fuel efficiency and 'ecorouting', locative media also generate data that affects how workers are monitored in workplace settings. Along with the privacy issues that arise with the tracking of consignments in transport industries via GPS and cell phones, which make visible in real-time the location of workers, there is also concern among unions over how the software parameters of Voice Picking technologies and the generation of data by RFID can result in the profiling and categorisation of workers along lines of race and class, with possibly deleterious effects on employment conditions and prospects in industries frequently characterized by insecure modes of work. Logistical media are also very different from location-based media characterised by the capacity of users to 'control and personalize' the borders between public and private spaces (de Souza e Silva and Frith 2012, p. 266). The agency afforded to users of locative media is much less clear in the case of logistical media, which as an instrumentalisation of location-aware mobile technologies are designed to exert control over the mobility of labour, data and commodities as they traverse urban, rural, atmospheric and oceanic spaces and traffic through the circuits of databases, mobile devices and algorithmic architectures. A further distinction between locative and logistical media is marked by the tendency of users of locative media to search urban spaces for services related to consumption, while logistical media provide the very conditions for urban settings to function in such a way.

Data Analytics

The analysis of data is one key line of critique for logistical media theory. Forms of pattern recognition that go beyond basic data hold relevance for how the emergent fields of digital humanities and software studies analyse the massive volume of big data generated by digital transactions and user-consumer practices online. Big data analysis of habits of consumption is interesting for commercial entities, but not particularly exciting for social and political analysis of network ecologies. How to ascertain a relation between data, materiality and subjectivity is a problem little addressed by either digital humanities or software studies. Some notable exceptions include Matthew Kirschenbaum’s (2008) research on 'forensic materiality', Anne Balsamo’s (2011) pedagogical experiments and design research on 'technological imagination', N. Katherine Hayles’ (2012) study of 'technogenesis' and Jussi Parikka’s (2013) 'materialities of technical media culture'. Yet despite the materiality of much logistical media, a theory of logistical media can seem elusive. A key reason for this has to do with the proprietary control of high-end software systems, which makes a study of logistical software difficult to undertake. Even if one had the resources at hand to analyse code, the algorithmic architectures and troves of data remain beyond reach for media theory.

Noise

What would the critical practice of digital humanities research consist of in the study of big data? How might such practices be designed on transnational scales involving networks of collaborative constitution? What are some of the particular problems surrounding the politics of depletion that come to bear both in the method of digital humanities research and the data sets under scrutiny? Where is the dirt that unravels the pretence of smooth-world systems so common within industry, IT and state discourses around global economies and their supply chains? And can disruption be understood as a political tension and form of conflictual constitution?

Within cybernetics 'noise' is a force of ambivalence, interference and disruption, refusing easy incorporation within prevailing regimes of measure. Constituent forms of subjectivity and the ontology of things often subsist as noise. Undetected, without identity and seemingly beyond control, noise is the 'difference which makes a difference' (Bateson 1972). Digital humanities research would do well to diagram the relations of force and transformation operative within ecologies of noise populated by unruly subjects, persistent objects and algorithmic cultures. A form of critique is required that is not simply an extension of classical political economy into the realm of digital labour, as exemplified by the work of Christian Fuchs (2008, 2014). Logistical media theory is one possible alternative that brings method and critique together in ways sufficient to the task of examining how algorithmic capitalism shapes the experience and condition of labour.

Method

A study of logistical media begins to address issues of method that digital humanities, software studies and sociological digital methods struggle with in quite distinct ways. The program of 'cultural analytics', headed by Lev Manovich (2011a) and his Software Studies Initiative, summarises its project in ways that essentially transpose already existing techniques rather than invent new methods per se:

Today sciences, business, governments and other agencies rely on computer-based analysis and visualization of large data sets and data flows. They employ statistical data analysis, data mining, information visualization, scientific visualization, visual analytics, and simulation. We propose to begin systematically applying these techniques to contemporary cultural data. (Manovich 2011b, 2013)

By contrast, a study of software within the global logistics industries prompts the question of method with regard to how to research the relation between software and the management of labour, the role of logistics infrastructure and the reconfiguration of urban, rural and geopolitical spaces, and the production of new regimes of knowledge within an organisational paradigm. German-based software developer SAP is one of the leading firms developing ERP software, which can be found across a wide range of industries, including warehousing, education, transportation, healthcare, financial services and global logistics. Software systems operative within global logistics industries such as SAP or Oracle generate protocols and standards that shape social, economic and cross-institutional relations within and beyond the global logistics industries. Logistics organizes labour as an abstraction within parameters governed by software. How such governing forces and material conditions are captured and made intelligible through the use of digitally modified data is, in part, the challenge of method.

Abstraction

New research questions in the fields of digital humanities, software studies and sociological digital research are often posed in terms of how to tackle larger scale datasets rather than address a material world underscored by complex problems. The work on digital methods by sociologist Noortje Marres (2012) is emblematic of such an approach. Like Richard Rogers (2013), Marres’ advocacy of digital methods has an interest in how 'natively digital' research tools "take advantage of the analytic and empirical capacities that are 'embedded in online media'" (p. 151). In focusing on the empirics of data as it is generated through the algorithmic operations of search tools such as Issue Crawler, the analysis of political issues, discourses and actors becomes displaced from the material conditions from which they arise (see Kanngieser, Neilson and Rossiter 2014). As it turns out, 'method as intervention' for Marres is a fairly exclusive online undertaking, however much it might involve a 'redistribution of research' and 'transfer' of knowledge among diverse actors (see also Rogers 2013). Search data and its visualisation become the universe of critique. In the practice of method, the subject of labour is divided between humans and technology, while the interface between politics, economy, subjects and objects – and the material and technical conditions from which they emerge – goes unaddressed.

In social science and humanities disciplines undertaking transnational and transcultural research using digital methods for collecting and sampling large scale datasets, there is a tendency for analyses to formulate a universal system of questions to ensure maximum consistency in generating usable data. In adopting such methodological and analytical approaches, the disparity between the particularities of the object of study and the abstraction of knowledge becomes even further amplified than might be the case in, for example, more traditional methods of practice in anthropological fieldwork. Alternatively, abstraction itself becomes the object of study, which is the direction taken in Franco Moretti’s quantitative method of ‘distant reading’ of literary history.

Asymmetrical Analysis

An additional and rarely addressed problem can arise with projects international in scope that place a priority on modelling, visualising, aggregating and analysing large data sets. The underlying method within many ‘global’ approaches to comparative research in media, social and cultural research will often seek to integrate and make uniform data which is non-assimilable due to protocological conflicts, parametric irregularities, qualitative differences and the like. In doing so, such approaches reproduce some of the central assumptions of area studies. Namely, that the study of geocultural difference is predicated on equivalent systems of measure that demonstrate difference in terms of self-contained areas or territories and civilizational continuities often conforming to the borders of the nation-state. Yet it is a mistake to suppose that cultural variation can be distinguished in terms of national cultures, at least in any exclusive sense. In the case of transnational research on logistics industries (and, more broadly, any research project taking a transcontinental perspective), digital methods of comparative research need to be alert to the asymmetrical composition of datasets on transport and communication industries and labour performance, which upsets any desire for equivalent units consistent across time and space that might provide the basis for comparison.

New Approaches

The challenge of digital methods and critique is not about integrating historical or archival data into ever-larger sets but involves working across variable, uneven and often incomplete datasets. Here the logistical industries' fantasy of creating interoperability through protocols of Electronic Data Interchange (EDI) and ERP software platforms hits its limits. Designed to track the movement of people and things, EDI and ERP architectures are intended to function as real-time registrations of labour productivity and the efficiency of distribution systems. Yet these technologies of optimisation frequently rub up against any number of disruptions in the form of labour struggles, infrastructural damage, software glitches, supply chain problems, and so forth. This discrepancy between the calculus of the plan and the world as it happens suggests that the most interesting sites to study are those where interoperability breaks down and methods of organisation external to logistical software routines are instituted in an attempt to smooth out the transfer of data and material goods (Neilson 2014).

Technologies of logistical governance external to software architectures may include border regimes such as Special Economic Zones (SEZs), territorial concessions and trade corridors. They may also manifest as juridical power in the form of labour laws, or as extra-state forms of governance such as manufacturing and industrial design standards, communication protocols and the politics of affect as it modulates the diagram of relations specific to subjectivity (see Easterling 2014). Borders, in short, proliferate, multiply and at times overlap (Mezzadra and Neilson 2013). Clearly, to study such an expanse of governmental techniques is beyond the scope of this entry. But they are important to note by way of signalling that the digital is not as ubiquitous as often claimed or assumed. And this has implications for the design of methods within digital humanities, chief among them the ‘invention of new knowledge practices and methods that intervene in the world’ (Neilson 2014, p. 79).

Futures: Imaginary Media, Parametric Politics

So what is supply chain software and what does it do? Why don’t we have it installed on our PCs and laptops? Why are we so utterly unaware of it? The digital humanities and software studies may have something to contribute by way of response to these sorts of questions. But both would need to radically shift their focus away from a general mission to digitize the humanities archive and conduct exotic sorties into the fringes of network cultures. These are important enough activities, but they tell us little about how capital and power work.

First of all we need to enter the imaginary world of SAP, Oracle and their kin. We need to pose critical questions based not on our disciplinary predilections and intellectual whimsies, but rather on the object of inquiry – computational power, interface aesthetics (what many these days call ‘usability’, which is so dreary), algorithmic architectures and the politics of parameters. What is required is a truly transdisciplinary collective investigation into the increasingly mysterious centres of power in the age of big data. This would involve work between media theorists, organizational studies scholars, computer scientists, programmers and designers to open up the black box of SAP and the products of similar software developers, identifying how their algorithmic architectures are constructed, what their business models are and how they use data extracted from the back end of mostly unwitting clients. What is the vision of SAP beyond the PR machine? According to one SAP consultant, it is 1 billion SAP users by the year 2020. What do Hasso Plattner & Co. see as the limit horizon for extracting value from the world? We need to know that, because whether we are aware of it or not, our lives are becoming increasingly subsumed by logistical nightmares.

References

Balsamo, A. (2011): Designing Culture: The Technological Imagination at Work, Durham: Duke University Press.

Bateson, G. (1972): Steps to an Ecology of Mind, New York: Ballantine Books.

Bratton, B. H. (2006): 'Logistics of Habitable Circulation', in: Paul Virilio, Speed and Politics, trans. Marc Polizzotti, Los Angeles: Semiotext(e), pp. 7-25.

Cowen, D. (2014): The Deadly Life of Logistics: Mapping Violence in Global Trade, Minneapolis: University of Minnesota Press.

Crogan, P. (2011): Gameplay Mode: War, Simulation and Technoculture, Minneapolis: University of Minnesota Press.

de Souza e Silva, Adriana and Frith, Jordan (2012): 'Location-aware Technologies: Control and Privacy in Hybrid Spaces', in: J. Packer and S. B. C. Wiley (eds.): Communication Matters: Materialist Approaches to Media, Mobility and Networks, New York: Routledge, pp. 265-275.

Easterling, K. (2014): Extrastatecraft: The Power of Infrastructure Space, London: Verso.

Fuchs, C. (2008): Internet and Society: Social Theory in the Information Age, New York: Routledge.

Fuchs, C. (2014): Digital Labour and Karl Marx, New York: Routledge.

Hayles, N. K. (2012): How We Think: Digital Media and Contemporary Technogenesis, Chicago: University of Chicago Press.

Kanngieser, A. (2013): 'Tracking and Tracing: Geographies of Logistical Governance and Labouring Bodies', in: Environment and Planning D: Society and Space 31 (4), pp. 594-610.

Kanngieser, A., Neilson, B. and Rossiter, N. (2014): 'What is a Research Platform? Mapping Methods, Mobilities and Subjectivities', in: Media, Culture & Society 36 (3), pp. 302-318.

Kirschenbaum, M. G. (2008): Mechanisms: New Media and the Forensic Imagination, Cambridge, Mass.: MIT Press.

Manovich, L. (2011a): 'Trending: The Promises and the Challenges of Big Social Data', available at: www.manovich.net/DOCS/Manovich_trending_paper.pdf [accessed 2 November 2011].

Manovich, L. (2011b): 'Cultural Analytics: Visualizing Cultural Patterns in the Era of "More Media"', available at: manovich.net [accessed 2 November 2011].

Manovich, L. (2013): Software Takes Command, New York: Bloomsbury Academic.

Marres, N. (2012): 'The Redistribution of Methods: On Intervention in Digital Social Research Broadly Conceived', in: The Sociological Review 60, pp. 139-165.

Mezzadra, S. and Neilson, B. (2013): Border as Method, or, the Multiplication of Labor, Durham: Duke University Press.

Neilson, B. (2014): 'Beyond Kulturkritik: Along the Supply Chain of Contemporary Capitalism', in: Culture Unbound: Journal of Current Cultural Research 6, pp. 77-93, available at: www.cultureunbound.ep.liu.se [accessed 1 March 2014].

Parikka, J. (2013): 'Dust and Exhaustion: The Labor of Media Materialism', in: C-Theory (October), available at: www.ctheory.net/articles.aspx [accessed 14 November 2014].

Peters, J. D. with Packer, J. (2012): 'Becoming Mollusk: A Conversation with John Durham Peters about Media, Materiality and Matters of History', in: J. Packer and S. B. C. Wiley (eds.): Communication Matters: Materialist Approaches to Media, Mobility and Networks, New York: Routledge, pp. 35-50.

Peters, J. D. (2013): 'Calendar, Clock, Tower', in: J. Stolow (ed.): Deus in Machina: Religion, Technology and the Things in Between, New York: Fordham University Press, pp. 25-42.

Rogers, R. (2013): Digital Methods, Cambridge, Mass.: MIT Press.

Tsing, A. (2009): 'Supply Chains and the Human Condition', in: Rethinking Marxism 21 (2), pp. 148-176.

Virilio, P. (2006): Speed and Politics, trans. Marc Polizzotti, Los Angeles: Semiotext(e).

m

Metadata in its most general sense means 'data about data'. Routine examples range from the bibliographic description of a book to the formatted and structural descriptions of a Flickr image or a Facebook friend. In our media-intensive daily life, the social, economic and political implications of metadata are greatly amplified.

Metadata and Digitisation

The first large-scale metadata system is believed to have emerged in Sumerian culture towards the end of the fourth millennium BCE. The Sumerians used pictograms inscribed on clay tablets to index their stocks, setting up an 'accounting system' that recorded debts as archives for future reference (Goody 1977, p. 75). All such metadata systems immediately involve classification, as Émile Durkheim and Marcel Mauss documented in their co-authored work Primitive Classification, in which they gave priority to what they called the "first philosophy of nature" (1903/1964, p. 81). Indeed, Aristotelian understandings of ontology as 'being qua being' already point in the same direction; namely, how can we describe beings in a proper sense? The later emergence of tables, lists, catalogues, databases and references constructed increasingly concrete metadata systems that demanded ever more precision and universal application. Metadata systems henceforth determined both the content and the format of presentation of a knowledge system. Yet the role of metadata remains abstract and vague in this history, which is often discussed in relation to power and knowledge, as demonstrated, for instance, in the works of the French philosopher Michel Foucault (1970) or the contemporary sociologists Geoffrey C. Bowker and Susan Leigh Star (1999).

The proliferation of metadata, however, has only accelerated with digitisation. In fact, at the core of the digital is not binary code – as presumed by digital physicists such as Stephen Wolfram (2008) – rather, what qualitatively and quantitatively distinguishes an analogue from a digital system is the ability to produce, store and organise data. The Latin root of ‘data’ is datum, meaning ‘something given’. This etymology is not well recognized in our everyday use of the term, except in French, where the word donnée is still used. What is given here is actually sense data, for example, the colours I perceive, the pain I feel. This givenness changes its form after the emergence of computers; in the Online Etymology Dictionary, we read: "meaning 'transmittable and storable computer information' first recorded 1946. Data processing is from 1954." It is also from this moment on that we can no longer easily distinguish metadata from data. We still perceive things, but our perceptions are turned into materialised data immediately through different apparatuses and programs such as GPS, cameras, sensors, and other technical media.

One has to recognise the translation of 'something given' into material form, and how this materiality constitutes a new kind of 'givenness'. This follows a general tendency of technology, which consists in the materialisation of all sorts of relations by rendering the invisible in visible and measurable forms (Hui, 2014). For example, writing puts thoughts and perceptions on paper; pulleys, wheels and chains concretise imaginary movements in mechanical terms; the steam engine instantiates flows of energy in the relations between water, fuels, pipes and gears; one could give similar examples for transportation infrastructures, electricity or computer networks. Simondon calls these relations within and between technical objects transindividual relations,

[The] technical object taken according to its essence, that is to say technical objects as invented, thought and designated, assumed by a human subject, becomes the support and the symbol of this relation that we would like to call transindividual. The technical object can be read as carrier of defined information; if it is only used, employed, and by consequence enslaved, it cannot carry any information, no more than a book which is used as a wedge or a pedestal. (Simondon 1989, author's translation, p. 147)

The significance of the new technique of data processing we now call the digital (as distinguished from the general notion of the technical) is not only that we can process large amounts of data with computers, but also that such systems can establish new connections and form data networks that extend from platform to platform, from database to database. The digital remains invisible without data, or traces of data.

Artificial Intelligence and the Semantic Web

Since the 1970s, computer scientists, especially those working in the domain of artificial intelligence, have attempted to construct automated knowledge systems and different technical schemes for the representation of knowledge. Among them, the best known is the CYC project, which is premised on the belief that one can construct a representation system of common-sense knowledge that users can search and learn from. In this phase of development, metadata was disguised under the name 'ontologies' (as opposed to Ontology, following Heidegger's distinction between the ontic and the Ontological) (Gruber, 1993). The passion for such universal representation systems, however, declined somewhat after the approach was challenged by philosophers and computer scientists such as Hubert Dreyfus and Terry Winograd (1986), who proposed instead to consider alternative approaches such as embodiment and hermeneutics from the perspective of phenomenology. Nevertheless, industries continued to use metadata schemes during this period in order to enhance the interoperability of machines, albeit under a humbler name: mark-up languages. These mark-up languages display a clear technical lineage of industrial standardisation, from SGML to HTML, to XML and XHTML, and on to web ontologies (Hui 2012). Below is a simple example of a metadata scheme for describing a chair; in technical applications the format is much more complicated, with different details needed according to the requirements of different systems,

<chair>
  <shape></shape>
  <material></material>
  <height></height>
  <width></width>
  <colour></colour>
  <producer></producer>
  <date></date>
  ...
</chair>

Crucially, an enthusiasm for ontology and metadata schemes returned during the early 2000s, when Tim Berners-Lee, the inventor of the web and director of the standards organisation World Wide Web Consortium (W3C), proposed the idea of the semantic web. Berners-Lee explained his approach in this way,

I have a dream for the Web… and it has two parts. In the first part, the Web becomes a much more powerful means for collaboration between people. I have always imagined the information space as something to which everyone has immediate and intuitive access, and not just to browse, but to create. [...] In the second part of the dream, collaborations extend to computers. Machines become capable of analyzing all the data on the Web - the content, links, and transactions between people and computers. (Berners-Lee and Fischetti 2000, p. 157)

In this vision, formalised and structural metadata is able to form a giant knowledge system, and by giving 'semantic meanings' (the relation between syntax and semantics is an important debate in this context, but is beyond the scope of this keyword entry) to data/metadata, machines will be able to operate on some basic vocabularies shared by human beings. In Berners-Lee's vision, there is a strong sense of co-creation between human and machine through the sharing of metadata schemes and the production of metadata. Under the semantic web movement, there are many other industrial programs that try to revive the AI dream while taking another path through social computing. Among them is the linked data project, which attempts to map different metadata schemes and link different knowledges on the web together to form an inferable logical system, as in DBpedia. This linkability is also used in the management of social relations on Facebook, Twitter and other social media platforms. Such new possibilities pose many challenges to existing metadata systems: for example, the emergence of early web ontologies such as Dublin Core has greatly disrupted previous cataloguing formats in the information sciences, most notably in the gradual replacement of the human-unreadable MARC (MAchine-Readable Cataloging) standards. Electronic publications, meanwhile, rely more and more on metadata schemes that allow for easy format conversion and navigation.
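
To make this concrete, here is a minimal sketch of what such linked 'semantic' metadata can look like: a Dublin Core description of a book, serialised in RDF/XML. The W3C and Dublin Core namespace URIs are real; the example.org identifier and the particular field values are illustrative only,

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- the resource being described is named by a URI, not by a local record number -->
  <rdf:Description rdf:about="http://example.org/books/nineteen-eighty-four">
    <dc:title>Nineteen Eighty-Four</dc:title>
    <dc:creator>George Orwell</dc:creator>
    <dc:date>1949</dc:date>
    <dc:language>en</dc:language>
  </rdf:Description>
</rdf:RDF>

Because the described resource is itself named by a URI, any other database that refers to the same URI can link to this description, which is what allows projects such as DBpedia to knit separate metadata schemes into one inferable system.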

Debates

The proliferating industrial use of metadata also introduces many social controversies; the following passages summarise four of the key current debates.

  • The first issue concerns privacy and control, since the harvesting of metadata has become increasingly intrusive into the personal lives of users, especially when used intensively for marketing purposes. Following the Snowden affair in 2013, the word metadata became more prevalent in the media than ever before; there is widespread concern that state surveillance organisations such as the NSA are actively harvesting our data and metadata through different media. There is equally public debate over the collection and storage of metadata by different service providers (telecommunications, social networks, search engines).
  • The second issue is the opposition between 'tagging' and web ontology, a position most notably expressed by Clay Shirky (2005), who argued that web ontologies are overrated and that social tagging (also termed 'folksonomy') can provide an annotation system more favourable to semantic accuracy and serendipity (a schematic contrast between the two is sketched after this list). This distinction between bottom-up (tagging) and top-down (ontologies) also leads to oppositions and tensions between liberty and authority. What is more significant in this debate is the articulation of an industrial democracy, since the bottom-up approach fits more squarely into industrial programs based on user-generated content, and hence control over the production and organisation of metadata becomes the core of this industrial democratic model.
  • Debates around metadata systems have also drawn responses from the philosophical community, as seen with the emergence of the Philosophy of the Web organisation closely affiliated to the W3C, which studies the philosophical meaning of web architecture and poses questions such as: what is a Uniform Resource Identifier (URI)? What are the meaning and sense of URIs? There are also responses to the question of digital objects and the internet of things, since after digitisation, as we have shown above, metadata has become a new type of industrial object, preceded by what Simondon called the technical object. These digital objects present a notion of objecthood and objectivity different from the one that can be traced in the history of philosophy, and exhibit new relations between humans and machines.
  • Finally, following interventions such as Shirky's, there have been increasing awareness of and attempts to understand the social, economic and political implications of metadata. Among them, we find Bernard Stiegler's proposition of the "economy of contribution" (Stiegler, 2010), which perceives the contributive mode of production of metadata as a mode of working rather than crowdsourcing. This is closely related to the industrial model discussed above, and at stake is the question of how new forms of data contribution can be developed to combat the alienation and industrialisation embedded in the digital socio-technical apparatus.
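
To illustrate the second debate above, the contrast between the two modes of annotation can be sketched schematically, in the same mark-up idiom as the chair example; the element names and the ontology URI here are hypothetical, chosen only for illustration,

<!-- Bottom-up (folksonomy): users attach whatever free-form tags they like -->
<photo id="4711">
  <tag>cat</tag>
  <tag>cute</tag>
  <tag>caturday</tag>
</photo>

<!-- Top-down (ontology): the same photo described against a fixed, shared vocabulary -->
<photo id="4711">
  <subject scheme="http://example.org/ontology#DomesticAnimal">Cat</subject>
</photo>

Shirky's point is that the first mode tolerates idiosyncratic, serendipitous terms ('caturday') that the second must either exclude or formalise away.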

Conclusion

The Italian theorist Matteo Pasquinelli (2014) describes the coming society as the society of metadata, in which metadata becomes the dominant dispositif of social control and production of subjectivity. Metadata also becomes the battlefield for alternative possibilities to the current industrial models, in which industrial democracy comes to its culminating point. The production, circulation and distribution of metadata take place beyond the factories that Marx described in Capital, and set up a new theatre of individuation which materialises psycho-social relations as manipulable digits. The future of metadata is not only about standardisation and knowledge representation, as has been the case from the mid-1960s until today, but also about the organisation of society through materialised data networks, which we may see as a revival of the cybernetic programs – e.g. Project Cybersyn in Chile (Medina 2011). The politics of metadata will have to return to the question of the transindividual relations of digital objects, in search of new models that combat the industrial model for 1) an open and democratic use of technologies; and 2) a technical individuation in contrast to the disindividuation embedded in the marketing plans of industrial programs (Hui and Halpin, 2013).

References

Berners-Lee, T. and M. Fischetti (2000): Weaving the Web: The Past, Present and Future of the World Wide Web by Its Inventor, London: Texere.

Bowker, G. C. and Star, S. L. (1999): Sorting Things Out: Classification and Its Consequences, Cambridge, MA: MIT Press.

Durkheim, E. and Mauss, M. (1903/1964): Primitive Classification, trans. Rodney Needham, Chicago: University of Chicago Press.

Goody, J. (1977): The Domestication of the Savage Mind, Cambridge: Cambridge University Press.

Gruber, T. (1993): Toward Principles for the Design of Ontologies Used for Knowledge Sharing, available at: pdf.aminer.org/000/912/413/toward_principles_for_the_design_of_ontologies_used_for_knowledge.pdf

Hui, Y. (2012): ‘What is a Digital Object?’, in: Metaphilosophy 43, pp. 380-395.

Hui, Y. and Halpin, H. (2013): 'Collective Individuation: The Future of the Social Web', in: G. Lovink (ed.), Unlike Us Reader, Amsterdam: Institute of Network Cultures.

Hui, Y. (2014): 'Form and Relation - Materialism on an Uncanny Stage', in: Intellectica 61, pp. 106-121.

Medina, E. (2011): Cybernetic Revolutionaries: Technology and Politics in Allende's Chile, Cambridge, MA: MIT Press.

Pasquinelli, M. (2014): 'Italian Operaismo and the Information Machine', in: Theory, Culture & Society 0 (0), pp. 1–20.

Shirky, C. (2005): ‘Ontology is Overrated: Categories, Links, and Tags’, Clay Shirky's Writings About the Internet, available at: www.shirky.com/writings/ontology_overrated.html

Simondon, G. (1989): Du mode d'existence des objets techniques, Paris: Aubier.

Stiegler, B. (2010): For a New Critique of Political Economy, trans. Daniel Ross, Cambridge: Polity Press.

Winograd, T. and Flores, F. (1986): Understanding Computers and Cognition: A New Foundation for Design, Norwood, NJ: Ablex.

Wolfram, S. (2008): ‘Stephen Wolfram’, in: Luciano Floridi (ed.), Philosophy of Computing and Information: 5 Questions, Copenhagen: Automatic Press / VIP, pp. 177-180.

o

Open is a term used across an array of digital and networked projects and artifacts, from government data initiatives and online teaching materials to software code and digital publishing. While the term has long been in use in the contexts of political theory (Popper, 1962a; 1962b), philosophy (Bergson, 1935) and general systems theory (Bertalanffy, 1960), contemporary uses of openness are often indebted to the open source software practices of the 1990s and the distinct but related Free Software Movement which preceded them. In this context, open as ‘open source’ was understood as a particular mode of software development (cf. Raymond, 2000) underpinned by ‘permissive’ intellectual property licenses. This legal framework ensured access to the human-readable ‘source code’ of a program, thereby allowing anyone to contribute to a software project or to start a new project based on the pre-existing code. Transformations that took place on the web from the early 2000s onwards – variously described as increased participation, collaboration, the flattening of hierarchies, sharing culture, meritocracy, user-generated content, produsage, crowdsourcing, or commons-based peer production – either drew inspiration from the practices of open source software or were retrospectively likened to it, and this has led to a proliferation of things described as open. Openness now simultaneously works across legal, technical, organizational, economic, and political registers. It is a core guiding principle of several of the most powerful players on the web (including Google and Facebook) and is increasingly taken up by governments to describe their modus operandi in a world transformed by digital networks. The Digital Humanities, which is here one domain among others, is no different. This from the Digital Humanities Manifesto (2008): “the digital is the realm of the open: open source, open resources, open doors. Anything that attempts to close this space should be recognized for what it is: the enemy.”

Definition

While many uses of openness in digital cultures can be traced back to open source software, academic accounts of openness point to much longer histories and legacies. Christopher Kelty’s (2008) influential anthropological history of ‘conceiving open systems’ begins in the early 1980s and covers the UNIX operating system and the TCP/IP suite (Internet protocols). In Open Standards and the Digital Age, Andrew Russell (2014) locates what he describes as the ‘ideological origins of openness’ in the rise of the telegraph and engineering standards beginning in the 1860s. When sociologist Richard Sennett considers openness in relation to cities – drawing heavily from the science of open systems – he invokes Darwinian evolution:

The most familiar and most magnificent open system familiar to all of us is Charles Darwin’s version of evolution, which combines elements of chance mutation, path dependence, and the environment conceived as a colloid within which natural selection does its work. (Sennett 2014, p. 6)

Perhaps the longest history of openness was conceived in Karl Popper’s (1962a; 1962b) two-volume work The Open Society, where Popper rereads the history of political philosophy – beginning with the Greeks – as a battle over open and closed. While Popper wrote of the historical organisation of society, his notion of openness was primarily epistemological. An open society was one where forms of knowledge (i.e. truth) could be challenged and thus were likely to change over time. Any philosophy that claimed to unveil the (unchangeable) ‘laws of history’ was by definition closed and, for Popper, provided the preconditions and rationality for totalitarian regimes of governance.

Given these diverse histories, defining the open is no easy feat. Not only is the term highly abstract and applied to so many different topics that it is hard to draw consistencies across them, but for some, the very appeal of openness is its capacity to change over time. There have, of course, been numerous contemporary attempts to develop comprehensive definitions within specific fields. Some of these include: the eight principles of Open Government Data (http://opengovdata.org/); the Open Knowledge Foundation’s 11 defining characteristics of an open work (http://opendefinition.org/od/); and the Open Source Initiative’s ‘Open Source Definition’, which lists 10 characteristics (http://opensource.org/osd). There have also been attempts to define contemporary openness more broadly by mapping the different senses or modalities in which it is applied. For example, the inventor of the World Wide Web, Tim Berners-Lee, uses the idea of openness ‘in at least 8 different ways’ to refer to: universality, open standards, open web platform, open government through open data, openness with personal data on the social net, open platform, open source, open access, open internet through net neutrality (http://blog.digital.telefonica.com/2013/10/09/tim-berners-lee-telefonica-open-agenda/). To my knowledge, the most comprehensive attempt to map the different fields and dimensions of openness has been produced by supporters of the ‘open everything’ movement and their ‘free and open everything’ mind map. The map divides openness into eight areas – aspects of openness, enablers of openness, infrastructures of openness, practices of openness, domains of openness, products of openness, open movements and open consciousness – each of which is filled with numerous examples (http://www.mindmeister.com/28717702/everything-open-and-free).

Considerations of the open’s ‘others’, its opposites, help to make clear the context of its use (software, political thought, publishing, and so forth), but do not get us much closer to a general definition. For Popper, the open’s other was rather straightforward: closed societies and closed forms of knowledge. This is complicated in the programmer cultures that Kelty describes, however, where the opposite of openness is no longer closed, but ‘proprietary’. When considered in terms of organisation, net commentator Nicholas Carr (2011) invokes bureaucracy as the opposite of openness. Parallels can, of course, be drawn between the stable and rigid organisational form of bureaucracy and a closed society – and here Plato’s Republic, with its strictly defined social roles and total subjugation to the philosopher’s wisdom, comes to mind. The sense of not being able to change ‘closed knowledge’ equally resonates with not having access to the ‘source code’ of proprietary software. But such parallels are just as easily problematized through inconsistencies and counterexamples. For Popper, the most recent iteration of the open society in the context of WW2 was (democratically governed) capitalism, with Soviet communism and fascism occupying the roles of closed societies. In this account, property-based societal organisation is ‘open’. But in terms of software, its commodification or existence in ‘proprietary form’ is largely what makes it closed. Likewise, bureaucracies are found across open and closed societies. And while this form of organization resonates with Plato’s Republic in its apparent stability or rigidity, bureaucracies are also institutions where individuals can rise in position based on merit, which is not true of the various classes of the Republic. Rather than try to pin down a precise definition of openness, it might be more productive to map its fields of use and the effects of its deployment across different domains.

Critical Concerns

Beyond Critique

While there are an increasing number of voices that have identified shortcomings of specific implementations of openness (in the UK, for example, there is a backlash over the recent rolling out of mandatory open access publishing for government-funded research), as it stands, there are still very few direct critical engagements with openness per se. As Kelty puts it in his discussion of open systems, “everyone claims to be open” and “everyone agrees that openness is the obvious thing to do” (2008, p. 143). While perhaps not everyone is claiming to be open in all situations (not yet anyway), nobody claims to be against openness in general. This has rather detrimental effects on any project that is generally understood as being open, and for its contributors in particular, who all too often adopt an air of superiority alongside what can only be called political naivety. In thinking that they are part of a progressive community or activity, they remain blind to the immanent political dynamics that necessarily constitute any form of organisation, whether described as open or closed. In short, once it is agreed that something is open, the possibility for criticism is significantly foreclosed.

Ambiguity

In many ways, the open has the characteristics of a floating signifier. Without doubt, the ambiguity of openness has been highly productive and helps explain how and why the term has attained such prominence. It also partly explains the tendency to build definitive lists of what is considered open in specific domains. Such list-making activities, however, are not only futile, but threaten the very goal of openness. This much was well understood by Popper, and explains why his major two-volume work on the topic actually had very little to say about the open society per se and was instead largely concerned with closed societies and closed modes of thought. To pronounce once and for all what is open is, of course, to close it. Many contemporary champions of openness have yet to grasp this impasse. Thus, the ambiguity of openness is not merely an unfortunate side effect of sloppy application or deliberate creative intervention. And while all language is slippery and on the move, openness is defined by these qualities. If co-optation and recuperation by hegemonic forces pose challenges for most progressive politics, the problem for openness is that of the boundary and its non-relation to it. If openness is open to the world, it is open to the world. In this respect, openness is unable to define its limits. Co-optation first requires articulation, a ‘stand’ or ‘standpoint’, a ‘territory’, and without such articulation (and therefore boundary creation) co-optation is as impossible as the original political intervention.

Relationship to Neoliberalism

It is now very clear that some iterations of openness are not only compatible with neoliberal governments, but are actively (if strategically) embraced by them. An increasing number of governments around the world now have open government policies or initiatives, which often combine a specific focus on open access and open data with a more general commitment to openness and transparency (Tkacz, 2012). It is also often remarked that large corporations such as Google and IBM are the largest writers of open source code (e.g. Asay, 2009). Openness is, therefore, not anathema to neoliberal institutions or the machinery of global capitalism. But the relationship between openness and neoliberalism is deeper. There is a shared history and a resemblance at the level of rationality (Foucault, 2008; Dardot and Laval, 2014). This deeper history is captured in the intellectual projects of Popper and Hayek – and their friendship. To greatly summarize a complicated dynamic, Hayek’s (1944) defense of the market form, which he argued was superior to centralised attempts to organise an economy, shared much in common with Popper’s writings on openness and his critique of totalitarianism. Openness and neoliberalism are both rationalities of organisation that favour decentralisation and the capacity for change, and this is derived from a more fundamental and radical skepticism toward knowledge.

Futures: Politics as Digital, But Not Binary

In the mid-2000s large segments of the web were recast in the image of open source. Everyone became a producer, prosumer, produser or collaborator – a participant in the network. Websites were now ‘platforms’ for users to generate their own content. Everything was open. And just as it seemed that the utopian optimism of this moment was exhausted, the reach of openness extended beyond network cultures and into all of the domains mentioned above. The terrain around openness has shifted significantly. Some years ago, Jamie King (2006) had already recognized that the most radical openness of all, let’s say absolute openness, is nothing more than the total embrace of the status quo. Isn’t this a fitting description of the latest version of openness as ‘open data’: total embrace of the status quo? If there is to be a politics to the digital humanities, it has to be about more than the choice between open and closed.

References

Asay, M. (2009): ‘World’s biggest open source company? Google’, in: CNET, available at: http://www.cnet.com/uk/news/worlds-biggest-open-source-company-google/ [accessed 2 May, 2014].

Bergson, H. (1935): The Two Sources of Morality and Religion, trans. R. A. Audra, C. Brereton, and W. H. Carter, New York: H. Holt and Company.

Bertalanffy, L. v. (1960): Problems of Life: An Evaluation of Modern Biological and Scientific Thought, New York: Harper and Brothers.

Carr, N. (2011): ‘Questioning Wikipedia’, in G. Lovink and N. Tkacz (eds.), Critical Point of View: A Wikipedia Reader, Amsterdam: Institute of Network Cultures, pp. 191-202.

Dardot, P. and Laval, C. (2014): The New Way of the World: On Neoliberal Society, Brooklyn, NY: Verso.

‘A Digital Humanities Manifesto’ (2008), available at: http://manifesto.humanities.ucla.edu/2008/12/15/digital-humanities-manifesto/ [accessed 2 May, 2014].

Foucault, M. (2008): The Birth of Biopolitics: Lectures at the Collège de France, 1978-1979, trans. Graham Burchell, Basingstoke: Palgrave Macmillan.

Hayek, F. A. v. (1944): The Road to Serfdom, London: Routledge.

Kelty, C. M. (2008): Two Bits: The Cultural Significance of Free Software, Durham: Duke University Press.

King, J. (2006): ‘Openness and Its Discontents’ in J. Dean, J. W. Anderson and G. Lovink (eds.), Reformatting Politics: Information Technology and Global Civil Society, New York: Routledge, pp. 43-54.

Popper, K. R. (1962a): The Open Society and Its Enemies (Vol. 1, 4th ed.), London: Routledge and Kegan Paul.

Popper, K. R. (1962b): The Open Society and Its Enemies (Vol. 2, 4th ed.), London: Routledge and Kegan Paul.

Raymond, E. S. (2000): ‘The Cathedral and the Bazaar’, available at: http://catb.org/~esr/writings/homesteading/ [accessed 2 May, 2014].

Russell, A. (2014): Open Standards and the Digital Age: History, Ideology and Networks, New York: Cambridge University Press.

Sennett, R. (2014): ‘The Open City’, available at: http://www.richardsennett.com/site/SENN/UploadedResources/The%20Open%20City.pdf [accessed 8 April, 2014].

Tkacz, N. (2012): ‘From Open Source to Open Government: A Critique of Open Politics’, in: Ephemera: Theory and Politics in Organization 12 (4), pp. 386-405.

As a concept and practice, open access has always been heavily debated: by open access advocates, but also by the wider academic community as part of the debate over the future of scholarly communication. From an initial subversive proposal (Harnad), open access has increasingly turned into accepted practice, promoted by governments, institutions and businesses alike. However, while growing in popularity, the struggle over its specific implementation in publishing and scholarly communication is ongoing and perhaps more urgent than ever.

Definition

Open access literature has been defined by Peter Suber, one of its greatest advocates, as "digital, online, free of charge, and free of most copyright and licensing restrictions" (Suber 2012: p. 4). From the early 1990s onwards, the open access movement – although the term open access was not yet used then – grew out of an initiative established by academics, librarians, managers and administrators. Some of the first freely available online journals were launched by academics during that time, including Stevan Harnad’s Psycoloquy (1989), Surfaces by Jean-Claude Guédon (1991) and Postmodern Culture by John Unsworth et al. (1990). Open access was initiated and developed within science, technology, engineering and mathematics (STEM), where it focused mainly on what is now defined as the Green Road to open access. Here, authors self-archive their research works submitted for peer review (preprints) or their final peer-reviewed versions (post-prints) in central, subject-based or institutional repositories. For instance, in 1991, Paul Ginsparg started the first free online scientific archive for physicists, arXiv.org, and in 2000, PubMed Central was launched, building on the life sciences and biomedical database MEDLINE. The other main (and complementary) route to open access, the Gold Road, focuses on publishing research works in open access journals, books or other types of literature (Guédon, 2004). For example, the Public Library of Science (PLoS), founded in 2000, is a non-profit open access scientific publisher aimed at creating a library of open access journals – such as PLoS Biology – that operate under an open content license.

Next to these different routes to open access, there are also different forms of open access. Making research more accessible by taking away price barriers is known as providing Gratis open access. If some or most permission or licensing barriers to a work are removed on top of that – making it more open and enabling its reuse for scholarly or commercial purposes – then Libre open access is provided. The main open access definitions, together known as the ‘BBB-definition’, were agreed upon in three seminal public statements based on meetings of the movement’s members in Budapest, Bethesda and Berlin. These definitions all combine Gratis and Libre open access, allowing both access and reuse of scholarly content as long as the author is properly attributed.

As the various forms and iterations of open access outlined above already exemplify, the open access movement has been divided in its views on what openness is and how we should go about achieving it. Nonetheless, it has been united in its mission to improve the conditions under which academic work can circulate. Even though new digital distribution formats and mechanisms increasingly offered opportunities to make research more widely accessible, open access advocates argued that the traditional publishing system was no longer able or willing to fulfil their communication needs. There was also the widespread feeling (also known as the taxpayers argument) that the public should not be paying twice for the same research: once to fund its conduct and then again to buy access to it from (commercial) publishers via libraries or institutional subscriptions. Open access can also be seen as a direct reaction against the ongoing commercialisation of research and the publishing industry. For instance, Harvie et al. have argued against the practices of so-called 'feral' publishers, targeting in particular the high profit margins and tax avoidance of Informa, which includes the Taylor & Francis and Routledge imprints (Harvie et al., 2012). Furthermore, as part of the 'Academic Spring', almost 15,000 academics have to date signed the Cost of Knowledge boycott petition to protest against Elsevier’s business practices, objecting, among other things, to its high journal subscription prices.

For several decades, journal subscription prices have been rising far above their average costs, faster than inflation and faster than mostly declining library budgets. This has triggered a pricing crisis for scholarly research, first in what we now know as the serials crisis, and subsequently in the monograph crisis, which predominantly hit the humanities. As Suber has argued, a pricing crisis also means an access crisis, as libraries are no longer able to afford all the research academics need (2012: pp. 29-30). These adverse conditions have also increasingly led libraries to cut spending on monographs in order to buy STEM journals instead, despite the latter's rising subscription costs. This drop in library demand for monographs has in turn led presses to produce smaller print runs and to focus more on marketable titles, which has been detrimental for those (mostly early-career) researchers who depend on book publication for tenure and promotion. Partly in response to this 'monograph crisis', a rising number of scholarly-, library- and/or university-press initiatives are experimenting more directly with making monographs available in open access, including scholar-led presses such as Open Humanities Press, and presses established by or working with libraries, such as Athabasca University’s AU Press.

Critical Debates

Genealogies of Openness

The history and rise of the idea of the 'open' that has influenced the development of the open access movement remains contested and indistinct. Open access can be seen as an offspring of the wider open source movement, but has also been influenced by longstanding practices and discourses of open science and open authorship. A lot of confusion exists about what openness is, what it means, and whether it is a goal in itself or a means to an end. Nathaniel Tkacz, for instance, in tracing the genealogy of openness back to Karl Popper's work (among others), argues that in the current proliferation and fetishisation of openness as an inscrutable political ideal and goal in itself, it merely functions as the positive antidote to an empty binary; namely, closedness, the closed society, or closed politics. Openness has become an uncontested, but empty objective, in its inscrutability ready to be taken up by a variety of different groups all claiming it as their own, from Google to The Open Knowledge Foundation. Openness, based on this Popperian genealogy, is thus mainly seen as a reaction against closed systems and sources; obfuscating, as Tkacz states, the closures that openness also necessarily implies (Tkacz, 2012: pp. 386, 399).

Other genealogies focus more on the diverse motivations, as well as critiques, that have informed openness historically, including from within the open access movement. Here, the focus is on the complexity of openness itself, and the way it cannot simply serve as one side of an ‘open-closed’ binary, since it has always already been closely enmeshed with closedness as well as with its other proposed antagonists. For instance, Christopher Kelty has argued that in the open source movement, openness is not opposed to closedness, but to the proprietary, which, he argues, is a "complicated mesh of the technical, the legal and the commercial" (Kelty, 2008: p. 143). This entangled development of openness also comes to the fore in recent history of science scholarship. Pamela Long, for instance, has shown how openness historically developed in connection with ideas and practices of secrecy, authorship and property rights, alongside the establishment of print and the printed scholarly book in the West (Long, 2001). Historical research into the development of early modern natural philosophy paints a complex picture, nuancing the opposition between open science and secretive technology and showing openness in science to be intricate and enmeshed.

These alternative genealogies of openness also shed a different light on the Mertonian ideal of open science (communism) that has accompanied our narratives of modern research, where as part of its practical historical development, openness and secrecy can be seen to have co-developed in changing conditions of power, patronage, and technological development. Nonetheless, the specific context in which the open access movement developed – related to developments in (digital) technology, the existing cultures of knowledge and unfavourable economic and material conditions – requires us to take into account the influence of both this longstanding narrative on 'open' scholarship, as well as the day-to-day practicalities of entangled secrecy and power relations.

Contested Terms

The meaning of openness as reflected in the terminology surrounding open access (green, gold, gratis, libre, etc.) continues to be a highly charged topic. For many advocates, the debate about the 'true' definition and 'correct' implementation of open access is of critical importance for the future of academia. The ongoing dispute over terminology is thus of a highly strategic character. For some, the free online availability of all scholarly research is the most important goal of open access; for others, access is only the beginning or a means to an end (re-use, data-mining, experimentation, etc.). Stevan Harnad, however, argues that more expansive and disputed forms of open access that can potentially challenge the integrity of a work, such as Libre open access, stand in the way of the mainstream acceptance of the project of universal access (Harnad, 2012b). On the other hand, mandating Green open access might ultimately mean the implementation of a watered-down version of open access, neither allowing re-use rights nor fundamentally challenging or changing the present (commercial) publishing system. However, the current practical implementation of a variant of Gold open access in the UK does not seem to offer much solace either. What has become known as The Finch Report (June 2012) – an independent study commissioned by the UK science minister David Willetts, examining how UK-funded research findings can be made more accessible – makes recommendations (which will inform the UK government’s open access policy) that include the further implementation of article processing charges (APCs) for the open access publishing of journals. This recommendation can be seen to maintain and favour the system of communication as currently set up, protecting the interests of established stakeholders, mainly commercial publishers: in this system, publishers’ profits will be sustained via APCs, whereas in, for instance, Green open access, depositing articles in repositories requires no charge. Harnad has thus already called The Finch Report a case of 'successful lobbying' from the side of the publishing industry (Harnad, 2012a).

Sustainable Business Models

The search for the right definition of open access has increasingly turned into a quest for the ultimate sustainable open access business model, based on the idea that the print-based subscription model (especially in the humanities) is no longer sustainable, and that the (long-term) sustainability of open access still needs to prove itself. However, despite this criticism, one could also argue that the print-based model has never been sustainable, and that specialised and book-based research in the humanities especially has never been marketable or self-sustaining and has always relied on some form of additional funding (Adema, 2010).

Christian Fuchs and Marisol Sandoval thus argue for the adoption of the so-called Diamond (also known as Platinum) model, a variant of the Gold open access model that does not charge any fee to authors or readers, nor to their institutions. In this model, Fuchs and Sandoval state, research is published in open access 'without the commodity logic', the argument being that any form of for-profit open access will create new academic inequalities of access between the haves and have-nots (Fuchs and Sandoval, 2013: p. 439). Here the focus is on the necessity of public funding, and on the idea that open access business models need to be subsidised. The sustainability or even profitability of scholarly publishing in general is questioned in this context, where the argument is that publishing costs should be an integral part of the costs of research.

Neoliberal Rhetoric

The prospects of open access in many ways depend on how open access has been and is currently perceived. As becomes clear from the above, openness can be seen as a floating signifier (Laclau, 2005: pp. 129-155), a concept without a fixed meaning, easily adopted by different political ideologies. For some, it is exactly this 'openness' of open access that becomes problematic. Where initially the open access and open source movements were heralded by progressive scholars and thinkers as a critique of the commodification of information and knowledge (Berry, 2008: p. 39), openness is increasingly seen as a concept and a practice that has been and can be applied in various political contexts – most notably as part of a neoliberal rhetoric – and that can be connected to ideas of transparency and efficiency heralded by business and government. This has been related to the 'openness' of the concept and its multiplicity of adaptations (Eve, 2013; Tkacz, 2012). For instance, as part of neoliberal rhetoric, open access is seen as supporting a competitive economy by making the flow of information more efficient, transparent and cost-effective, and by making research more accessible to more people. This makes it easy for knowledge, as a form of capital, to be taken up by businesses for commercial re-use, stimulating economic innovation. In this way, the research process and its results can be efficiently monitored and held accountable as measurable outputs (Adema, 2010; Hall, 2008; Houghton, 2009).

Nonetheless, as Gary Hall argues, there is nothing intrinsically political or democratic about open access (2008: p. 197). Motivations behind open access are very diverse, and motives that focus on democratic principles go hand in hand with neoliberal arguments concerning the benefits of open access for the knowledge economy. Indeed, open access, openness and open science have been theorised and practiced in radically different, alternative, critical and affirmative ways within academia, offering a potential counterweight to the predominance of neoliberal forms of open access as well as providing new ways of thinking about politics (Adema and Hall, 2013; Eve, 2013; Hall, 2008; Holmwood, 2013b).

Futures: Radical Open Access

Although it is the openness of the concept of open access that brings with it a risk of uncertainty about its (future) adaptations, this openness can also be seen as the source of its potential political power. For example, the contingent and contextual forms of what can be called radical open access focus on experimentation and the exploration of new institutions and practices. This approach towards openness, exploring new formats and stimulating sharing and re-use, can be seen as a radical alternative to and critique of the business ethics underlying innovations in the knowledge economy, questioning the system of (commercial) academic publishing as it is currently set up. Forms of radical open access can also be seen to offer an affirmative engagement with open access by establishing practical and experimental (and in many cases also scholar-led) alternatives to the present publishing system. Media Commons and Media Commons Press are examples of scholar-led initiatives experimenting with new forms of publishing and collaboration. For many of these initiatives, open access also forms the starting point for a wider critique and interrogation of our institutions, practices, and notions of authorship, the book, and publication. For example, Open Humanities Press’s Living Books About Life series is open on a read/write basis for continuing collaborative processes of writing, editing, remixing and commenting, challenging the physical and conceptual limitations of the codex format and rethinking the book as an open, living and collaborative process.

The potential of radical open access thus lies in how it envisions open access as an ongoing critical project, embracing its own inconsistencies and battling with its own conceptions of openness. Following this vision, open access should be understood neither as a homogeneous project striving to become a dominant model, nor as a concept with a prescribed meaning or ideology, but as a project with an unknown outcome engaged in a continuous series of critical struggles. And this is exactly why we cannot pin down ‘open’ (nor radical open access) as a concept, but need to leave it open: open to otherness and difference, and open to adapting to different circumstances.

References

Adema, J. (2010): Open Access Business Models for Books in the Humanities and Social Sciences: An Overview of Initiatives and Experiments (OAPEN Project Report), Amsterdam.

Adema, J. and Hall, G. (2013): 'The Political Nature of the Book: On Artists’ Books and Radical Open Access', in: New Formations 78 (1), pp. 138–156.

Berry, D. M. (2008): Copy, Rip, Burn: The Politics of Copyleft and Open Source, London: Pluto Press.

Eve, M. P. (2013): 'Open Access, "Neoliberalism", "Impact" and the Privatisation of Knowledge', Dr. Martin Paul Eve blog, 10 March, available at: https://www.martineve.com/2013/03/10/open-access-neoliberalism-impact-and-the-privatisation-of-knowledge/

Fuchs, C. and Sandoval, M. (2013): 'The Diamond Model of Open Access Publishing: Why Policy Makers, Scholars, Universities, Libraries, Labour Unions and the Publishing World Need to Take Non-Commercial, Non-Profit Open Access Serious', in: tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 11 (2), pp. 428–443.

Guédon, J.C. (2004): 'The “Green” and “Gold” Roads to Open Access: The Case for Mixing and Matching', in: Serials Review 30 (4), pp. 315–328.

Hall, G. (2008): Digitize this Book!: The Politics of New Media, or Why We Need Open Access Now, Minneapolis: University of Minnesota Press.

Harnad, S. (2012a): 'Finch Report, a Trojan Horse, Serves Publishing Industry Interests Instead of UK Research Interests', in: Open Access Archivangelism, June 19, available at: http://openaccess.eprints.org/index.php?/archives/904-Finch-Report,-a-Trojan-Horse,-Serves-Publishing-Industry-Interests-Instead-of-UK-Research-Interests.html

Harnad, S. (2012b): 'Open Access: Gratis and Libre', in: Open Access Archivangelism, May 3, available at: http://openaccess.eprints.org/index.php?/archives/885-Open-Access-Gratis-and-Libre.html

Holmwood, J. (2013b): 'Markets versus Dialogue: The Debate over Open Access Ignores Competing Philosophies of Openness', in: Impact of Social Sciences, 21 October, available at: http://blogs.lse.ac.uk/impactofsocialsciences/2013/10/21/markets-versus-dialogue/

Houghton, J., Rasmussen, B. and Sheehan, P. (2009): Economic Implications of Alternative Scholarly Publishing Models: Exploring the Costs and Benefits. A Report to the Joint Information Systems Committee, JISC.

Kelty, C. M. (2008): Two Bits: The Cultural Significance of Free Software, Durham: Duke University Press.

Laclau, E. (2005): On Populist Reason, London: Verso.

Long, P. O. (2001): Openness, Secrecy, Authorship: Technical Arts and the Culture of Knowledge from Antiquity to the Renaissance, Baltimore: Johns Hopkins University Press.

Suber, P. (2012): Open Access, Cambridge, MA: MIT Press.

Tkacz, N. (2012): 'From Open Source to Open Government: A Critique of Open Politics', in: Ephemera: Theory and Politics in Organization 12 (4), pp. 386–405.

p

The discussion over the term 'post-media' has spawned – so far – three different conceptions (cf. Broeckmann 2013). First, there is the aesthetic notion of the 'post-medium condition' in contemporary art (e.g. Rosalind Krauss, Nicolas Bourriaud); second, the technological understanding of digital media being 'post-medium' (e.g. Peter Weibel, Lev Manovich, Siegfried Zielinski); and third, the rather political reading of 'post-mass media' (e.g. Félix Guattari, Howard Slater, Clemens Apprich). While the first two positions differ from each other mainly in their interpretation of 'post-mediality' being either an aesthetic (Krauss) or technological (Weibel) phenomenon, the notion of 'post-media' coined by the French philosopher and psychiatrist Félix Guattari has a much more political connotation.

The following keyword seeks to maintain the political relevance of the term post-media in order to rearticulate it for current debates. It was Guattari who, almost three decades ago, foresaw the emergence of a post-media era and the resulting transformations related to the digitisation of our everyday culture. As a starting point and first orientation, we can link Guattari’s concept of 'post-mediality' to at least three characteristics: first, it is non-deterministic (in opposition to the old, deterministic structures of mass-media culture); second, it is collectivistic (it goes beyond the individualistic approach of mere bricolage and amateur culture); and third, it is fundamentally relational (in contrast to the linear understanding of communication in conventional information theory).

Post-Media Era

The term post-media was coined by Guattari, who used it in the early 1990s to describe a transformation of classical mass-media structures towards new collective assemblages of enunciation. In this sense, Guattari nourished the hope that collective forms of articulation could replace the old pacifying media structures. As opposed to mass media, which tend to reproduce a consensual (i.e. normative) subjectivity, post-media – according to Guattari – enable the creation of new modes of subjectivation. However, this new form of computer-aided subjectivity is not the simple result of technological change, but rather a manifestation of new forms of media appropriation by a multitude of 'subject-groups' (Guattari 1989, p. 144). In accordance with a non-deterministic conception of media, the idea of post-mediality underlines the fact that the spur of change resides in social practices, not in the technological structure itself.

A new 'post-media era' (Guattari 2012), therefore, yields an interactive use of machines of information, communication, intelligence, art and culture. This pluralistic approach not only challenges the idea of specialisation; Guattari saw it as an immanent process of becoming, which should itself be experienced as a process of greater freedom. In this sense, post-media are not limited solely to digital technology, but include all formats of old and new media in order to produce new forms of knowledge. The collective appropriation of media technologies is believed to transform mass-media power in order to overcome contemporary subjectivity. Linked to this idea is the question of whether and how self-organised networks can preserve their autonomy against mass-media structures. In particular, the autonomous radio stations of the 1970s and 1980s in Europe (e.g. Radio Alice in Bologna) represented for Guattari an example of how collective assemblages of enunciation can be produced and preserved (cf. Goddard).

The notion of post-media has recently been applied to digital media technologies all the more as they have extended far into our everyday life. The massive proliferation of digital devices (laptops, cameras, mobile phones, smart appliances, etc.) has led to new modes of media production, distribution and storage, which, in turn, enable a recombination of social practices. Such a (re-)articulation of the social, which aims to go beyond a 'postmodern impasse' (Guattari 1996a), refers to the fundamental openness of any social order. From a post-media perspective, the critical potential of digital technologies lies in their ability to change social processes of participation and organisation, as well as to allow new forms of intervention. However, the goal of a post-media approach is not so much to build a counterpart to conventional mass media, but rather to create one’s own media and to position oneself within the social space.

Post-Media Theory

While traditional information theory – according to Guattari – has contributed "to masking the importance of the enunciative dimensions of communication" (Guattari 1996b, p. 266) by focusing merely on the linear connection between sender and receiver, the post-media approach highlights the relational character of communication. In this sense, messages are not transmitted alone; rather, their meaning depends on the interpretative framework of the participants involved in the communication process. Such a non-deterministic theory of media tries to free itself from the idea that social change (positive or negative) can be directly derived from technological structures. Instead of a deep suspicion towards the manipulative power of the media, or a wide-eyed hope in its emancipatory potential, post-media theory thinks of media technology as a tool (among others) to initiate social change. It is, therefore, no longer a question of whether the media by the nature of their construction are manipulative or emancipatory, but of to what extent media can be understood as tools of collective enunciation (cf. Apprich 2013).

In accordance with this approach, tactical media in the 1990s developed a new form of media activism that was based on the idea of DIY-media. Hence, it was no longer only about the reflection on media conditions, but rather about the co-creation of these conditions. In this context, Critical Art Ensemble’s concept of a "liberating collective arrangement of enunciation" (CAE 2001, p. 6) refers to the work of Guattari and the media ecological debate of the 1990s. Media practices, in this perspective, take on greater significance, because what function they finally fulfil is never determined a priori, but arises from their specific application. Moreover, as media transport social knowledge (in terms of images, values, categories, classifications and lifestyles), they contribute to the construction of our common sense (cf. Hall 2006).

The post-media approach has been criticised for being mostly vague about its actual goals. This is due in part to the fact that the passages where Guattari speaks about the concept of post-mediality are rather rare in his writings. It was seen as yet another postmodern term in the wake of poststructuralist thinking; nevertheless, the concept itself does not so much imply a simple temporal succession as a reassembling of media-based practices. These post-media practices do not describe a quantitative phenomenon that could be measured by scientific means, but rather a qualitative dimension that is characterised by a heterogeneity of modes, a diversity of implementations and contingent outcomes. As a consequence, some have complained that such an abstract theory jars with the practical needs of media activists and, therefore, remains an elitist project. On this view, by using Guattari’s post-media discourse, intellectuals merely produce a 'theory-art' that is unable to turn its ideas into practice (cf. Barbrook 2009).

Post-Media Practices

It could be assumed that post-media practices serve not to contribute to the public good, but rather to distinguish their practitioners from the rest of society. This might have been true of an elitist understanding of media activism in the 1990s, when digitally empowered activists lamented the massification of 'their' Internet. However, Guattari was not simply referring to the individual appropriation of media technologies, but underlining the importance of collective forms of enunciation: "it will all depend, ultimately, on the capacity of groups of people to take hold of them, and apply them to appropriate ends" (Guattari 1996b, p. 263). Key to his argumentation is not the mere utilisation of media technologies by these groups, but their activation through an autonomous use of shared media infrastructures. Post-media operators, therefore, are supposed to enable new forms of subjectivation that are self-instituting (cf. Slater 1998).

In this respect, there has been a lot of buzz around Guattari’s interest in the French Minitel system, which led to the idea that he was favourably inclined towards new media technologies. However, his partly enthusiastic statements about early net cultures were influenced less by the fact that the internet was spreading as a worldwide communication system in the early 1990s than by his belief that new collective forms of enunciation might arise from the (re-)appropriation of network technologies. Post-media operators, equipped with all sorts of media technologies, develop new practices in the arts, the political field, media activism, alternative economies and net cultures. These micropolitics aim to overcome dominant modes of subjectivation in order to bring about a post-media era – a new horizon for artistic, cultural and political resistance to the subsumption of life by an "integrated world capitalism" (Guattari 1989, p. 137).

This raises the question of political agency in contemporary capitalist societies, which are increasingly shaped by information and communication technologies. Thus, new forms of knowledge production are taking centre stage in the current debate around emerging media cultures. In this sense, Guattari’s considerations may also be seen as an anticipation of the discourse around digital natives: a new set of users whose 'native' relationship with digital technologies has created new modes of learning, as well as new forms of production. Such singularly generated knowledge is used to challenge hierarchical structures and to open up new possibilities of enunciation. To be able to do so, a post-media strategy is required that considers media neither as an external structure in terms of the manipulation or emancipation paradigm, nor as a mere means in the struggle for political objectives, but as tools to shape our own everyday media life.

The "Guattari Effect" and the Digital Humanities

Guattari’s notion of 'post-mediality' is not only a vital concept for discussing political agency in times of ubiquitous media, but is itself the result of the philosopher’s interventions into electronic media spheres (e.g. his experiments with Fréquence Libre in Paris). Similar to the controversial concept of precarity, the term post-media emerged at the interface of activist and academic discourses. Hence, the "Guattari Effect" (cf. Alliez and Goffey 2011) in relation to the Digital Humanities is twofold: firstly, it enables the theoretico-practical exploration of collective assemblages of enunciation as they have recently been expressed in social movements worldwide (from Tahrir Square via Puerta del Sol to Zuccotti Park). Secondly, we are witnessing a productive impact of Guattari’s theoretical work on academic discourses (after a long absence from the canons of universities at large). Furthermore, the 'post-media approach' takes into consideration both the social and the technological transformations sparked by the digitisation of our everyday culture. This promotes engagements with current debates that traverse different disciplines as well as the boundaries between theoretical and practical fields. Beyond the existing dichotomy between a technological-media a priori (e.g. Marshall McLuhan, Friedrich Kittler) and an anthropocentric hermeneutics (e.g. Jürgen Habermas), a new understanding of media ecologies has emerged in the field of digital humanities (cf. Hörl 2011). In order to keep this discourse productive, also in a political sense, we should recall the work of Guattari and his attempt to actively build new assemblages of enunciation – be they human, non-human, or both.

References

Alliez, É. and Goffey, A. (eds.) (2011): The Guattari Effect, London: Continuum.

Apprich, C. (2013): ‘Remaking Media Practices – From Tactical Media to Post-Media’, in Mute Magazine, available at: http://www.metamute.org/editorial/lab/remaking-media-practices-%E2%80%93-tactical-media-to-post-media

Barbrook, R. (2009): ‘The Holy Fools’, in: J. Berry Slater and P. van Mourik Broekman (eds.): Proud to be Flesh: A Mute Magazine Anthology of Cultural Politics after the Net. London: Mute Publishing, pp. 223-236.

Broeckmann, A. (2013): ‘“Postmedia” Discourses: A Working Paper’, in mikro-berlin Blog, available at: http://www.mikro.in-berlin.de/wiki/tiki-index.php?page=Postmedia+Discourses

Critical Art Ensemble (2001): Digital Resistance: Explorations in Tactical Media, Brooklyn: Autonomedia.

Goddard, M.: ‘Felix and Alice in Wonderland: The Encounter between Guattari and Berardi and the Post-Media Era’, in: generation online, available at: http://generation-online.org/p/fpbifo1.html

Guattari, F. (1989): ‘The Three Ecologies’, in: new formations (8), pp. 131-147.

_____. (1996a): ‘The Postmodern Impasse’, in: G. Genosko (ed.) The Guattari Reader, Oxford: Blackwell, pp. 109-113.

_____. (1996b): ‘Remaking Social Practices’, in: G. Genosko (ed.) The Guattari Reader, Oxford: Blackwell, pp. 262-272.

_____. (2012): ‘Towards a Post-Media Era’, in Mute Magazine, available at: http://www.metamute.org/editorial/lab/towards-post-media-era

Hall, S. (1994): ‘The Rediscovery of Ideology: Return of the Repressed in Media Studies’, in: J. Storey (ed.): Cultural Theory and Popular Culture: A Reader, Essex: Pearson, pp. 124-155.

Hörl, E. (ed.) (2011): Die technologische Bedingung. Beiträge zur Beschreibung der technischen Welt, Berlin: Suhrkamp.

Slater, H. (1998): ‘Post-Media Operators: Sovereign and Vague’, in: subsol, available at: http://subsol.c3.hu/subsol_2/contributors0/slatertext.html

Digitalisation has repercussions for the organisation, theorisation and politicisation of labour. It touches upon contemporary conditions and forms of work, intensifies global interdependences, and advances transnational debates and political initiatives. Precarity is a keyword when it comes to debating these changes within the field of work. Generally speaking, it suggests an increase of collective and individual insecurities and an intensification of vulnerabilities.

Definition

Precarity and precariousness are disputed notions in different fields. The terms first emerged within activist debates, especially in the context of the EuroMayDay movement in Milan, Italy in 2001. During the following years, the critique formulated by this movement, including its symbolic language and tools, was adopted by other activists throughout urban Western Europe (cf. http://five.fibreculturejournal.org/). The academic rise of related terms – such as notions of precariousness, precarisation and precarious work – complemented this political picture and similarly contained a strong European focus, but was also informed by different theoretical traditions. Interpretations and analyses by autonomous Marxists, for instance, are not necessarily compatible with those provided by sociologists of labour, and all intersect with and have been challenged by feminist and postcolonial approaches. Nevertheless, within these contexts, four layers of meaning can be differentiated:

  • First, precarity provides a label for the specific experience of the younger generation in Europe concerning their working and living conditions. The collective Chainworkers organised the first MayDay parade on May 1st 2001. Taking up the symbolic date of labour-related protests, they introduced a new analysis to this tradition: precarious work and life were placed at the centre of political mobilisation, while tactical media and online communication played an important role in the transnationalisation of the movement. In this activist context, precarity was used to tackle changing working conditions, especially for the younger generation. Here, work was understood as being shaped by temporary employment, labour leasing and freelancing as much as by low wages. Consequently, life becomes more precarious as well, even more so for those who are exposed not only to capitalist exploitation and marginalisation but also to racism, imperial regimes and patriarchy.
  • A second layer of meaning unfolds when precarity is considered not as a condition, but as a structural process. This process is linked to fundamental changes in working and living conditions within Western European societies. It is seen as politically induced and strongly linked to systemic changes of capitalism itself. From this perspective, precarity is framed as a collective, but individualising experience that is embedded in post-Fordist, neoliberal capitalism. The structural level is placed at the centre of attention. From the perspective of governmentality studies, Isabell Lorey (2012) conceptualised precarity as a mode of governing and suggested using the term precarisation.
  • Third, this governmental reading of precarity is linked to discussions on precarity as a specific mode of subjectivation which generates new modes of subjectivity (Kuster and Lorenz 2007). These are shaped by fundamental insecurities, leading to a lack of prospects or of a sense of one’s own future. Thus, precarity feeds back on the agency of subjects.
  • Finally, a fourth perspective tackles precarity as a subject-related state of being. It is, therefore, not considered as a question of social, political and economic changes, but as a condition of subjectivity. This interpretation reflects upon precarity – or rather, precariousness – as an ontological condition of human beings. Thus, as Judith Butler (2009) has argued, it provides a starting point for an ethics of relationality that systematically reconsiders the modern notion of the subject itself.

Theoretical Debates and Critique

Within political activism as much as in the social sciences, philosophy and cultural studies, there is not one undisputed, singular interpretation of precarity, but many competing definitions. The major blind spot within the specialised discourse on precarity – reaching from activism to academia and back again – lies in the remarkable ignorance of each other’s findings and conceptualisations. Precarity, in this respect, has to be treated as a contested concept with different layers of meaning, but also with different critical, if not affirmative, connotations. While the discussion initiated at the beginning of the 2000s carried strong critical intentions, its diffusion into public discourse (in German-speaking countries around 2006) clearly reduced its political dimensions.

Autonomous Marxism

Certainly, autonomous Marxism makes the clearest point of interlacing precarity and the digital, especially through concepts such as immaterial labour and cognitive capitalism. This line of thought focuses on new modes of capitalist production centred mostly around knowledge. The concept of immaterial labour hints at changing modes of production (through 'responsibilisation' and an enhanced attention to the subjectivity of workers) as much as at all kinds of producing and processing knowledge. Placing themselves in the tradition of the Italian movement of Operaismo, activist researchers have also taken up its combined methods of political mobilisation and knowledge production: through militant research, information about precarity/precariousness is gathered, widely documented and spread through digital media (cf. Precarias a la deriva). However, it should be noted that the concept of immaterial labour has also been criticised for neglecting, if not disregarding entirely, the material dimensions of cognitive capitalism. Firstly, the reproduction of labour forces needs to be considered fundamental to cognitive capitalism, just as it was to previous forms of capitalism, as feminist researchers and activists have pointed out (cf. Fantone 2007). Secondly, cognitive capitalism and immaterial labour depend on (material) infrastructure, the production of which has been outsourced mainly to the Global South. Although the contemporary division of labour strongly relies upon the digitalisation of production flows, it is additionally organised along Taylorist modes of production. Thus, arguing exclusively for the idea of immaterial and cognitive labour runs the risk of obscuring the exploitation and domination behind the precarious glamour of creative labour and knowledge work in digitalised societies.

Sociology of Labour

The French sociologist Robert Castel (2003) has influentially discussed precarisation (in the sense of growing precarity) as the new social question for Western European societies. Indeed, his history of labour relations and the social question strongly informed academic research on precarity, especially in Germany. This strand of research is based on empirical findings and thus systematically documents changes within the field of work. One quite widely explored area of research is the working conditions of the so-called ‘creative class’, whose rise has been strongly connected to digital technologies. As generally well-trained freelance workers, they experience precarity from a relatively privileged position. Still, the question of how these changes are linked to, enabled by and/or pushed forward through digital systems remains a rewarding field for further research.

Feminist and Postcolonial Queerings

Besides offering systematic theorisations of precarity/precariousness (cf. Judith Butler, Isabell Lorey and others), feminist and postcolonial interventions contribute at least two fundamental lines of critique to the debates on precariousness, both of which can prove crucial to the digital humanities. First, feminist research has long criticised the androcentric notions of labour that frequently dominate mainstream research. Introducing a broader and more complex understanding of work opens up a path for analysing the character of labour within digital settings to its full extent: this might encompass not only paid and contracted work, but also affective labour, moments of reproductive logics, free labour and the whole new range of un/remunerated, but precarious work (observable, for instance, in the context of crowdsourcing). Second, feminist and postcolonial theories offer highly complex understandings of domination and power relations, not only in the context of precarisation, but applicable across a wide range of social and political contexts. This knowledge of the interdependences of systems and categories of oppression provides a useful toolkit for critically assessing precarity in relation to digital infrastructures. It enables us to trace and contextualise the varieties of precarity within logics of discrimination and privilege. Neither experiences of precarity nor precarious subjectivities can be understood by themselves; they need to be analysed within societal dynamics of inequality in general. Feminist researchers like Precarias a la deriva (2004) emphasise the challenge of engaging collectively without losing sight of differences: each one’s precarity might be embedded in the same societal and economic dynamics, but can still turn out to be fundamentally different depending on each one’s social position within his_her society.

Public Discourse

Debates on precarity also circulate throughout a broader public discourse, i.e. mainstream media. Critical interpretations tend to be transported and mediated most successfully through new media. Indeed, traditional media channels generally marginalise critical interventions, especially when articulated by activists (Mattoni 2008). Public discourse, especially as mediated through mainstream channels, negotiates precarity/precariousness in extremely diverse and disputed ways. It tends to co-opt critical notions and feed them into a hegemonic framework that does not challenge the status quo (Freudenschuss 2013). Therefore, media – including digital media – prove to be one core arena for social struggles over precarity/precariousness.

Futures

Two major questions determine the progressive use of precarity as a political notion: who is considered a precarious subject? And what implications does precarity have for society as a whole? These questions might serve as heuristic tools to explore the working and living conditions of digital labourers (e.g. in crowdsourcing projects like Mechanical Turk). They support the search for, and reflection on, the ability to organise collectively through the web. Moreover, digital devices are central to the organisation of precarious work beyond the virtual realm: they allow global care chains to function by enabling long-distance caring for family members, and they offer a virtualisation of proximity in place of commuting, to mention just two examples.

Reclaiming the notion of precarity for the digital humanities, in particular, demands a critical assessment of these questions. Both underline the potential that the notion of precarity carries for linking the material and the immaterial, freedom and oppression – in short, the ambivalence of today’s societal changes. Nevertheless, this ambivalence and its attendant dynamics can only be fully explored by pushing the notion of precarity/precariousness into new contexts and understandings beyond the borders of nation states and the contemporary divisions of labour.

References

Butler, J. (2009): Frames of War: When Is Life Grievable?, New York: Verso.

Castel, R. (2003): From Manual Workers to Wage Laborers. Transformation of the Social Question, trans. Richard Boyd, New Brunswick: Transaction Publishers.

Fantone, L. (2007): 'Precarious Changes: Gender and Generational Politics in Contemporary Italy.' In: Feminist Review (87), pp. 5-20.

Freudenschuss, M. (2013): Prekär ist wer? Der Prekarisierungsdiskurs als Arena sozialer Kämpfe, Münster: Westfälisches Dampfboot.

Kuster, B. and Lorenz, R. (eds.) (2007): Sexuell Arbeiten: Eine queere Perspektive auf Arbeit und prekäres Leben, Berlin: b_books.

Lorey, I. (2012): Die Regierung der Prekären, Wien: Turia + Kant.

Mattoni, A. (2008): 'Serpica Naro and the Others. The Media Sociali Experience in Italian Struggles Against Precarity.' In: PORTAL Journal of Multidisciplinary International Studies 5(2), pp. 1-24.