Noumenal Data: Software Takes Command (Review Essay)


Lev Manovich’s latest book, Software Takes Command, serves as a follow-up, an extension and an updating of his previous book-length study The Language of New Media (2001). Like Language, Software Takes Command is impeccably organized and thorough, meant to be exhaustive and complete in its typology of the practices of new media production and consumption, from taking an Instagram pic of your boyfriend on holiday to the most advanced modes of graphic design, film and speculative architecture. As the 2001 work sought to map the specific grammar of new media (distinct, Manovich argued, from, for example, experimental film), the present text seeks to present a taxonomy of software-based media authoring, editing and accessing techniques. Manovich’s claim is that this means virtually all artistic production outside of fine art traditions of craft and easel painting: in a sense, we’re all new media artists now. The corollary to this assertion of the software substratum for media is that to think about media at all is necessarily to engage in thought about the nature of software; as he states at the very beginning of the book: “What electricity and the combustion engine were to the early twentieth century, software is to the early twenty-first century” (2).[1]

In order to substantiate this claim, Manovich looks at the history of the modern personal computer (focusing on the work of Alan Kay and his conceptualization of the computer as a machine capable of simulating every other medium), the unique techniques that software-based media applications use in their simulation of media, and the cultural and social effects of the “softwarization” of media (developed through close readings of what one does when one uses Photoshop and After Effects in particular). Manovich lays out the thematics clearly:

Between the early 1990s and the middle of the 2000s, media software has replaced most of the other media technologies that emerged in the nineteenth and twentieth centuries. Most contemporary media is created and accessed via cultural software – and yet, surprisingly, few people know about its history. What was the thinking and motivation of people [like Alan Kay and his colleagues] who between 1960 and the late 1970s created the concepts and practical techniques that underlie today’s cultural software? How does this shift to software-based production methods in the 1990s change our concepts of “media”? How have the interfaces and the tools of content development recharged and continued to shape the aesthetic and visual languages we see in contemporary design? (43)

The answer is provided throughout the book, which operates holographically, that is, its theses are distributed evenly throughout the text by way of introductions, summaries and conclusions, which, it must be pointed out, leads to occasional laboriousness and inertia.[2] In brief, Manovich demonstrates that the computer “metamedium” (i.e. that which is capable of absorbing the techniques, processes and languages of other media through simulation and augmentation) means that the specificity of particular media – the unique properties of film, video or design, for example, that were the characteristic field of exploration for the modernist avant-garde – are not properties of the media themselves, but of the software, the “interfaces, the tools and the techniques they make possible for accessing, navigating, creating, modifying, publishing and sharing media documents” (336). This transfer of media specificity from the ontic substrate of the medium in question to its software occurs at the point of access: the properties of a digital photograph are, Manovich argues, significantly different depending on which application is used to access it (e.g. Windows Preview, Instagram or Photoshop). This leads Manovich to argue two points about media: that media have become increasingly similar, sharing much the same language, whilst retaining some relative degree of specificity (due to the presence of some techniques that are specific to their media, e.g. “word count” or “blur”), and that media, whether a film, an audio clip, an image or a three-dimensional graphic, can be defined as the intersection of a data structure and a set of algorithms which operate on that data structure.

These latter two points have important consequences, the most important of which is the principle of interoperability. Many techniques for accessing, modifying, etc., media data are “medium-independent”; they function in a similar manner whether they are run on text or image (e.g. zoom, search, filter), although they will use different algorithms depending on the data structure on which the techniques are used. This principle of interoperability allows Manovich to postulate the significance of media hybrids in the universe of softwarized media, in which “media techniques start acting like species within a common ecology – in this case a shared software environment. Once ‘released’ into this environment, they start interacting, mutating and making hybrids” (164), a condition that Manovich will call “deep remixability.”[3] This condition of hybridization/deep remixability is one in which media are not merely collaged/montaged contiguously to one another – as he argues is the case in multimedia work – but in which different media come to speak a similar language, that is, to develop a metalanguage that is only made possible by the software-based computer metamedium: “If we define an artistic language as a patterned use of a selected number of a subset of techniques available in a given medium, a metalanguage is a patterned use of the subset of all techniques available in a computer metamedium” (276). This metalanguage should not be confused with metalanguage as the term is used in other fields (linguistics, psychoanalysis); here it is simply the general set of software languages, to which Manovich posits neither internal nor external limit.
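Manovich’s formula – a medium as a data structure plus the algorithms that operate on it, with medium-independent techniques dispatching to different algorithms per structure – can be glossed with a minimal sketch. This is our own illustrative construction, not code from the book; all names here are hypothetical:

```python
# Illustrative sketch: one "technique" (search), two algorithms,
# chosen by the data structure the technique operates on.

def search_text(document, query):
    """Substring search over a text data structure (a character sequence)."""
    positions, start = [], 0
    while (i := document.find(query, start)) != -1:
        positions.append(i)
        start = i + 1
    return positions

def search_image(pixels, value):
    """Exact-value search over an image data structure (a 2D pixel grid)."""
    return [(r, c) for r, row in enumerate(pixels)
            for c, px in enumerate(row) if px == value]

def search(media, query):
    """The medium-independent 'technique': one name, many algorithms."""
    if isinstance(media, str):
        return search_text(media, query)
    return search_image(media, query)

print(search("media software media", "media"))  # character offsets in text
print(search([[0, 255], [255, 0]], 255))        # pixel coordinates in an image
```

The user-facing “technique” (search) stays constant while the algorithm varies with the data structure – text yields character offsets, an image yields pixel coordinates – which is precisely the interoperability Manovich describes.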

It is at this point that it is worth stepping back and asking a few fundamental questions about the sort of work that Software Takes Command actually is. As with The Language of New Media, Manovich’s overt purpose here is descriptive: this is what people working in film, sound and graphic design are actually doing now. (Hence the careful analysis of the details of such applications as Adobe Photoshop and After Effects down to the level of workflow and what it is one does when one uses a particular filter or adds a layer to a still or moving image.) This explains Manovich’s characteristic taxonomization of software techniques and applications. However, there is a polemical note struck throughout the text, in which Manovich indicates that our thinking about how contemporary media operates has not caught up with the actual practice of contemporary media artists, designers and practitioners. (This is a point that we will be addressing presently.) The larger question becomes: for whom is Software Takes Command intended? On the last few pages, Manovich sends an exhortatory message to those “media practitioners – professionals and students creating software applications, tools, graphic designs, web designs, motion graphics, animations, films, space designs, architecture, objects, devices and digital art” (340-341) creating and developing new modes of media through software, noting that it is they to whom Manovich directs not only Software Takes Command, but all of his work.[4] Certainly, this would explain a good deal about this text, particularly the strange moments of defensiveness that attend Manovich’s explanations as to his use of Photoshop and After Effects – that is, commercially available applications – as opposed to open access and open source software.[5] At the same time, there is the question of the aforementioned laboriousness: the introduction – itself fifty pages of a 340-page book! – devotes pages to listing off the various types of media software applications arranged by type and function, with brief descriptions of what these applications do, along with some fairly interesting discussions of their development – something with which professional media designers would surely be familiar.

Similarly, there is the question of what we referred to in our essay on the New Aesthetic as the “political aporia of the creative classes.”[6] This is the question of capitalism and software studies. Manovich at times seems to admit that this is a necessary conjunction:

All these software mutations and new species of software techniques are deeply social – they do not simply come from individual minds or from some ‘essential’ properties of a digital computer or a computer network. They come from software developed by groups of people, marketed to a large number of users and then constantly refined and expanded to stay competitive in relation to other products in the same market categories. … They are the result of the intellectual ideas conceived by pioneers working in larger labs, the actual products created by software companies and open source communities, the cultural and social products set up when many people and companies start using it and software market forces and constraints. (149)

On the one hand, Manovich uses this to evade the accusation of technological determinism through the assertion that there are, in fact, human actors in this drama who make decisions according to the social world they inhabit rather than following some inexorable logic of the machine. On the other hand, this also allows Manovich to acknowledge that market forces are present in the software ecosystem (a term that we will highlight for further reflection later) while simultaneously ushering them off the stage. Thus Manovich will admit that capitalist business interests seem to be more conducive to software innovation,[7] that some of the applications were developed for less “cultural,” less savoury purposes,[8] and that the innovations promised by the software industries contain their own (profit-driven) internal limits,[9] but will brush these points – which are nothing if not germane to the development of what can be polemically asserted as the hegemony of standardized software – to the side, as in this telling moment:

Of course we should not forget that the practices of computer programming are embedded within the economic and social structures of the software and consumer electronics industries. These structures impose their own set of constraints and prerogatives on the implementation of hardware and software controls, options and preferences (all these terms are just different manifestations of software parameters). Looking at the history of media applications and media electronic devices, we can observe a number of trends. Firstly, the number of options in media software tools and devices marketed to professionals gradually increases. … Secondly, features originally made available in professional products later become available in consumer-level products. However, to preserve the products’ identities and justify price differences between different products, a significant difference in feature sets is continuously maintained between the two types of products, with professional software and equipment having more tools and parameters than their consumer equivalents. …
All this is obvious – however, there is also a third trend, more interesting for media theory. Following the paradigm already established by the end of the nineteenth century when the Kodak company started to market its cameras accompanied by a slogan “you push the button, we do the rest” (1982), contemporary software devices aimed at consumers significantly automate media capture and editing in comparison to their professional counterparts. (223-224)

This, we must point out, is something of a reflex move throughout Software Takes Command: there is an admission that capitalist interests often guide software development as much as the creativity of the people actually involved in the conceptualization and implementation of new modes of software techniques and applications, but that this is less interesting than the internal logic of software’s increased automation of creative tasks. We cannot and will not castigate Manovich for not writing something like The Political Economy of Software. At the same time, this tendency demonstrates that the question of politics and political economy represents a sort of blind spot in software studies, something that must be rectified in the future. What ideological gain is achieved by this suppression of the actual nature of the economic in the development of software? If software and capitalism are inextricably linked, as Manovich states, what does this mean in terms of the social effects of the software hegemony? Is a communist software imaginable? These are questions that we extend as promissory notes, as it were, for a future program in software studies to be undertaken.

This strange lack of curiosity about the full role of the economic in the history of media software might be connected to another point at which Manovich’s investigations break off. The central point of Software Takes Command is clearly articulated in the title: that the software substrate of the contemporary cultural sphere is pervasive and ubiquitous, and that to ignore it is to ignore one of the principal ways in which the current situation is to be understood: “If we are interested in the histories of visual communication, techniques of representation and cultural memory, I do think that the universal adoption of software throughout the global culture industries is at least as important as the invention of print, photography or cinema. But if we are to focus on the social and political aspects of contemporary media culture and ignore the questions of how new media looks and what it can represent – asking instead about who gets to create and maintain social relations – we want to put computer networks … in the centre. And yet, it is important to remember that without software, contemporary networks would not exist. Logically and practically, software lies underneath everything that comes later” (183). This is the fundamental polemic that Manovich puts forward, which he reiterates several times (as mentioned above), most bluntly in the “There is Only Software” section of the “Understanding Metamedia” chapter: “None of the new media authoring and editing techniques we associate with computers are simply the result of media ‘being digital.’ The new ways of media access, distribution, analysis, generation and manipulation all come from software” (148). This leads Manovich to the most, for us, contentious statement in the present work: “There is no such thing as ‘digital media.’ There is only software, the ‘properties’ of digital media are defined by the particular software as opposed to solely being contained in the actual content (i.e. inside digital files)” (152, emphasis in original). This is what might be described as Manovich’s qualified Kantianism: there are different data structures to which only programmers and software developers have access; for the most part, our access to these data structures is mediated through phenomenal software interfaces rather than the noumenal data itself. It is only a “conceptual inertia” that leads us to locate media properties at the noumenal/data level, rather than understanding them as phenomenal features of the software interfaces.[10] In fact, there is a strong sense in Manovich’s text in which data structures are, like Kantian noumena, outside human cognition; although he notes that, as a general rule, only programmers ever deal with data per se at the practical level, he also notes that most of the time these programmers are involved in the development of software platforms for accessing, generating, etc., these data structures. Novelty, then, exists purely at the phenomenal level of software. This leads to a significant point about media art production today:

What used to be separate moments of experimentation with media during the industrial era became the norm in the software society. In other words, the computer legitimizes experimentation with media. … What differentiates a modern computer from any other machine – including industrial machines for capturing and playing media – is separation of hardware and software. It is because an endless number of different programs performing different tasks can be written to run on the same type of machine that that machine – i.e. a digital computer – is so widely available today. Consequently, the constant invention of new (and modification of existing) media software is simply one example of this general principle. In its very structure, computational media is “avant-garde” since it is constantly being extended and thus redefined.
If in modern culture “experimental” and “avant-garde” were opposed to normalized and stable, this opposition largely disappears in software culture. And the role of the media avant-garde is performed no longer by individual artists in their studios but by a variety of players, from very big to very small – from companies such as Microsoft, Adobe and Apple to independent programmers, hackers and designers. (92-93)

The agony and the ecstasy of the avant-garde is quietly removed; Bill Gates and Steve Jobs are our new André Breton and F.T. Marinetti. Certainly, the aforementioned displacement of the political allows for this sort of assertion of identity between the mainstream and what-used-to-be oppositional, a symptom, as we suggested in our essay on the New Aesthetic, of a refusal to think of an exterior to capitalism, despite the apparent democratization of software development – the “independent programmers, hackers and designers.”[11] All of which is dependent on the central thesis of Manovich’s text, that software is the horizon of thought, as ontological fundament, when it comes to new media theory, and it is to the validity of this thesis that we must turn by way of the noumenal layer of data.
Manovich, in his discussion of Photoshop, notes that contemporary electronic media have, unlike their mechanical/industrial predecessors, become coextensive with the idea of “information,” specifically, data.[12] “Rather than operating on sounds, images, video or texts directly, electronic digital devices operate on the continuous electronic signals or discrete numerical data” (133), which allows for common techniques such as “sort” or “filter” to operate across phenomenally heterogeneous media types; the concomitant increase in economy and efficiency is one of the reasons why the softwarization of media occurred with such celerity. Indeed, early on in his text, Manovich indicates that software is the point of mediation between media and data: “… since media software operations (as well as many other computer processing of media for research, commercial or artistic purposes) are only possible because the computer represents media as data (discrete elements such as pixels or equations defining vector files such as EPS), the development of media software and its adoption as the key media technology … is an important contributor to the gradual coming together of media and data” (30). Media and data come together in computer software: or rather, data phenomenalizes itself as media through software, and, as Manovich writes, “… any digital image can be understood as information visualization …” (154). We are able to “visualize” (but also render audible and tactile – in short, phenomenalize) data only through a veil of representations, as it were, provided by the software interface. The digital image owes all of its properties to the software that phenomenalizes the noumenal data substratum.
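The claim that a digital image’s “properties” belong to the software rather than to the file can be made concrete with a toy sketch. This is again our own construction, not an example from Manovich’s text, and the two “viewer” functions are hypothetical: the same array of numbers, accessed through two different applications, yields two phenomenally different images.

```python
# The same "noumenal" data...
data = [[0, 60, 120], [180, 240, 255]]

def viewer_ascii(grid):
    """One hypothetical 'application': renders intensities as ASCII shades."""
    palette = " .:-=+*#%@"  # ten shades, dark to light
    return ["".join(palette[min(v * len(palette) // 256, len(palette) - 1)]
                    for v in row) for row in grid]

def viewer_threshold(grid, cutoff=128):
    """A second 'application': renders the same data as binary black/white."""
    return ["".join("#" if v >= cutoff else "." for v in row) for row in grid]

# Two software interfaces, two phenomenally different "images" of one dataset.
for line in viewer_ascii(data):
    print(line)
for line in viewer_threshold(data):
    print(line)
```

The data never changes; only the phenomenal rendering does – one interface presents a grayscale gradient, the other a binary silhouette – so each “image” and its properties exist at the level of the viewing software, not the file.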

The question is, then: does this mean the same thing as “There is only software”? The question might be reframed in this manner: is the software layer really the only epistemic feature that can be conceptualized, given the firm Kantian line that Manovich draws before data? The answer, despite Manovich’s protestations, must be no. The digital image is, to be sure, a phenomenal mediation generated by software acting on data, but this does not mean that the digital image is ontologically software. There is a sense that Manovich somewhat grudgingly admits this toward the end of his book when he discusses his interest in data mining and algorithmic operations (340). Manovich’s Kantianism holds him back, to conclude, from the full trajectory of his argument: there is not only software, there is only data. The ontology of digital media, in order to be fully realized, must overcome this Kantian proscription against moving beyond the software substrate to the ontological structure of data itself.


[1] All citations from Lev Manovich, Software Takes Command. New York: Bloomsbury, 2013. The structural comparison between software and mechanization is embedded in the title of Manovich’s text, which alludes to Sigfried Giedion’s 1948 book Mechanization Takes Command, which was avidly read by Marshall McLuhan while he was at Cambridge. (Giedion was also a friend of Walter Benjamin, a fact that Richard Cavell uses to assert a possible lineage of influence from Benjamin to McLuhan, cf. McLuhan in Space: A Cultural Geography. Toronto: University of Toronto Press, 2002, p. 40.)

[2] This is more than a little unfair, but it was hard not to think of Walter Benjamin’s satirical advice on the writing of large scholarly tomes provided in “One Way Street”: “The whole conception must be permeated with a protracted and wordy exposition of the initial plan. … Conceptual distinctions laboriously arrived at in the text are to be obliterated again in the relevant notes. … For concepts treated only in their general significance, examples should be given; if, for example, machines are mentioned, all the different kinds of machines should be enumerated.” (“One Way Street,” Reflections: Essays, Aphorisms, Autobiographical Writings. Ed. Peter Demetz. Trans. Edmund Jephcott. New York: Schocken Books, 1978, 61-94. p. 79.) It must be pointed out that Manovich’s close attention to detail pays off and is more often than not rhetorically effective, though it should also be considered something of a symptom of some of the problems that we will diagnose later in this paper.

[3] Deep remixability is the remixing of “not only content from different media but their fundamental techniques, working methods and ways of representation and expression” (268). Manovich makes strong claims for the remix generally, and for the deep remix more specifically: “If Fredric Jameson once referred to post-modernism as the ‘cultural logic of late capitalism,’ we can perhaps call the remix ‘the cultural logic of networked global capitalism’” (260). This is one of the few times – enough to be counted on the fingers of one hand – in which the word “capitalism” appears in Manovich’s book.

[4] “… [S]ince both this book and all my writing are directed first of all to media practitioners – professionals and students creating new software applications and tools, graphic designs, web designs, motion graphics, animations, films, space designs, architecture, objects, devices and digital art – you can do more than simply agree or disagree with my analysis. By inventing new techniques – and by finding new ways to represent the world, the human being and the data, and new ways for people to connect, share and collaborate – you can expand the boundaries of ‘media after software’” (340-341). 

[5] At least twice in his book, Manovich explains the use of his examples by noting that more people use, for example, Photoshop rather than Gimp. (For good reason: having used both software applications, I have found the latter almost totally unusable due to its insanely complex interface compared to the former. I should point out that I am not a writer of code and have only vague plans to learn). But note the strangely defensive tone in what follows: “Some readers will be annoyed that I focus on commercial applications for media authoring and editing, as opposed to their open source alternatives. For instance, I discuss Photoshop rather than Gimp, and Illustrator rather than Inkwell. I love and support open source and free access, and use it for all my work. Starting in 1994, I was making all of my articles available for free download on my website manovich.net. And when in 2007 I set up a research lab (www.softwarestudies.com) to start analyzing and visualizing massive large datasets, we decided to also follow a free software/open source strategy, making the tools we develop freely available and allowing others to modify them” (50). Not for the first or last time, readers must surely be inclined to wonder what all of this is about: who are these people likely to be annoyed, and would Manovich’s explanations mollify their annoyance?

[6] We would have to look for a while to come up with a more astonishing sense of this aporia than this shocking paragraph, all the more so from someone who has written about his experiences of learning to write computer code in the Soviet Union of the 1970s: “In 1989, the former Soviet satellites of Central and Eastern Europe peacefully liberated themselves from the Soviet Union. In the case of Czechoslovakia, this event came to be referred to as the Velvet Revolution – to contrast it to typical revolutions in modern history that were always accompanied by bloodshed. To emphasize the gradual, almost invisible pace of the transformations which occurred in moving image aesthetics between approximately 1993 and 1999, I am going to appropriate the term the Velvet Revolution to refer to these transformations. (Although it may seem presumptuous to compare political and aesthetic transformations simply because they share the same non-violent quality, it is possible to show that the two revolutions are actually related)” (253-254). This is absolutely astonishing as far as offensive metaphorics go, especially since the germ of a potentially essential idea – that is, the relation between technological and political revolutions – is left fatally undeveloped, compounding the sense of an incredibly pointless rhetorical flourish which is at odds with Manovich’s carefully measured tones elsewhere. In short, this is a total failure, but a failure that may not be Manovich’s alone.

[7] “Why did [Ted] Nelson and [Alan] Kay find more support in industry than in academia for their quest to invent new computer media? And why is the industry (by which I mean any entity which creates the products which can be sold in large quantities or monetized in other ways, regardless of whether this entity is a large multinational company or a small start-up) more interested in innovative media technologies, applications and content than computer science? The systematic answer to this question will require its own investigation” (85). Fair enough, this is not a book on the politics of software, but the brief answer we do get is somewhat anodyne: “… modern business thrives on creating new markets, new products and new product categories” (85).

[8] What eventually became Photoshop, Manovich (briefly channelling Paul Virilio’s War and Cinema) notes, grew out of military applications: “The field of digital image processing began to develop in the second half of the 1950s when scientists and the military realized that digital computers could be used to automatically analyze and improve the quality of aerial and satellite imagery collected for military reconnaissance and space research. (Other early applications included character recognition and wire-photo standards conversion.) As part of its development, the field took the basic filters that were already commonly used in electronics and adapted them to work with digital images. The Photoshop filters that automatically enhance image appearance … come directly from that period …” (134).

[9] “Of course, not all media applications and devices make all these techniques equally available – usually for commercial and/or copyright reasons. For example, at present, the Google Books reader does not allow you to select and copy the text from book pages. Thus, while our analysis applies to conceptual and technical principles in software and their cultural implications, we need to keep in mind that in practice these principles are overwritten by commercial structures” (123). Similarly, the evolution of file formats is constrained by commercial interests: “… [I]n contrast to the 1960s and 1970s, when a few research groups were gradually inventing computational media, today software is a big global industry. This means that software innovation is driven by economic factors rather than by theoretical possibilities. … If file formats were to change all the time, the value of media assets created or owned by an individual or a company would be threatened” (216, emphasis added).

[10] “We now understand that in software culture, what we identify by conceptual inertia as ‘properties’ of different mediums are actually the properties of media software – their interfaces, the tools and the techniques they make possible for accessing, navigating, creating, modifying, publishing and sharing media documents. For example, the ability to automatically switch between different views of a document in Acrobat Reader or Microsoft Word is not a property of ‘electronic documents’ in general, but a result of software techniques whose heritage can be traced to [software designer Douglas] Engelbart’s ‘view control.’ Similarly, the ability to change the number of segments that make up a vector curve is not a property of ‘vector images’ – it is an option available in some (but not all) vector drawing software” (225).

[11] There are a few points of note. For one thing, software tends to develop incrementally (47) through addition and hybridization leading to a cultural “continuity turn” (341) as opposed to the breaks and discontinuities as per Deleuze and Guattari’s schizoanalysis of capitalism in Anti-Oedipus (which we will address in an upcoming essay on accelerationism). Similarly, Manovich notes a certain retrograde tendency away from production and toward simple consumption in contemporary computer design in the iPad: “Thus, if in 1984 Apple’s first Apple computer was critiqued for its GUI applications and lack of programming tools for the users, in 2010 Apple’s iPad was critiqued for not including enough GUI tools for heavy duty media creation and editing – another step back from [Alan] Kay’s Dynabook vision. The following quote from an iPad review by Walter S. Mossberg from the Wall Street Journal was typical of journalists’ reactions to the new device: ‘if you’re mainly a Web surfer, note-taker, social-networker and emailer, and a consumer of photos, videos, books, periodicals and music – this could be for you.’ The New York Times’ David Pogue echoed this: ‘The iPad is not a laptop. It’s not nearly as good for creating stuff. On the other hand, it’s infinitely more convenient for consuming it – books, music, video, photos, Web, e-mail and so on.’” (109).

[12] “… [M]any of the ‘new’ techniques for media creation, editing and analysis implemented in software applications were not developed specifically to work with media data. Rather, they were created for signal and information processing in general – and then were either directly carried over to or adapted to work with media. (Thus, the development of software brings different media closer together because the same techniques can be used on all of them. At the same time, ‘media’ now share a relationship with all other media types, be they financial data, patient records, results of a scientific experiment, etc.) … This is one of the most important theoretical dimensions in the shift from physical and mechanical media to electronic media and then digital software” (132-133, emphasis in original). Manovich does not spare the italics, ever.
