Paper presented at
2010 Conference of the
European Society for Science, Literature and the Arts
“Textures”
Stockholm School of Economics, Riga, 17 June 2010
The materiality of media has always had a profound impact on writing and written texts. The various writing materials and instruments in use throughout history—from clay tablets and stylus to parchment and quill to blackboard and chalk to paper and the printing press to the typewriter and the digital computer—are not indifferent modes of production. They shape the product to a considerable degree. Writing, therefore, is not an idealized function of signification. It can only ever appear in historical configurations, to paraphrase Vilém Flusser, of diverse surfaces, tools, signs, rules, and gestures. The physical properties and specifics distinguishing particular writing systems play a key role in the formation of text and they bring about very different textual structures. In short: the materiality of the medium affects and, at least partly, determines the texture of the text.
This is, of course, a well-established theory and there has been extensive research on different media technologies to prove it. Ivan Illich did it for the scholastic manuscript, Marshall McLuhan and Elizabeth Eisenstein for the printing press, Friedrich Kittler for the typewriter, Jay D. Bolter and many others for the digital computer. In its most general form, the argument about the “materialities of communication” marked the beginning of media studies proper when, after the Second World War, Harold Innis set out to analyze “[t]he significance of a basic medium to its civilization” and found the decisive factor to be a medium’s material bias towards time or space.
The predominant writing tool of today is the digital computer. Like all earlier media, it is subject to the fundamental antagonism of mediation that Jay D. Bolter and Richard Grusin have tried to explain by the concepts of ‘immediacy’ and ‘hypermediacy’. Let me, for the purpose of this talk, slightly rephrase and simplify Bolter and Grusin’s analytical framework so that ‘immediacy’ will mean all the technological features and strategies seeking to negate the material presence of a given medium, whereas ‘hypermediacy’ will mean all the forces exposing and emphasizing its material qualities.
In writing and written texts, immediacy and hypermediacy are at work on different levels and in different ways. On a fundamental level, the twofold structure of the sign itself points to immediacy (e.g. in the phantasma of a “transcendental signified”; Derrida) as well as to hypermediacy (e.g. in the “insistence of the letter”; Lacan). On the level of the code, when comparing different scripts, one is tempted to use the dichotomy of immediacy and hypermediacy as a principle of classification. The Greek alphabet, in contrast to consonantal or syllabic writing systems, has repeatedly—and falsely, I would say—been called the perfect writing system, an epitome of immediacy, because its meaningless forms are said to be completely transparent to the spoken word. On the level of graphic or visual presentation, different styles can be characterized as promoting either immediacy or hypermediacy. Think of the clean, orderly page of a contemporary printed book as opposed to the rich compositions of Islamic calligraphy or medieval manuscripts with their ornate letters and illuminations.
Thus, the texture of a written text, itself affected by the medium’s materiality, always contributes to the impression of either immediacy or hypermediacy. It is precisely a certain kind of texture that mediates text as a transparent vehicle for cognitive content. As Ivan Illich has shown, only a bundle of particular graphic techniques (uniform page layout, interword separation, segmentation by sections and paragraphs, enumerations, quotation marks, underlining of key words, consistent use of different letter sizes, and indexing) established an ordinatio or structure which turned pre-modern writing into the ‘modern text’ we are familiar with today.
While the antagonism of erasing and exhibiting medial materiality has always been at play in writing technology, the digital computer pushes both tendencies to their extremes. This concerns not only the programmable styles and effects of computer mediation but its very workings. On the one hand, the computer is probably the most sophisticated piece of technology widely available. It is an instrument of intricate physical detail, made up of a range of materials and a myriad of parts, extremely differentiated or textured, if you will, down to the dimensions of mere nanometers. On the other hand, digital computing systematically marginalizes its own materiality. Interestingly, this even applies to the physical substrate as such. Digital technology is not dependent on one specific material. Computers can be built from electromagnetic relays, electron tubes, transistors, integrated circuits and possibly sub-atomic particles or enzymes. The only prerequisite is that the physical parts used in the construction of a computer can be switched from one state to another. The computer also abstracts from specific materialities of earlier media. When simulating a painter’s canvas, a jukebox, or a movie camera, it is still not made from cloth, vinyl, or celluloid.
The marginalization of materiality, though, is most apparent in the computer’s operations themselves. When it works smoothly and doesn’t hang or crash, the computer seems to run as an almost immaterial machine. Vast amounts of data are stored, transmitted and processed in seconds or even fractions of a second. Digital data gains a near-unlimited or magical plasticity. The computer’s operations take place in a sphere where the familiar laws of macrophysics and materiality apparently don’t apply. The characteristics of digital text discussed in recent years—hypertextuality, interactivity, mutability, hybridity, and so on—are usually attributed to this material, or rather immaterial, quality of their electronic fabric. In the words of Vilém Flusser: the digital computer is structurally complex and functionally simple. Like no other common technology it tries to overcome the constraints and limitations of inert matter.
The interplay of material basis and immaterial behavior that forms the framework of the computer can be described by Matthew Kirschenbaum’s distinction between forensic and formal materiality. For Kirschenbaum, forensic materiality comprises a computer’s concrete setup, from the casing down to the circuits and electromagnetic inscriptions of data on hard drives. While on this level all parts of the setup are unique material phenomena, their interaction aims at eliminating all material variances and deviations in the storage, transmission, and processing of data. The resulting formal materiality is an abstraction which cleanses data of the ‘dirt’ and ‘noise’ of physical inscriptions, elevates it to the state of ‘pure’ digital information and thus gives the illusion of “immaterial behavior: identification without ambiguity, transmission without loss, repetition without originality”. It is this “formal environment for symbol manipulation” that makes possible the new sorts and forms of text we are dealing with in digital media.
The distinct framework of digital media, their formal materiality, certainly plays an important role in the formation of digital text. But its novel character is not only due to the ‘flexibility’ or ‘fluidity’ of electronic technology. What’s new about digital text is that, for the first time in history, writing’s materiality is, at least partly, determined by text—not the other way around. Whereas in analog media, it is materiality that implicitly shapes the texture of text (think of the material properties of clay and stylus, a calligrapher’s brush, or a typewriter’s letters and the corresponding written records), in digital media, texture is explicitly formulated as text. Texture has become a product of text.
To give you, for now, only the most prominent example: How the contents of a webpage are laid out and what the text looks like when displayed by a web browser (the fonts, sizes, styles, colors) is stated in the so-called source code of the page: a text written in the Hypertext Markup Language (HTML), often supplemented with instructions in Cascading Style Sheets (CSS), JavaScript and other code. Many of you are probably familiar with the more common HTML tags for headings, line breaks, emphasized text, hyperlinks and so on. With regard to their formal materiality, the source code’s bits and bytes are all the same: an undifferentiated, or untextured, stream of binary numbers. But concerning its interpretation and representation, the code of a webpage is split into two disparate levels: one that says what letters and signs will appear on the rendered page, and another that says how they will appear there. Nearly everything about a text’s graphic or visual presentation that is implicitly given in the handwritten or printed form has to be made explicit in the digital form. The texture of digital text must be spelled out. This ‘spelling-out’ is what digital markup in the most general sense does.
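To make this concrete, here is a minimal sketch of such a source code (an invented fragment, not taken from any actual page). The HTML marks up the letters and signs that will appear, while a small CSS rule, given in the same file for brevity, states how they will appear:

    <!-- hypothetical fragment of a webpage's source code -->
    <html>
      <head>
        <style>
          /* the 'how': fonts, sizes, colors */
          h1 { font-family: Georgia, serif; font-size: 24px; color: #333333; }
          p  { line-height: 1.5; }
        </style>
      </head>
      <body>
        <!-- the 'what': letters, signs, and structure -->
        <h1>Markup and Materiality</h1>
        <p>This is a paragraph with an <em>emphasized</em> word and
           a <a href="http://www.w3.org/">hyperlink</a>.</p>
      </body>
    </html>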
Originally, the term ‘markup’ meant the handwritten marks a proofreader, copy editor, or graphic designer put on a typescript or printed proof. These marks typically indicated corrections and revisions to the text. But they could also give instructions on how to typeset the text: what typeface to use, what style, what size, and so on. You probably all know such marks from the proofreading symbols described in the MLA style manual and guide.
The origins of digital markup lie in a revolutionary writing technology that, in the first half of the nineteenth century, challenged common notions of materiality: the electric telegraph. The code of the Morse alphabet, like other competing encoding schemes at the time, abstracted from the usual material mode of writing (visual marks on a flat, typically rectangular surface) and was optimized not for storage but for transmission, more precisely: for transmission without storage. To enable the seemingly dematerialized transmission of text, telegraphic codes separated character from letter shape.
The by-product of this ‘abstraction’, ‘optimization’ or ‘separation’ was the loss of texture—at least as it had been known from handwritten and printed documents. A message encoded in Morse code had no type size, no font style, no line breaks, no page layout whatsoever. The received dots and dashes on a strip of paper foreshadowed the undifferentiated stream of binary numbers in digital computing. Only when transcribed into the letters on a sheet of paper could the transmitted text be given a texture in the traditional sense.
Digital markup emerged when the telegraph was linked to another revolutionary writing technology of the nineteenth century: the typewriter. With automatic telegraphs printing letters on rolls of paper, a very primitive form of page layout returned to telegraphy and digital code. The earliest forerunner of digital markup that I know of can be found in the character set of the Murray code from around 1900. This 5-bit code, which was later adopted by Western Union for use in its teleprinters, dominated global telecommunications in written form up to the 1970s. Besides the printable characters of letters, numerals and punctuation marks, the Murray code contained two non-printable control characters which triggered a carriage return or line feed in the receiving teleprinter.
Admittedly, carriage return and line feed are extremely crude means of giving texture to a printed text. But the control characters for these functions in the Murray code mark the conceptual beginning of the more sophisticated markup codes and languages of today. ASCII, the direct successor to the Murray code, already assigned no less than a fourth of its 128 positions to control characters—from ‘start of text’ and ‘end of text’ to ‘file separator’, ‘group separator’, ‘record separator’ and ‘unit separator’.
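To give just a small selection from the standard 128-character set, here are some of these ASCII control characters with their decimal code points:

    decimal   name   function
    2         STX    start of text
    3         ETX    end of text
    10        LF     line feed
    13        CR     carriage return
    28        FS     file separator
    29        GS     group separator
    30        RS     record separator
    31        US     unit separator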
The shift from single control characters for the markup of digital text to whole languages (like HTML) happened in the 1960s when computers were first used as writing machines. To format documents with the earliest word processors like RUNOFF (1964), one had to intersperse the text—very much like HTML documents—with control words. These short commands determined what the text would look like when sent to a printer. By use of the control words, the author could set and change line length, indentation of paragraphs, justification, line spacing, page numbering, page headers, centering of lines, and more.
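The following sketch illustrates the principle. It imitates the style of RUNOFF’s control words (lines beginning with a period) rather than reproducing the original command vocabulary verbatim, which varied between versions and successors of the program:

    .line length 60
    .center
    MARKUP AND MATERIALITY
    .space 2
    .adjust
    .indent 5
    The materiality of media has always had a profound impact
    on writing and written texts.

When sent to a printer, only the running text would appear, centered, justified and spaced as instructed; the control words themselves would be consumed in the process.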
The concept of control words remains the cornerstone of formatted text, i.e. digital text which has been given texture. All modern file formats designed for text—Microsoft Word documents, PDF, RTF, Unix manual pages, HTML—go back to the RUNOFF program and share the same basic principle.
It is fairly obvious but still important to note that the textual reconstruction of a text’s texture with digital markup as I have just described it does not in any way work towards negating the materiality of writing. On the contrary. Markup languages following RUNOFF give back to writing what is necessarily lost when signs are digitally encoded: the composition of writing in its graphic form.
What’s crucial, though, is that this first transformation of texture through digitization was soon followed by a second one. The markup tags in HTML that I have mentioned mostly do not describe the visual appearance of text elements. Instead, they represent the logical structure of a text—the very ordinatio whose emergence Ivan Illich attributes to the scholastic culture of the book. The markup in HTML says: ‘This is a heading’, ‘This is a paragraph’, ‘This is a quote’, ‘This is a list item’. How these elements are displayed on the screen or a printout is typically left to the browser’s rendering engine and to specifications given separately from the text in a style sheet. This kind of markup is often called ‘descriptive’, as opposed to the ‘procedural’ or ‘presentational’ markup of the RUNOFF, Microsoft Word, or RTF formats.
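The difference can be illustrated with two invented fragments marking up the same heading, the first in a presentational manner, the second in the descriptive manner of HTML, with the presentation delegated to a separate style sheet:

    <!-- presentational markup: says how the text is to appear -->
    <font face="Times" size="5"><b>The Materiality of Media</b></font>

    <!-- descriptive markup: says what the text is -->
    <h1>The Materiality of Media</h1>

    /* style sheet, kept apart from the text */
    h1 { font-family: Times, serif; font-size: 24px; font-weight: bold; }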
For most data processing applications, descriptive markup is now considered to be the better choice. This is exactly because descriptive markup does not give any instructions on what to do with the data and how to do it (e.g. ‘Print this text in 12 pt size, justified, double spaced.’) but—as its name says—only describes the data (e.g. as a paragraph). Correspondingly, there are increased possibilities for the automated processing of such data. In computing, this principle is known as the Rule of Least Power. Tim Berners-Lee, the inventor of the WWW, notes: “[T]he less powerful the language, the more you can do with the data stored in that language.” The ultimate goal of descriptive markup, therefore, is to place data ever more completely at our disposal. Sets of data are logically structured, so they can be automatically analyzed, combined, and translated into many different forms for many different uses, platforms, and devices.
The most common technology of descriptive markup today is XML, the Extensible Markup Language, which drives a large part of global data processing and interchange. XML itself is not really a markup language but a metalanguage that allows one to define whatever markup one needs for specific purposes or applications. With XML, one can specify different languages which in turn describe the elements in a document and their attributes. XML languages are not limited to text in a narrow sense but can describe any kind of data which can be represented in writing: newspaper articles, product catalogs, address books, recipes, TV schedules, and so on. Accordingly, markup tags can identify and describe any conceivable entity: ‘This is a person’s name’, ‘This is a geographical location’, ‘This is a product number’, ‘This is a price’, ‘This is an ingredient’, and the like.
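A hypothetical recipe, marked up in an equally hypothetical XML vocabulary of my own devising, may serve as an example:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- an invented XML vocabulary; the tags give no instructions,
         they only describe what each piece of data is -->
    <recipe>
      <title>Rye Bread</title>
      <ingredient quantity="500" unit="g">rye flour</ingredient>
      <ingredient quantity="10" unit="g">salt</ingredient>
      <ingredient quantity="350" unit="ml">water</ingredient>
      <step>Mix the ingredients and let the dough rest overnight.</step>
      <step>Bake for one hour.</step>
    </recipe>

Nothing in this document says how a recipe should look on screen or paper; it only states what its parts are, so that any number of programs can render, search, combine or convert it.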
To use two big words and borrow from two important philosophical traditions, one could say that procedural or presentational markup follows a primitive phenomenology of data by giving accurate descriptions of its appearance to the user. Descriptive markup, on the other hand, is more like an ontology in that it tries to say what a piece of data is and in what relation to other data it stands.
So it does not come as a surprise that there is, in fact, a whole family of XML languages called Web Ontology Language. These ‘ontological’ languages form an important part of the project of a Semantic Web. They describe what the information on the Web is about and how it can be correlated. The important point, of course, is that this ontology is not intended for humans but for machines. It aims at an ever more efficient, i.e. automated, processing of data. “The sheer mass of this data [in the WWW] is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources.”
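To give at least an impression of what such machine-readable descriptions look like, here is a minimal and purely illustrative sketch in the RDF/XML syntax used by OWL; the class and property names are my own invention:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xmlns:owl="http://www.w3.org/2002/07/owl#">
      <!-- an ontology says what things are and how they relate -->
      <owl:Class rdf:about="#Text"/>
      <owl:Class rdf:about="#Author"/>
      <owl:ObjectProperty rdf:about="#hasAuthor">
        <rdfs:domain rdf:resource="#Text"/>
        <rdfs:range rdf:resource="#Author"/>
      </owl:ObjectProperty>
    </rdf:RDF>

On the basis of such declarations, a computational agent can infer, for instance, that whatever stands in the hasAuthor relation is a text and that whatever it points to is an author, without any human reading involved.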
Before we come to the last part of my argument, let me briefly review the development of texture as one aspect of writing.
Since the beginning of writing, it was medial materiality that mattered with regard to the texture of text. What’s more, texture necessarily had to take a material form which directly coincided with the text’s visual appearance. In the last decades, however, we have witnessed two transformations in the texture of text or, generally speaking, of data. With digital encoding, computer technology systematically separated writing’s characters from their visual form, which entailed a loss of physical texture as it had been known. Text, naively speaking, was dematerialized. Yet the very materiality of digital computing—what Kirschenbaum calls formal materiality—at the same time demanded and enabled the re-texturing of text through the use of digital markup. In an astonishing reversal of what, according to Illich, the scholastic culture had done for the book, the visual texture of text became an effect of text. But digital markup, it turned out, could also describe the logical structure of text. The two transformations of texture brought about by digitization are thus: first, from being implicit in the material shape of text to being explicitly formulated as text; second, from physical texture of text intended for human eyes to informational texture of data intended for the workings of machines.
What should we make of the two kinds of markup (presentational and descriptive) and the two kinds of textures (physical and informational)?
Alan Liu, English professor at UCSB, has written an informative article titled ‘Transcendental Data’, published in 2004. In his text, Liu identifies descriptive markup as a key element in the “techno-logic”, as he calls it, of postindustrial society. Encoding schemes like XML address three needs shaping our contemporary economy grounded in the storage, transmission and processing of data: namely, to make discourse “as transformable as possible”, to make it “autonomously mobile” and to “automate” it. Liu’s analysis concentrates on the separation of content and form which allows for an ever more efficient management of data. For Liu, the ideology of the Discourse Network 2000 is the belief in the transcendence of data: “According to its dogma, true content abides in a transcendental logic, reason, or noumen so completely structured and described that it is in and of itself inutterable in any mere material or instantiated form.” This postindustrial techno-logic, according to Liu, goes back to industrialism with its standardization of production methods and products. Postindustrial data processing, therefore, is not to be understood as a break with industrialist logic but as its “radical conclusion”. In the postindustrial economy, standardization has become meta-standardization and mechanical production has turned into universal programmability.
While I agree with Liu’s main argument about the connection between industrialism and postindustrialism, I find his conclusion that the Discourse Network 2000 is a synthesis of what Friedrich Kittler has called the Discourse Networks 1800 and 1900 rather unsatisfying. I also think that Liu at times falls prey to the very ideology he analyzes (i.e. the transcendence of data) and tends to neglect the aspect of materiality. I would like to argue for a broader perspective on digital markup, one that puts markup in its most general form within a comprehensive history of writing as the fundamental and central cultural technique. Such an approach would have to take into account not only the irreducible materiality of all writing but also the fact that writing is tied to the materiality of things and often has considerable material effects. Since I am only at the beginning of this study and our time today is limited, I can share just a few preliminary thoughts on this with you.
The first would be to question the alleged abstractness and passivity of markup. Following our analysis so far, one could conclude that in digital markup we can, once again, see the antagonism of mediation at play, the dichotomy of hypermediacy and immediacy. Procedural or presentational markup says something about an object’s appearance manifest in a physical shape. Descriptive markup says something about an object’s logical structure abstracted from a particular material instantiation. So, markup in general and descriptive markup in particular seem rather passive and innocent: Descriptive markup is all about ‘characterizing’ or ‘specifying’ things, not really ‘doing’ anything.
But digital markup is, in fact, a very powerful tool of appropriation. Writing is the single most important technology of transforming so-called ‘nature’ into ‘culture’. By putting written marks on something, by inscribing it, you turn this something into a cultural object, quite possibly one that you identify as your own. Long before Coca-Cola, Marlboro and Google became brands, branding identified the owner of livestock (as early as ancient Egypt) or even of people, as was the case with soldiers and slaves.
Other, less painful but more important, markup techniques are the signature, the tag and the label. Lisa Gitelman, in her book on Scripts, Grooves, and Writing Machines, has reminded us of the importance and function of labels and labeling for commerce—and this also goes for digital commerce. The digital equivalents of signature, tag and label are digital signatures and encryption, watermarks and metadata, all of which can be thought of as markup. Because digital data is so easy and inexpensive to (re-)produce and distribute, businesses employ Digital Rights Management (DRM) to restrict the consumption, i.e. the usage, of digitized commodities such as movies, music, books and games. From the DVD, to Apple’s iTunes, to Amazon’s Kindle, to Valve’s game platform Steam: they all rely on digital markup to restrict the use of data so it can still be sold like a tangible good. With the increasing proliferation of information networks, digital devices, mobile communication, RFID tags, and so on, the informational texture covering our everyday life is woven ever more tightly and incorporates ever more objects.
The ultimate kind of markup, though, is one which does not describe, identify, or connect objects and processes but which creates them. Markup then becomes makeup (in the generative, not the decorative sense). A famous literary example of markup as makeup is the Golem from Jewish folklore. Many tales of the Golem, among them the popular one about Rabbi Loew of Prague, tell that the creature becomes animated only when it is marked with certain Hebrew words on the forehead (for example ‘emet’) or when the words are written on paper and placed in its mouth.
The modern-day equivalent of the Golem, I think, is the manipulation of DNA, the writing or re-writing of the ‘texture of life’. Lily Kay has shown how and why scriptural metaphors of information and code came to dominate the genetic discourse. But whether or not we agree with Kay’s account and her conclusion that DNA is not really a language, it remains, as she herself made quite clear, an extremely powerful and successful metaphor. Genetic engineering functions as a technology strongly associated with notions of writing, code and markup. So much so, in fact, that DNA is now effectively being used as writing and markup.
When scientists at the J. Craig Venter Institute recently created what they claim to be the first synthetic organism, a bacterial cell named Mycoplasma mycoides JCVI-syn1.0, they inserted some special DNA sequences into the organism’s constructed genome, each more than a thousand base pairs long. Venter’s team calls these DNA sequences ‘watermarks’ and their main purpose is to clearly distinguish the made-up bacterium from any ‘natural’ DNA. The watermarks are meant to prove that this is indeed the synthetic organism Venter and his team assembled. They are the tags or labels which identify the organism as a truly cultural object. In an interview, Venter said that with the watermarks they had “signed the DNA” and he went on: “we’ve developed a new code for writing english language […] with punctuation and numbers into the genetic code”. There are four separate watermarks in the organism’s genome: the first contains a description of the watermark’s encoding scheme itself, the second gives a secret web address, the third lists the names of all the team members and contributors involved, and the fourth has some famous quotes in it. Besides these more amusing (if certainly irritating) aspects, the watermarks also serve as an indication of intellectual property rights and thus remind us that genetic engineering is already a huge business (think of Monsanto) with even greater commercial prospects. One of the quotes embedded in the DNA of Venter’s synthetic bacterium is from the renowned American physicist Richard Feynman: “What I cannot build, I cannot understand.” One is tempted to add: ‘What I cannot markup, I cannot market.’
Vilém Flusser: Gesten. Versuch einer Phänomenologie, Frankfurt a. M.: Fischer, 1994, p. 40.
Hans Ulrich Gumbrecht/K. Ludwig Pfeiffer (eds.): The Materialities of Communication, Stanford: Stanford University Press, 1994.
Harold A. Innis: Empire and communications, Toronto: University of Toronto Press, 1972, p. 9.
Jay D. Bolter/Richard Grusin: Remediation. Understanding New Media, Cambridge, MA: MIT Press, 2001.
Eric A. Havelock: The Literate Revolution in Greece and its Cultural Consequences, Princeton: Princeton University Press, 1982.
Ivan Illich: In the Vineyard of the Text. A Commentary to Hugh’s Didascalicon, Chicago: University of Chicago Press, 1993.
Vilém Flusser: Die Schrift. Hat Schreiben Zukunft? 5th ed., Göttingen: European Photography, 2002, p. 20.
Matthew G. Kirschenbaum: Mechanisms. New Media and the Forensic Imagination, Cambridge, MA-London: MIT Press, 2008, p. 60.
Ibid., p. 61.
Tim Berners-Lee/Noah Mendelsohn: The Rule of Least Power, 2006.
Michael K. Smith/Chris Welty/Deborah L. McGuinness: OWL Web Ontology Language Guide, 2004.
Alan Liu: Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse, in: Critical Inquiry 31.1 (2004), pp. 49-84, here 57-58.
Ibid., p. 62.
Ibid., p. 72.
Lisa Gitelman: Scripts, Grooves, and Writing Machines. Representing Technology in the Edison Era, Stanford: Stanford University Press, 1999.
Lily E. Kay: Who Wrote the Book of Life? A History of the Genetic Code, Stanford: Stanford University Press, 2000.
The Journal Science Interviews J. Craig Venter About the first “Synthetic Cell”, 20 May 2010.
Cite as
Heilmann, Till A. “Markup and materiality.” Paper presented at 2010 Conference of the European Society for Science, Literature and the Arts “Textures.” Stockholm School of Economics, Riga. 17 June 2010. <http://tillheilmann.info/textures.php>.
Till A. Heilmann (Dr. phil.) is a researcher at the Department of Media Studies at Ruhr University Bochum. He studied German, media studies, and history. Research Associate at the University of Basel (2003–2014), the University of Siegen (2014–2015), and the University of Bonn (2015–2021); doctorate for a thesis on computers as writing machines (2008); visiting scholar at the University of Siegen (2011); Fellow-in-Residence at the Obermann Center for Advanced Studies at the University of Iowa (2012); acting professor of Digital Media and Methods at the University of Siegen (2020–2021); book project on Photoshop and digital visual culture (ongoing). Fields of research: Media history; media theory; media semiotics; history of media studies. Research focus: digital image processing; algorithms and computer programming; North American and German media theory. Publications include: “Blackbox Bildfilter. Unscharfe Maske von Photoshop zur Röntgentechnischen Versuchsanstalt Wien [Black Box Image Filter: Unsharp Mask from Photoshop to the X-Ray Research Institute Vienna].” Navigationen 2 (2020): 75–93; “Friedrich Kittler’s Alphabetic Realism.” Classics and Media Theory. Ed. P. Michelakis. Oxford University Press 2020: 29–51; “Zur Vorgängigkeit der Operationskette in der Medienwissenschaft und bei Leroi-Gourhan [On the Precedence of the Operational Chain in Media Studies and Leroi-Gourhan].” Internationales Jahrbuch für Medienphilosophie 2 (2016): 7–29; “Datenarbeit im ‘Capture’-Kapitalismus. Zur Ausweitung der Verwertungszone im Zeitalter informatischer Überwachung [Data-Labor in Capture-Capitalism. On the Expansion of the Valorization Zone in the Age of Informatic Surveillance].” Zeitschrift für Medienwissenschaft 2 (2015): 35–48; “Reciprocal Materiality and the Body of Code.” Digital Culture & Society 1/1 (2015): 39–52; “Handschrift im digitalen Umfeld [Handwriting in the Digital Environment].” Osnabrücker Beiträge zur Sprachtheorie 85 (2014): 169–192; “‘Tap, tap, flap, flap.’ Ludic Seriality, Digitality, and the Finger.” Eludamos 8/1 (2014): 33–46; Textverarbeitung: Eine Mediengeschichte des Computers als Schreibmaschine [Word Processing: A Media History of the Computer as a Writing Machine] (2012); “Digitalität als Taktilität: McLuhan, der Computer und die Taste [Digitality as Tactility: McLuhan, the Computer and the Key].” Zeitschrift für Medienwissenschaft 2 (2010): 125–134.