On Reflection: Print in the Digital Age, Where Light Comes to Light
Adhuc mea messis in herba est. (“My harvest is still in the blade.”)
Aldus Manutius
My thesis is straightforward, simple in a way, intuitive, but maybe not obvious. I want to make a case for the place of those old stalwarts, print media and photographic reproduction, in our digital and photonic age, which is moving archaeological discovery exponentially forward, literally at the speed of light.
What I want to suggest is the following: when the archaeologist has produced his astonishing new revelations of sketch lines under paint, or of previously undetected painted images on a terracotta statue, or enabled the transcription of hitherto undecipherable inscriptions, or rendered readable what before had been indecipherable traces on a palimpsest, or brought the worn surface detail on a coin into bold relief—through new digital technologies such as polynomial texture mapping, reflectance transformation imaging, and laser scanning—a compelling argument can be made that the images that are the final result of these light-borne and digitally enabled discoveries are best reproduced—if the objective is to afford an image for careful scrutiny and close, concentrated study by the viewer—in the medium of print, that is, in a high-quality photographic reproduction in a bound journal or book, as opposed to being digitally rendered on a screen.
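For readers unfamiliar with the first of these techniques, the sketch below illustrates, in miniature, the idea behind polynomial texture mapping under the standard published formulation of Malzbender and colleagues: each pixel’s luminance is modeled as a biquadratic polynomial in the light direction, with six coefficients fit by least squares from photographs taken under known lights; the object can then be “relit” from any angle, including the raking angles that throw worn relief into shadow. The numbers are hypothetical, for illustration only.

```python
# A minimal sketch of the idea behind polynomial texture mapping (PTM),
# following the standard biquadratic model: for each pixel, luminance
# under a projected light direction (lu, lv) is modeled as
#   L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
# with the six coefficients fit by least squares from photographs taken
# under several known light positions. All values here are hypothetical.
import numpy as np

def fit_ptm_pixel(light_dirs, luminances):
    """Fit the six PTM coefficients for a single pixel.

    light_dirs: (N, 2) array of projected light directions (lu, lv)
    luminances: (N,) observed pixel intensities under those lights
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted polynomial under a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return basis @ coeffs

# Hypothetical capture: one pixel photographed under six light positions.
dirs = np.array([[0.0, 0.9], [0.9, 0.0], [-0.9, 0.0],
                 [0.0, -0.9], [0.5, 0.5], [0.0, 0.0]])
obs = np.array([0.62, 0.55, 0.20, 0.18, 0.70, 0.40])

coeffs = fit_ptm_pixel(dirs, obs)
# Relighting from a raking angle is what brings worn surface detail
# on a coin or an inscription into bold relief.
print(relight_pixel(coeffs, 0.95, 0.1))
```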
Let me explore with you a few of the reasons this may be so, stating things that are perhaps patent, but that we simply take for granted. Various studies of brain activity using fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) have shown that the brain responds differently when viewing something, be it text or image, on a digital screen than on paper. Different parts of the brain are activated by each activity. For reasons that I will try to elucidate (reasons which neuroscientists who have studied this are themselves hard-pressed to explain and don’t always agree on, given the staggering complexity of the human brain), the brain works “harder,” that is, engages more neuronal networks, to comprehend the same material on a digital screen than it does on paper. In the final analysis, it is the brain that not only categorizes and cogitates but that actually “sees,” and not the eye. Aristotle was right when he surmised in De Anima that our minds create images, that is, make internal representations of the external world which we in turn use for our thought. The photoreceptors on the curved surface of the retina send stimuli to very specialized neuronal networks in the brain. The lateral geniculate nucleus, residing in the thalamus, the central connection between the optic nerve and the occipital lobe, compares the two-dimensional images captured on the retinal surfaces of both eyes, assembles the relevant information about shape, space, and distance to create 3-D images, and then sends them on to the conscious part of the brain for image recognition and further processing. Different neuronal groups activated in response to other stimuli send messages about color, form, movement, language, etc., and these respective neuronal groups are situated in separate locations in the occipital, parietal, and temporal lobes of the brain, all of which communicate via associational neuronal clusters, a kind of cortical circuitry, with the frontal lobes, which exercise an executive function. In other words, vision, like everything else in the brain, is modular: different signals are sent to different specialized areas, each of which performs its separate function. These groupings process signals at different speeds. But each system separately, and then collectively, when working in coordination with the frontal lobes via associational neuronal clusters, can be said to be working toward a common goal, which is to distill, from the ever-changing information of the visual world, what is important, working, that is, in tandem with the part of the brain associated with memory, to represent the permanent, essential character of what it sees—what things are as opposed to how they merely appear from moment to moment. In this sense, the brain is possibly more Platonic than Aristotle would have wished. It is designed to eliminate the ephemera of sensory data and to discover essences. What I want to explore in this paper is the notion that, in certain respects, the digital environment may be less conducive to sustained concentrated attention and thoughtful focus than the medium of print.
Although the brain ultimately “sees,” and not the eye, optical stimuli naturally originate with retinal receptors. Eye-tracking studies have shown that eye fixation differs when a subject is looking at an image on a digital screen vs. paper or canvas. Separate eye-tracking studies by scientists working on the psychology of perception, conducted at the Metropolitan Museum of Art and at the Tate Gallery, respectively, revealed—I merely summarize their complex findings—that eye fixation on digital screens tended to gravitate toward the most brightly illuminated, eye-catching areas; whereas eye fixation by someone viewing a painting or even a projected slide tended more toward fuller contextualization, that is, looking at the central figure or image in relation to its surrounding context. In the case of Millais’ Ophelia at the Tate, viewing the painting digitally, the eye fixated on her face and hands; but viewing the painting itself, the eye dwelt on the algae and the reeds and the growth on the banks seeming to embower her, and the emblematic flowers that tell her story. In the Met experiment, responses to projected slides and reproductions were compared with digital viewings, and the same tendencies were observed. The quantitative term the psychologists used to describe the property of perception that appeared to be diminished by the digital experience was “density.” There was less “density”—one might be tempted to translate this into qualitative terms as complexity, or depth of understanding of the relation between the whole and the parts—to the digital viewing experience. But why should this be?
The physiology of the human eye developed to respond to reflected light and not radiant or direct light. For almost the whole of human evolution, until fairly recently, mankind has known only two sources of direct or radiant light: fire and the sun. Staring directly into the glare of the sun has never been conducive to deep thinking. Neither the eye nor the brain is made that way. And gazing at flames flickering in a glowing hearth generally puts the gazer into a calmed, trance-like state. The exercise of the higher faculties, needed for everything in human evolution from detecting predators to making tools, from taking the measure of the landscape to building cities, from viewing, indeed making, paintings to reading books, in short all our cognitive endeavors, has always taken place in an environment where the eye was responding to reflected light. But staring at a digital screen, one is staring directly into a radiant light source, and one, moreover, that is pulsing. In evolutionary terms, this is an anomaly.
The structure of the human brain has not fundamentally changed in 40,000 years. It utilizes very old structures in the most ingeniously adaptive ways to perform tasks, like reading, for which it was never originally designed. While there are genes associated with language and vision, there is not a single gene associated with reading.* As Wolf argues in detail, to read, and make integrative sense of complex combinations of words and images, the brain engages its evolutionarily older circuitry which is specialized for various kinds of object recognition—neuronal pathways designed not only for vision “but for connecting vision to conceptual and linguistic functions.” It makes new connections among these old structures; it “form(s) areas of exquisitely precise specialization for recognizing patterns in information”; and it “learn(s) to recruit and connect information from these (discrete) areas automatically.” (12) In reading, basic visual regions in the occipital lobes connect with adjacent regions dedicated to more sophisticated visual and conceptual processing in other occipital areas and in the nearby temporal and parietal areas of the brain. “The temporal lobes are involved in an impressive range of auditory and language-based processes.” The parietal lobes also “participate in a wide variety of language-related processes, as well as spatial and conceptual functions.” (29) To make meaning, that is, read a meaningful sign, whether we are talking about an Egyptian hieroglyph or a Greek inscription or a passage of Plato or Dostoevsky (though the brain does make sense of logographic writing, like Chinese, differently from the way it makes sense of alphabetic script, like English), “our brain connects the basic visual areas to both the language system and the conceptual system in the temporal and parietal lobes and also to visual and auditory specialization regions called ‘associative areas.’” Complex symbolic thinking “exploits (and indeed) expands two of the most important features of the...brain: (its) capacity for specialization and (its) capacity for making new connections among (these) association areas,” (29) as well as its ability to store the mental representations it makes for future use. This new circuitry connects the angular gyrus region of the brain, which is associated with language functions and memory, with nearby areas in the parietal lobe involved in numeracy, and with occipital-temporal areas involved in both object recognition and language. (31)
As Wolf explains, when the generation of inferences or interpretation appears to be involved, a bi-hemispheric frontal system stirs activity around what is called Broca’s area. Interpretative activity generates interaction between this frontal area and what is called Wernicke’s area in the temporal lobe, with certain parietal areas, and also with the right cerebellum. When these inferences are integrated with stored knowledge, a language-related system in the right hemisphere comes into play. (161) This is notable, because language processing per se resides primarily in the left hemisphere, where visual, orthographic, phonological, and semantic information is processed. Thus we are speaking here quite specifically about the activity involving genuinely complex thinking—such as reading and interpreting—in and through language, none of which, it should be remembered, is genetically encoded. The brain had to learn to do all of this, just as every child has to learn how to read, ontogeny recapitulating phylogeny. The brain’s autodidactism is in all likelihood coeval with mankind’s development of the earliest complex writing systems some 5000 years ago.
Although the complexity of the process I have just described staggers comprehension (at least for me), it can be asserted with some degree of certainty that the brain works at every observable level, in all its basic functions as well as in its higher, more complex operations, to eliminate noise and distraction and to concentrate on what is essential—that is, to discover fundamental, unchanging properties. Eliminating sensory overload and discarding variable features, the brain looks for constancies and invariant patterns it can identify. Take the “basic” example of color.* As Sir Isaac Newton observed in the 17th century, light itself is not colored. But it “stir(s) up,” as he put it, “a sensation of this or that color.” Light is electromagnetic radiation with different wavelengths. It is the brain that actually “creates” color, by determining the reflectance of illuminated surfaces. “Different surfaces have different efficiencies for reflecting light of different wavelengths.” (97) The brain finds a constant, which then becomes the determinant of color. “The reflectance of a surface for light of a given wavelength is called its efficiency for reflecting light of that wavelength.” This efficiency is “expressed as the percentage of the incident light of that wavelength which it reflects. The reflectance (itself) never changes, although the amounts of incident light on and reflected (by a) surface change continually.” (233, italics added) The brain makes a “lightness record,” (234) based on all the different intensities of light. Lightness is a correlate of reflectance for light of different wavebands. (For example, red will have a high reflectance for long-wave light and green a low reflectance for long-wave light.) The brain doesn’t know about reflectance per se, but it makes a comparison of “the reflectance of different surfaces for light of the same waveband,” (234) “thus generating the lightness record of the scene for that waveband.” (235) The brain compares the lightness records of the scene for long, medium, and short wavebands and generates the corresponding colors. The intensities of light change continuously but not the reflectance of the surfaces, which is “a comparison of comparisons,” (235) essentially a “ratio” (97) calculated in the brain. The brain is not interested in the amount of reflected light per se, but only in the amount of light of any given waveband reflected from a given surface in comparison with light of the same waveband reflected from surrounding surfaces. On this basis it assigns the surface a color. If this relatively elementary cortical function sounds complicated, it is.
But there is a principle to be gleaned from this example: the brain’s optimal operation involves discarding all stimuli inessential to its main business of determining the “fundamental, unchanging properties” (241) of things, not their intermittencies. It is not interested in reacting to, and processing, all the random stimuli reaching it, but, on the contrary, in using the least cognitive effort to know the most. In fact, it positively blocks the distracting, variable elements. In the case of color vision, the brain “extract(s) all the invariant features of the objects and surfaces in the visual environment” (243, italics added) and “constructs (a) visual image by relating the analyzed components to each other.” (244) The brain focuses on the unchanging and essential, the reflectance for light of different wavebands, compares the efficiencies for reflecting light with the lightness record stored in its memory, and then categorizes the original stimulus according to what it knows. This is manifestly not, and is indeed something vastly more complicated than, merely reacting to stimuli coming from the variable environment, much of which the brain has discarded to make its determination.
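To make the arithmetic of this “comparison of comparisons” concrete, here is a toy numerical sketch (my own illustration, not Zeki’s; the reflectance values are hypothetical). The amount of long-wave light each surface reflects changes with every change in the illuminant, but the ratio between surfaces, the invariant the brain is after, does not.

```python
# A toy illustration (my own, not Zeki's) of reflectance as an invariant.
# Two surfaces with fixed efficiencies for reflecting long-wave light,
# viewed under three very different intensities of incident light.
surface_reflectance = {"red_patch": 0.80, "green_patch": 0.15}
incident_longwave = [100.0, 400.0, 1600.0]  # arbitrary units of intensity

for light in incident_longwave:
    reflected = {name: r * light for name, r in surface_reflectance.items()}
    # The raw amounts of reflected light change with every illuminant...
    ratio = reflected["red_patch"] / reflected["green_patch"]
    # ...but the ratio between surfaces, the "comparison of comparisons,"
    # stays constant, and it is to this invariant that a color is assigned.
    print(f"incident {light:7.1f}: red {reflected['red_patch']:7.1f}, "
          f"green {reflected['green_patch']:6.1f}, ratio {ratio:.2f}")
```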
This example of color vision illustrates how the brain makes it its business to eliminate distractions even in its elementary functioning, since vision generally, and color vision in particular, are basic to the brain and enabled by genetic coding. The more complex example of the reading/interpreting brain, about which I spoke earlier, illustrates just how much circuitry and coordination are involved in enabling the brain’s higher operations, especially where there is no genetic blueprint for the brain to follow, and where the brain has had to improvise, utilizing older functions and structures and configuring “new” associative areas. Given that the brain’s desiderated efficiency impels it to know the most with the least effort, I would like to look at several studies which measured activity in the brain, with regard specifically to attention and concentration, when the same text was encountered in printed format and on a digital screen.
It is probably not news to anyone in this room that digital screens flicker. In a traditional cathode-ray display, an electron beam scans the phosphor surface of the screen, causing stimulated sections to glow temporarily; the screen must be rescanned constantly with the requested pattern of electrons, and the rate of this rescanning is known as the refresh rate. (Flat-panel displays refresh by other means, but refresh they do.) Although the refresh rate is high enough that the pulsing is not visible to the naked eye, when we sit in front of a computer screen, or any digital surface, the mind’s eye registers it. As argued above, through the course of human evolution our eyes developed to respond, and to send stimuli to the visual brain for our higher cognitive activity, to objects viewed in reflected light, not to light emitted from a radiant source via, what is more, a million pulsating pixels. On screen, words and images are constantly fading and reigniting, as if we were staring into a cluster of twinkling stars. Hence the “flicker.” Is this, I can’t help but wonder, the optimal surface off which to read and/or study material requiring concentrated focus and complex thought? In The Principles of Psychology (1890), William James defined attention (cited in one of the studies noting its attrition in the digital age) as well as anyone ever has: “Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatter-brained state which in French is called distraction, and Zerstreutheit in German.”
With this in mind, some revealing phenomena can be observed when comparing brain wave patterns and neurological responses in someone reading or studying an object on a radiant screen versus studying the same on a page. Firstly, luminance itself, the luminous intensity of light, its source and wavelength, will influence where information is processed in the brain, the neuron firing patterns, and the brain wave patterns aroused in the subject. Therefore, direct radiant light emitted from a flat screen into which one stares already changes the cortical equation. Attention, and the lack thereof, can be measured by studying brain wave patterns. Visual stimuli are processed, as we have seen, in several distinct areas, primarily in the occipital lobes at the back lower portion of the cortex and the parietal lobes, directly above the occipital area and proceeding toward the top of the cortex. The visual stream divides into two pathways. The ventral pathway leads to object recognition and the dorsal pathway leads to spatial perception. The ventral stream leads the occipital lobes to process object identification and color, and the dorsal stream leads the posterior portion of the parietal lobes to process luminance, which yields three-dimensional rendering and spatial information. The parietal lobes also play a role in directing eye movement and focusing attention. Stimulated neurons react, as already mentioned, in patterns both genetically inherited and learned through previous experience.
Regarding the way the brain transmits and processes incoming signals, the areas of the brain that are consequently stimulated, and the wave patterns that are generated, there is—despite the brain’s uncanny autonomy and autodidactism—a bit of truth to Marshall McLuhan’s famous dictum “the medium is the message.” The retinal receptors send electrical signals to the occipital and parietal lobes, where they are decoded and compared and contrasted with previous experience. Processing is then further differentiated as either top-down or bottom-up, depending on whether previous experience is needed for decoding or whether the signal triggers an automated motor response. These masses of neurons in the brain communicate with each other by emitting tiny electrochemical pulses of varied frequencies. These synchronized electrical pulses are known as neural oscillations, or brainwaves, and their type, frequency, amplitude, and phase can be measured by an electroencephalogram. There are several types of wave frequencies, but the two most relevant to observing patterns of attention and cognitive processing are alpha and beta waves. Alpha waves are associated with states of mental relaxation, and beta waves are associated with heightened alertness and mental activity. Well-documented studies have shown that there is an inverse relation between alpha rhythm and the cortical activity seen during periods of attention, and a direct relationship between beta rhythm and attention. Alpha waves are suppressed during periods of mental activity. They are supplanted by beta waves—associated with alertness, attention, and concentration—which produce faster frequencies at lower amplitudes.
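Since such measurements figure in the studies that follow, a minimal sketch may help to show what “alpha” and “beta” activity mean in practice. The sketch below is my own illustration, not drawn from any of the studies cited; it assumes the numpy and scipy libraries, uses a synthetic signal rather than real EEG data, and estimates band power with Welch’s method, a standard spectral estimate in this kind of work.

```python
# A minimal sketch of how alpha (8-12 Hz) and beta (13-30 Hz) band power
# might be estimated from a single EEG channel. Synthetic, hypothetical
# values throughout; assumes numpy and scipy are installed.
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (a common EEG rate)
t = np.arange(0, 10, 1 / fs)  # ten seconds of signal

# Synthetic trace: a relaxed-state alpha rhythm plus a weaker, faster
# beta component and background noise, standing in for a real recording.
eeg = (50e-6 * np.sin(2 * np.pi * 10 * t)      # 10 Hz alpha component
       + 15e-6 * np.sin(2 * np.pi * 20 * t)    # 20 Hz beta component
       + 10e-6 * np.random.randn(t.size))      # background noise

# Welch's method gives the power spectral density of the trace.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over a band (rectangle rule over the bins)."""
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]  # frequency resolution of the estimate
    return psd[mask].sum() * df

alpha = band_power(freqs, psd, 8, 12)
beta = band_power(freqs, psd, 13, 30)

# Alpha suppression during attention would appear as a falling
# alpha/beta ratio between a resting baseline and a reading task.
print(f"alpha power: {alpha:.3e}, beta power: {beta:.3e}, "
      f"alpha/beta ratio: {alpha / beta:.2f}")
```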
A study which compared states of attention between reading a text in print in reflected light as against reading the same text from a computer screen, staring directly into the latter’s radiant light, produced some highly interesting results.* Let me describe what these neuroscientists documented before drawing any conclusions. Significant differences were observed in the bottom-up stream in the occipital lobes for alpha wave blocking between reading in reflected light, i.e., print, and reading in radiant light, i.e., illuminated screens. There was also a commensurate difference between the two media with regard to heightened beta wave activity. In the parietal lobes, the scientists noted no significant difference in alpha processing, but did observe a significant difference in beta activity. In the top-down stream in the occipital lobes, they observed a significant difference for alpha blocking between reading print and reading radiant screens, as well as a significant difference in beta wave activity. In the top-down stream of the parietal lobes, there appeared to be no significant difference in alpha processing, but there was, once again, a significant difference in beta wave activity.
What might all this mean, or what conclusions did these neuroscientists draw? The findings indicated that reading from an illuminated screen stimulated increased beta wave activity, which means that more of an effort at concentration was required to read something off a screen than to read it in print. The brain had to work harder and expend more neuronal activity. The authors of the study speculated that the flicker effect, the pulsing of light from the radiant light source of an illuminated screen, “would appear to be the cause.” They contended that the “radiant light of a computer screen is an unusual source for the human eye and brain.”
But they drew even bolder conclusions from the activity they observed in the dorsal stream of the parietal lobes. A pattern of smaller, tighter beta wave activity indicated that the subject really had to “work harder” to view the same material on a radiant screen. They interpreted this, based on activities generally associated with this area of the parietal lobes, as not merely a matter of working harder to maintain attention, but an actual increase in what they termed “cognitive load.” The parietal lobes are responsible for processing luminance, so, they argued, it could very well be that radiant light shining directly into the eye, with its concomitant flicker, is the cause of this cognitive overload.
In general, alpha blocking is a useful measure of attention and orienting response, indicating the amount of processing going on as subjects pass from an eyes-closed state to alertness or to an activity that requires sustained concentration, like reading. But the beta results of their study, they maintained, were striking, and required some interpreting. Concentration on a difficult or demanding subject, like solving a challenging mathematical problem, will normally generate smaller, tighter beta waves, indicating the increased level of attention and cognitive load. But in this study, the text being read was pitched at the sixth-grade level! So the heightened beta activity offered strong evidence that the difficulty in processing the message lay in the medium.
Although this study had primarily to do with attention, attention is, as the researchers pointed out, the first step in cognition, and thus cognition, memory, and other complex mental processes would also be correspondingly affected. Citing other published experiments that have demonstrated that “flicker” also affects other areas in the brain, they soberly concluded their study: “Radiant screens negatively impact attention in the parietal lobes where eye movement is directed.” Staring into a radiant light source would appear to affect the brain as headlights freeze a deer. Add in the flicker, and you have a stroboscopic effect. I am being hyperbolic, of course, but, given the effort involved in inferential and interpretive reading, given the multiple areas of the brain enlisted and correlated in this sui generis activity which no gene prescribes, reading and interpreting complex material on a digital screen is like jump-starting the circuitry of the brain with a succession of minor strokes. Or, to be a little less dramatic, it’s like trying to nudge and flip your way skillfully through a demanding game of pinball, keeping the machine from tilting all the while, as someone else vigorously shakes the flashing backbox at the other end.
Numerous studies employing different techniques and experimental conditions, and focusing on other criteria, have come to very similar conclusions. Mutatis mutandis, and however measured and computed, studies suggest qualities such as “depth and concentration” and “reading comprehension” are what regularly appear to be diminished; and the flicker of screens has been variously shown to affect “degree of engagement,” “attention to detail,” and “cognitive focus.” This has been observed to be acutely the case where the reading involves, as it does with most archaeological publications, coordinating the activities of reading texts and viewing images. Virtually every study concurs: printed format facilitates this more easily, whereas in the digital environment “navigating between images and texts introduces distractions that diminish reading comprehension and experience.” Digital formats, studies tend to emphasize, often have restrictions in spatial layout that make them less than optimal for contextualization and for cross-reference or back-and-forth reference. As a rule, the cumulative evidence of such studies undertaken by researchers in different disciplines, from cognitive psychology to neurobiology, suggests that people process information more inferentially, analytically, and critically in print than in digital format. An article reviewing other such studies of digital screen vs. print media attention, whimsically entitled, after the Beatles’ lyric, “I read the news today, oh boy,” concludes that a digital format produces “a shallower, more fragmented, less concentrated reading.” “Continuous partial attention” is the way neuroscientist and psycholinguist Maryanne Wolf sums up the culture of reading screens in her recent book on the reading brain.
The explanations may be legion. Different visual systems in the brain process different kinds of information; motion, for example, is processed in a separate channel from other stimuli such as form and color. If the brain does indeed sense the flickering movement of what it sees on a digital screen, then this subtly disconcerting “movement” of something meant to be viewed as “still” possibly introduces a slight cognitive dissonance, to which the brain must adjust by engaging another neuronal pathway to suppress a stimulus it registers. A study conducted for the Royal Mail by the market research firm Millward Brown, which employed neuroscientists using fMRI scanning to examine brain responses to stimuli, found a marked difference between the way the brain processed physical vs. digital materials. Because the former is “internalized differently,” the study concluded, it becomes “a more constituent part of memory.” With print media, they observed higher activity in the area of the brain that integrates visual and spatial information. Physical material, they concluded, is simply more “real” to the brain, and so it engages spatial memory networks and thus stimulates memory better. The final words of a recent, rigorously experimental cognitive analysis comparing the difference between print and digital media, published in a leading marketing journal, read, “Put it in print.” Whatever the measure, print is preferred as the superior “cognitive medium…better for showing complex interrelations for the brain to process cognitively.” A study done on economists working at the International Monetary Fund showed that the preponderance of their work (by a very large percentage margin) is done through reliance on paper and paper reports, as opposed to reading econometric studies, charts, graphs, and tables digitally. One would think that if anyone would feel at home in a digital environment, it would be econometricians, given the extent to which all their working data is computer generated. But they too apparently prefer to analyze complex data in print.
The brain’s ability to make new circuits automatic and to eliminate distracting stimuli leaves more cortical space for complex thought processes: less time is spent decoding and filtering out inessential stimuli, and more time is left for deeper analysis of recorded and novel thoughts. Imaging studies confirm that the fluent reading brain activates newly expanded cortical regions across the frontal, parietal, and temporal lobes of both hemispheres during the comprehension process, which includes cognitive activities such as inference, analysis, and critical evaluation. The physical print medium seems more aligned with, and possibly contributed to the evolutionary development of, the brain’s optimal cognitive functioning.
There may also be good reasons that the scroll yielded in the 4th century to the codex, and the codex in the 15th and 16th to the printed book. (Although it is not possible to develop this analogy here, a strong case could be made that digital reading has many more affinities with reading scrolls than bound volumes.) The litany of anxieties over digital media does seem boundless (no pun intended). Though some of what follows is admittedly sketchy, it gave me pause as I perused the vast literature in this field, mostly alien to me. Digitizing has the effect, write those familiar with the process and interested in the ontology of the virtual, of homogenizing an image, despite digitization’s demonstrable ability also to enhance it in certain ways, because even if the image had originally been captured by means of light reflected off its surface, it is then rendered into a mechanically predetermined set of pixelated coordinates. “In a process almost opposite to making a mechanical reproduction, a physical object is dematerialized into its intangible digital surrogate.” An archaic English term, pixy-led, meant to be led astray by pixies. While there may at times appear to be genuine gains in zooming in on this or that detail in digital images, subtlety and textured pictographic nuance of other kinds, like richness of saturation and depth, are limited because of screen standardization. Film, even scanned and reproduced, can still store far more detail than any digital capture system.
Another topic that came up with surprising frequency in the neuroscience literature was the pronounced responsiveness of the brain’s visual pathways to edges. The brain apparently likes lines and edges. Its “music,” to paraphrase a remark of guitarist Richard Thompson, is “in the edges.” “Lines,” writes Semir Zeki without equivocation (a leading neuroscientist not much given to unequivocal statements), “constitute probably the most basic visual stimulus with which to excite a very important category of cells in the cortex.” (93)· Compared with an image floating on a screen, a high-quality print reproduction on good paper stock has a definitiveness reinforced by the layout of the journal article or book, crisply bordered and spatially defined by a top, bottom, margin, and spine. Print media have an organizational layout—typography, paper finish, stock—and these all influence a reader’s attention. Things like typeface, spacing, margins, binding, and heft contribute to a reader’s focused receptivity. Digital images may at a glance seem sharper and edgier, but that is because they are backlit. Relative to the radiant surface on which they appear, the eye perceives a diffuse luminance more than delineated edges, and the visual brain is less responsive to a uniformly luminous surface than to a sharp print. Eventually the blur the brain has already detected on the digital screen begins also to affect the eye. Contrast, which is generally more marked in a well-produced photographic image than on a backlit screen, stimulates the retinal cells and cranial neurons more than luminosity does. Clearly defined lines have the effect of drawing the viewer’s attention to the specific features of an image that the presenter wishes to emphasize. The brain likes those lines of demarcation exaggerated and will respond to them more directly, in the same way that we will respond more immediately to a caricature of a famous person’s face that, with a few sharp lines, exaggerates his particular identifying features and reveals something essential about him. A realistic full-color rendering is going to convey less of an impression, and less of that essential message, than the line drawing that expresses that essence. One actually has more control over emphasizing detail—and what features the presenter wants the viewer to study—in printed format than in digital, where the edges are in reality soft, if bright, and the eye of the viewer tends to float around what is being viewed or even, as in the case of 3D modeling, moves around the object viewed, having a virtually simulated three-dimensional experience but in reality merely making his own distortions.
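The point about contrast rather than luminosity has a simple analogue in image processing, and the toy sketch below makes it concrete (my own illustration, not from the neuroscience literature; it assumes the numpy library): a Sobel edge detector, like the edge-sensitive cortical cells Zeki describes, responds to contrast, not to absolute brightness, so a uniformly luminous field, however bright, produces no response at all.

```python
# An analogy from image processing (my illustration): a simple Sobel
# edge detector responds to contrast, not to absolute luminance.
import numpy as np

def sobel_edges(img):
    """Return the gradient magnitude of a 2-D grayscale array (Sobel)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

# A uniformly bright field: maximal luminance, zero contrast.
uniform = np.full((8, 8), 255.0)
# A sharp dark-to-light boundary: the same peak luminance, high contrast.
edge = np.zeros((8, 8))
edge[:, 4:] = 255.0

print(sobel_edges(uniform).max())  # 0.0 -- nothing to respond to
print(sobel_edges(edge).max())     # large -- the detector "sees" the line
```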
As Wolf and others note, electronic media conduce to specific topicality—their search engines presume the reader knows in advance what he is searching for—but a bound volume has a tone and expresses a point of view, and there a reader may make discoveries about things he didn’t even know he was interested in. (Interestingly, the most prestigious journals in science are just such general print journals, like Nature and Science, which are not disciplinarily bound at all and where lay and expert readers, even in other fields, may make unexpected connections and genuine discoveries.)
One final note: psychological studies have referred to a type of cognitive dissonance related to reading screens that they term “haptic dissonance.” One such study notes, “haptic perception is of vital importance to reading and should be duly acknowledged…When reading digital texts, our haptic interaction with the text is experienced as taking place at an intermediate distance from the actual text, whereas, when reading print text we are physically and phenomenologically in touch with the material substrate of the text itself.” Among the earliest written records we possess are those inscribed tablets from Mesopotamia, square or oblong pads of clay, about three inches across, which were made to be held comfortably in the hand. “The human nervous system,” another neuroscientific study submits, “has a special control mechanism for the coordination of the hand with the focusing muscles of the eye.” The earliest writers, in pictographs and then cuneiform, already knew that one reads what is held in the hand differently. Early Christian monastic orders viewed reading as a spiritual exercise and called it ruminatio, chewing the cud, implying something visceral, a material act of internalization. In Wallace Stevens’ famous lines, “The house was quiet and the world was calm. The reader became the book.” We have our electrical circuitry to be sure, but we are in the end bio-molecular, flesh and blood, organic beings, not cyborgs or holograms, at least not yet. Touch matters, as does the feel of the book in the hand, and the natural reflection of light off an organic surface, as we view and absorb what the eye is drawn to. For something to be truly digested, if being and book are to be one, the rest of the animal has to be dragged along too. We subjectively (and it seems our brains also agree) attach a different sense of permanence to an image on paper printed with chemical ink than we do to an ephemeral image that is dissociated from the hardware of the device on which it is read, like a twinkling star in a vacant sky. The sensory connection, a physical intimacy of interaction, even if it is only to the paper and the binding in one’s hand, apparently matters; and it represents a qualitatively different experience, a different engagement—enabling knowing and retention—from that of looking at a light flashing in cyberspace generated by a microchip interface. Even some postmodern theorists lament the absence of presence in the digital experience. Writes one, “Digital images are produced without the intermediaries of film, paper, or chemicals and as such never acquire the burden of being originals because they do not pass through a material phase.” Even in photographic reproductions, a mere image becomes a document.
Reading the writing on the walls has taken many daring and innovative forms. There is more than one way to decipher a mystery. Sir Henry Rawlinson, the Father of Assyriology, hung suspended from a rope 300 feet up to read some of the first cuneiform inscriptions carved into the side of a cliff. Ovid thought a lover’s secret messages were best read off her naked serving woman’s back. I make my brief today neither as an adventurer nor, despite appearances to the contrary, entirely as a Luddite, but as a humble and perhaps self-interested journal editor; and I am merely suggesting that the print medium, and in particular high-quality photographic reproduction, is still the best way to publish findings intended for serious study, which require close attention, careful scrutiny, and sustained concentration. The domain is clearly defined. Layout can create the desired relationships between image and text—coherence of context rather than a scroll, a click, and a flash. That which the presenter wishes to present is presented as intended, drawing particular attention, through photographic or printed reproduction, where the presenter intends it; and the viewer/reader is predisposed to receive and respond to it in the manner prescribed. Print layout offers a map of the terrain, a topography, in which you can orient yourself. People who read maps tend to know where they are, but people who rely on a GPS often have no idea. Moreover, the GPS, like the digital screen, moves with you, destabilizing your orientation; the fixity and contextualization afforded by the map enable a clearer sense of orientation. Unlike the evanescing starscape of pulsating pixels, unmoored in an electronic cyberspace, the print medium bestows a relative permanence, fixed lastingly on paper, with attention drawn to precisely those details the scholar has brought to light, and in a format best suited to being contextualized and internalized by the mind, analogous perhaps, and not irrelevantly, to the archaeologist’s larger project—that of making part of the permanent record, and thus achieving a kind of permanence for, those artifacts he has worked so hard, with the tools of advanced digital and photonic technologies, to recover and restore to their original condition and thereby preserve for posterity. The French critic Jacques Rivière wrote the following about Cubism, but he might have been describing the brain, or for that matter remarkable technological tools like Polynomial Texture Mapping, Reflectance Transformation Imaging, or laser scanning, as well as the ultimate need, as I argue, to preserve in the print medium, as permanent record, the truths they can reveal: “Sight is a successive sense; we have to combine many of its perceptions before we can know a single object well. But the painted image is fixed.”·
“Move still, still so,” as Florizel bids Perdita in Shakespeare’s The Winter’s Tale.
©Herbert Golder 2015
* Much of my ensuing discussion of the “reading brain” is heavily indebted to Maryanne Wolf’s account in Proust and the Squid: The Story and Science of the Reading Brain (Harper Perennial, 2007). I have sometimes quoted and sometimes paraphrased her here.
* Much of my discussion of the visual brain, specifically its determination of color and, more generally, its attention to unchanging properties rather than variable stimuli, is heavily indebted to several studies by Semir Zeki, notably his A Vision of the Brain (Blackwell Scientific Publications, Oxford, 1993); “Art and the Brain,” Daedalus, Vol. 127, No. 2, The Brain (Spring 1998), pp. 71-103, esp. 97ff.; and Inner Vision: An Exploration of Art and the Brain (Oxford, 1999).
· From the article cited above.
· Cited by Zeki in the article mentioned above, 84.