Raine Koskimaa

Raine Koskimaa is a Professor of Contemporary Culture Studies at the University of Jyvaskyla and Vice Director of the Finnish Centre of Excellence in Game Culture Studies. Koskimaa has published widely, especially on game cultures and digital literature, and his writings have been translated into several languages. His current research interests include eSports, games, and transmedia.

Contact information:
raine.koskimaa at jyu.fi

Reading Processes: Groundwork for Software Studies

by Raine Koskimaa

Noah Wardrip-Fruin’s book inaugurates a new publication series by the MIT Press, Software Studies, and does so in an impressive way. Wardrip-Fruin states his main impetus right at the beginning of the book: “…it isn’t just the external appearance and audience experience of digital media that matter. It is also essential to understand the computational processes that make digital media function.” (p. xi) To capture this emphasis on computational processes, Wardrip-Fruin has developed the notion of expressive processing. Under this umbrella, he discusses artificial intelligence applications, simulations, story generators, computer games, and electronic literature.

Expressive processing carries two separate meanings. When creating works for digital media, authors define rules for the system’s behavior. That is, computational processes are a means of expression for authors, and the authorial contribution may be located even more importantly on this process level than on the directly observable surface level. On the other hand, the computational processes express, through their design and form, decisions which may be connected to larger cultural contexts. Close examination of the processes may, in some cases, reveal quite different functionalities than the common descriptions of the systems claim (even the authorial descriptions may prove quite misleading, as Wardrip-Fruin frequently demonstrates). Wardrip-Fruin stresses the importance of this latter approach for understanding digital media and, even more importantly, for understanding software in general. This he sees as particularly important for the current information society, where it is crucial for people to understand, on the level of principles, the logic of software-based systems in such areas as surveillance.

Wardrip-Fruin is extremely well read and familiar with the relevant theories in the field, but he is not a grand theory builder himself. His main strength lies in his ability to analyze, interpret, and explain works, his own or others’, in a way that reveals their processes and how those processes bear both authorial intentions and contextual influences. His wide interests and expertise, ranging from early computer games to artificial intelligence experiments and the most sophisticated works of electronic literature, also enable him to demonstrate the general value of the notion of expressive processing across various cultural and academic fields. As such, this book is the perfect volume to begin the new publication series in software studies. Rather than building a theory for software studies, it works as a model of how to do software studies. The wide variety of materials discussed, however, may be the Achilles’ heel of the book. As we are all influenced by an endless array of information technologies and their software processes, Expressive Processing in a way includes everybody in its audience. Still, restricting the target group by modestly limiting the topics covered might have made this book even better.

The three effects

Expressive Processing revolves around three distinct effects: the Eliza effect, the Tale-Spin effect, and the SimCity effect. Wardrip-Fruin provides the reader with lengthy, and rewarding, discussions of each. The Eliza effect receives its name from the well-known AI application, a simulated psychiatrist and dialogue generator programmed by Joseph Weizenbaum. The Eliza effect refers to the “illusion that an interactive system is more ‘intelligent’ (or substantially more complex and capable) than it actually is.” (p. 25) The whole issue, however, is not quite that simple. Eliza is a well-known program, repeatedly mentioned in the literature, but the common descriptions are often inaccurate, and Wardrip-Fruin makes an effort to clarify some of the misunderstandings. These begin with the name Eliza, which refers to Weizenbaum’s software system employing natural language processing. Eliza could be modified by running it with various scripts, the most famous of these being the Doctor. Thus, when people talk about Eliza, they usually mean the Eliza system running the Doctor script; the more correct expression would be Eliza/Doctor. As the first example of the expressive processing approach in practice, there is a detailed analysis of how Eliza/Doctor actually works (based on Weizenbaum’s account in his 1966 article). It explains both how the program decomposes the input text from the user and how it then composes the reply. Even though the system is quite simple, this detailed discussion reveals that it is still somewhat more complex than commonly assumed.
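To make the decomposition/reassembly mechanism concrete, here is a minimal sketch in the spirit of Eliza/Doctor. The keywords, patterns, templates, and pronoun swaps are invented for illustration and are far cruder than Weizenbaum’s actual script; the point is only the general shape of the process.

```python
import re

# Hypothetical miniature of the Eliza/Doctor keyword mechanism: a keyword
# triggers a decomposition pattern, whose captured fragment is spliced
# into a reassembly template (with pronouns crudely swapped).
SCRIPT = {
    "mother": ("my mother (.*)", "Tell me more about your mother, who {0}."),
    "dream":  ("i dream (.*)",   "What does dreaming {0} suggest to you?"),
}
SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reply(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for keyword, (pattern, template) in SCRIPT.items():
        if keyword in text:
            match = re.search(pattern, text)
            if match:
                # Decompose: keep the captured fragment, swap pronouns.
                fragment = " ".join(SWAPS.get(w, w) for w in match.group(1).split())
                # Recompose: splice the fragment into the reply template.
                return template.format(fragment)
    # No keyword matched: fall back to a content-free prompt,
    # much as Eliza/Doctor does.
    return "Please go on."

print(reply("My mother is always criticizing me"))
# Tell me more about your mother, who is always criticizing you.
```

Even this toy version shows why the illusion breaks down so quickly: any input without a matching keyword collapses into the generic fallback, and a playful user soon learns to force exactly that.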

Even with all the nuances of the system, it is still surprising how strong an impression of meaningful discussion Eliza/Doctor can produce. This Wardrip-Fruin, like many others, attributes to the cleverly chosen and strongly limited context (a first visit to the therapist), and to the human tendency to regard a system which appears to exhibit intelligent behavior as intelligent. He follows Janet Murray in taking the discussion in the direction where Eliza/Doctor is regarded as a dramatic character in an expressive media environment. He especially stresses how Eliza/Doctor invites people to play with it. This playful attitude quickly leads to breakdowns in the conversational logic, and Eliza/Doctor reveals its limited nature. It is right here that the notion of expressive processing shows its strength: the ways in which Eliza/Doctor breaks down reveal the underlying logic of the system and allow for ‘learning the game’. After a period of training, the user is able to steer the conversation in specific directions, either maintaining coherence or forcing absurdities.

The chapter on Tale-Spin, and the Tale-Spin effect, includes the most impressive scholarly investigation in the book. The story generator programmed by James Meehan is not quite as famous as Eliza, but it is still often referred to in discussions of AI and digital literature. As with Eliza, many of the secondary sources are plagued with errors and misunderstandings. Wardrip-Fruin has not been content to rely on secondary sources, but has gone directly to the primary source, interviewing Meehan himself, looking for stored records of the program code and its output, and managing to get his hands, with Meehan’s help, on some rare, authentic materials (even though the original source code of the program seems to be lost). Very much like Eliza, Tale-Spin has a complementary second part, a natural language generator called Mumble. What an ordinary user experiences is the textual output of Mumble, in a way just the tip of the iceberg of the complex AI processes taking place in the Tale-Spin system. Relying on that surface alone, a user simply has no way to infer the underlying logic which prompted that output in the first place. The overly simple sentences produced by Mumble hide the conceptual dependency expressions and planbox-based processes, which give rise to a full set of fictional possible worlds and related character beliefs the system needs to evoke for each story. The Tale-Spin effect, then, is the opposite of the Eliza effect.

Finally, there is the SimCity effect, based on the hugely popular simulation game SimCity. Whereas the first two effects somehow imply a failure in the performance of the system, or at least a discrepancy between the processes and the surface experience, the SimCity effect has a predominantly positive tone. The SimCity effect begins the same way as the Eliza effect: the player has some expectations of how cities work, and in the initial phase the game seems to respond to those expectations. Soon enough the illusion unavoidably breaks down, as also happens with the Eliza/Doctor dialogues. In SimCity, however, “the elements presented on the surface have analogues within the internal processes and data.” (p. 302) Even though the internal model (inspired by research on urban dynamics) is simplified and, to some extent, artificial compared to real cities, there is a meaningful relation between the surface and the processes, and also between the game and the reality it simulates. This opens up the possibility for the player to develop an understanding of the system processes through experimenting with the game. This seems to be the ideal of expressive processing for Wardrip-Fruin: playing with the system is simultaneously learning the system, and mastering the game requires a good understanding of how the processes work. This happens with Eliza as well, but the simplicity of its processes cannot motivate prolonged play. This sort of transparency of system processes is related to the notion of ‘critical technical practices’ Wardrip-Fruin has written about elsewhere, in which the aim is to empower the user by avoiding opaque interfaces. Only if the processes are inferable from the surface elements is it possible to learn the system logic, and without understanding the internal logic, the user is at the mercy of the technology. The SimCity effect stands for this playful process of learning the system’s processes that some software products enable.

Data and processes

One of the basic distinctions in Wardrip-Fruin’s model of digital media is between data and processes. He goes through some of the most common process-shaping tools in software, and especially game, development. He discusses, with lucid sample cases, how quest flags, dialogue trees, and finite-state machines work in games like Neverwinter Nights, Star Wars: Knights of the Old Republic, and Prince of Persia: The Sands of Time. All of these techniques have their strengths (because of which they are employed in the first place) and weaknesses, which lead to problems and failures of specific kinds, as Wardrip-Fruin convincingly demonstrates.
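The flavor of these techniques, and of the failures they invite, can be shown with a small sketch. The quest, its states, and its events below are entirely hypothetical and are not taken from any of the games mentioned; the sketch only shows the general shape of a quest modeled as a finite-state machine over flags.

```python
from enum import Enum, auto

# Hypothetical fetch quest modeled as a finite-state machine.
class QuestState(Enum):
    NOT_STARTED = auto()
    ACCEPTED = auto()
    ITEM_FOUND = auto()
    COMPLETED = auto()

# Legal transitions: (current state, event) -> next state.
TRANSITIONS = {
    (QuestState.NOT_STARTED, "talk_to_giver"): QuestState.ACCEPTED,
    (QuestState.ACCEPTED, "pick_up_item"): QuestState.ITEM_FOUND,
    (QuestState.ITEM_FOUND, "return_item"): QuestState.COMPLETED,
}

def advance(state: QuestState, event: str) -> QuestState:
    # Events illegal in the current state are silently ignored -- exactly
    # the rigidity behind the characteristic failures: picking up the item
    # before accepting the quest does nothing, however sensible it seems.
    return TRANSITIONS.get((state, event), state)

state = QuestState.NOT_STARTED
state = advance(state, "pick_up_item")   # ignored: quest not yet accepted
state = advance(state, "talk_to_giver")  # NOT_STARTED -> ACCEPTED
print(state.name)  # ACCEPTED
```

The strength and the weakness are the same property: the transition table makes authoring tractable, but any player action the author did not anticipate simply falls through the table.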

It is these discussions of game design techniques, alongside the SimCity effect section, which are of most direct interest to readers oriented toward games and game studies. There is probably not much new here for people engaged in game design, but for others it is enlightening to learn, in particular, why techniques like quest flags frequently lead to certain types of system failures. On the other hand, true to the SimCity effect, many players without any knowledge of game programming have come to an intuitive understanding of these processes and their shortcomings through extensive play.

As his interest lies in expressive processing, he also pays attention to the question of process intensity, a notion coined by Chris Crawford in 1987 and referring to the balance between process and data in a given work. Computer games, for Crawford, were the prime examples of process-intensive software. The situation has since changed, so that in 2006 Greg Costikyan argued that most of the work of game creation goes into art assets, that is, data development rather than processes. It is one of Wardrip-Fruin’s strengths that he is ready to problematize even his most central concepts. Right at the beginning of the book he acknowledges that the distinction between data and processes is not clear-cut at all, and in the discussion of process intensity he must note, again, that the concept has some inherent problems. It is still useful, and especially so for Wardrip-Fruin, as he emphasizes the intensity of behavioural processing, pointing to peculiarities of digital media authoring.

When it comes to computer games, Wardrip-Fruin seems to expect richer behaviour through more complex processes, and more ambitious simulation through more complex modelling. The development trends in AI research and electronic literature offer some potential solutions to this end. One would hope that game developers, too, read this book with an open mind, despite its all-encompassing scope.

Artificial intelligence, models, and simulation

Quite a chunk of Expressive Processing deals with the main trends in AI development, related model-making, and models of ideology and belief. The branches of ‘neat’ and ‘scruffy’ AI are described as background for Wardrip-Fruin’s own approach. Neat AI, with John McCarthy as the leading figure, aimed at modelling intelligence and intelligent behaviour as a set of logical relationships which could be expressed in precise mathematical terms. The scruffy branch, with Roger Schank as the leading figure, targeted quite different tasks, related to linguistics and psychology, and closer to unordered everyday practices. Many of the examples Wardrip-Fruin discusses belong to the scruffy branch, and it is quite clear that scruffy AI is much closer to his own approach than the neat one.

One of the intriguing distinctions in the book is between simulation-oriented and process-intensive approaches to digital fiction. While most computer games and a part of electronic literature fall into the simulation-oriented category, Wardrip-Fruin’s own work and interest are clearly steered towards the process-intensive approach, employing, for example, n-grams and other tools for linguistic play. This direction he labels ‘textual instruments’, which open up the possibility for readers to play with the works, in the sense that one might play a musical instrument. There is an uneasy tension here, as from the process-oriented perspective the textual instruments seem much more interesting and challenging cases than many of the simulation-oriented ones. The bulk of the book, as well as of the works that exist, falls into the simulation category, though. Still, it comes as quite a surprise when Wardrip-Fruin admits, just before the Conclusions chapter, that during the writing of this book his own attitude has changed, so that “the richness of the simulative tradition - […] I found on electronic literature and game design, both of which tend to focus on approaches with low process intensity - has convinced me of the potential of this direction.” (p. 408) Maybe this kind of mental hovering has been one reason for the disparity of the cases discussed, and for the trouble in limiting the perspective somewhat. It might also be seen as a sign of the powerful influence of the blog-based peer review on the attentive author.
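For readers unfamiliar with n-grams, a minimal sketch of the kind of linguistic play involved may help. This is not code from any of the textual instruments discussed; it assumes the simplest case, a bigram model, and the training sentence is invented.

```python
import random
from collections import defaultdict

# Build a bigram (n=2) model: for each word, record which words follow it.
def build_bigrams(text: str) -> dict:
    words = text.split()
    model = defaultdict(list)
    for current, follower in zip(words, words[1:]):
        model[current].append(follower)
    return model

# Generate text by repeatedly sampling a follower of the last word,
# producing output that echoes the source's local word patterns.
def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    random.seed(seed)  # fixed seed so the playful output is repeatable
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the reader plays the text and the text plays the reader"
model = build_bigrams(corpus)
print(generate(model, "the"))
```

An instrument built on such a model invites exactly the play Wardrip-Fruin describes: the reader steers the statistics by choosing the corpus and the starting word, and the output remains recognizably of the source while never simply reproducing it.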

This confession, and the sympathy with which the author discusses the SimCity effect, however, give reason for some doubts on my side. Being the precise and honest critical reader he is, Wardrip-Fruin often seems somewhat unsatisfied even with those works he considers the most ambitious and sophisticated. The Sims, in all its iterations, and Façade, the interactive drama by Michael Mateas and Andrew Stern, come closest to the standard and receive a sort of flagship status in the field of digital fiction. As I belong to that part of the human population which has never seen any appeal in The Sims, this does not give me much to look forward to. To me the textual instruments, like Wardrip-Fruin’s own The Impermanence Agent, hold much more promise. But it has to be granted that Wardrip-Fruin manages to convince the reader, through his sheer enthusiasm for the works of digital fiction he discusses, of their expressive potential. I don’t expect much of the sims of the future, but after reading this book, I certainly want to believe that many interesting and rewarding new works of digital fiction will appear along that road as well.

Experimenting with Blog-Based Peer Review

In the editing phase of the book, alongside the regular peer review, a new form of blog-based peer review was also employed. This took place on the Grand Text Auto blog, where Wardrip-Fruin is one of the authors, alongside Mary Flanagan, Michael Mateas, Nick Montfort, Andrew Stern, and Scott Rettberg. A new section was posted to the blog each weekday morning, and the participants could comment on it paragraph by paragraph. Much of that online discussion has ended up in the book, mainly in endnotes, but also as revisions to the main text. This is a highly interesting experiment, indeed, and one that seems to have been worth undertaking. Even though it seems inevitable that academic publishing and peer review practices have to change to better accommodate contemporary online practices, I am not quite convinced that this model will become common. The main problem, to me, lies in the high level of activity and dedication it requires from the reviewers. Not only do they have to read extensive manuscripts on quite a strict schedule, but they must also be ready to engage in the possibly lengthy discussions that follow. This kind of process is also highly demanding for the author himself. Many of these problems are addressed by Wardrip-Fruin in an Afterword to his book. Judging by the result, the blog-based peer review has been a success in this case, but in many other cases it may not function nearly as well. Several factors have to come together: there must be a proper blog to begin with (Expressive Processing and Grand Text Auto seem to be a perfect pair), there must be enough people willing to engage and share their expertise in a process much more demanding than the usual peer review, and finally, there must be a manuscript as inspiring as Wardrip-Fruin’s Expressive Processing to work with.



©2001 - 2011 Game Studies Copyright for articles published in this journal is retained by the journal, except for the right to republish in printed paper publications, which belongs to the authors, but with first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.