- I. Background
- II. The Database at the Ends of Literature
- III. The Role of the Electronic Literature Organization
- IV. Toward A Semantic Literary Web
- V. Process
- VI. The Structure of a Digital Literary Archive
- VII. A process of legitimation for publication in databases
- VIII. The relocation of the literary in and through databases
By Joseph Tabbi
The Consortium on Electronic Literature (CELL) is a networked, edited resource with scholarly standards, consistent with “principles of the Open Access movement that seeks to maximize the free exchange of scholarly knowledge.” At the same time, it is a place where participants can access, identify, and write about literary works as these are created in multiple media and as they appear in databases.
Principles and Goals
Our project is meant to be for readers of born (and genetically) digital writing what the Public Library of Science has been in the sciences. Indeed, the migration of literary scholarship to electronic writing spaces means that humanities scholars are now in a position to emulate the sciences by situating journal articles and creative writing at a higher level of importance than (print) book publication. A wider recognition that scholarship and creative writing advance as much, or more, through articles and actively evaluative curatorial practices as through capture alone is a primary institutional goal for the CELL project.
We also intend the CELL project to be, for research scholars in native digital writing, what Wikipedia has been to researchers generally: an open access, non-commercial resource where reliable information and relevant works can be found. Unlike Wikipedia, however, which disallows original research,* the content contributed by CELL partner databases is entirely original. Insofar as literary knowledge by its nature is processual, contingent, and contentious, the study of literature is entirely consistent with current best practices in digital media. The relocation of reading practices in and through databases promises to renew scholarship and reinvigorate the idea of a “general audience” for literary work. (The narrowing of the “digital divide” is thus as much a pre-condition of an emergent digital world literature as the expansion of literacy through institutions of education was a condition, according to Benedict Anderson, of the rise of the nation state.**)
* http://en.wikipedia.org/wiki/Wikipedia:No_original_research
** Anderson, Benedict R. O'G. (1991). Imagined Communities: Reflections on the Origin and Spread of Nationalism (Revised and extended ed.). London: Verso.
Numerous databases exist for the storage of essays; some of these are participants in our Consortium though few have the searchability that is a feature of the CELL website. In contrast to JSTOR, Hathi, and other enclosed sites, the common search engine gathers results not from journals and sequestered sites but from partner sites that each allow readers to access works across databases using a shared set of taxonomies. In this way, a search should call up all entries that have been tagged (as literary) by editors in each member database.
Peer to Peer Review
What we present is freely available but also actively evaluated by an integrated community of general readers and dedicated scholars. Entries on our member sites are signed and participants are recognized as authors, even as scholars and readers at each member site identify and comment on work written by others.
The technical basis of our grouping is a shared search engine developed by the NT2 in Montreal. Working across databases hosted at each partner site, the engine enables collaborating scholars, curators, and editors to join together for purposes of cross-referencing and active scholarly collaboration in an online research environment. The CELL project adopts the MODS metadata standard and uses the OAI-PMH transfer protocol to allow members and all others to look over contents and to index resources of their own. Maintained by the Library of Congress and continually under development there, the Metadata Object Description Schema (MODS) seemed to us to be the right schema for the type of literary-critical semantic content the CELL project is transferring between and among partners. The OAI-PMH data transfer protocol was selected because its XML schema is compatible with MODS.
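By way of illustration - a sketch only, using an invented record rather than an actual CELL export - a MODS description of the kind a partner might expose over OAI-PMH can be read with standard XML tooling:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical MODS record such as a partner database
# might return inside an OAI-PMH ListRecords response.
MODS_NS = "http://www.loc.gov/mods/v3"
record = """<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>afternoon, a story</title></titleInfo>
  <name type="personal"><namePart>Michael Joyce</namePart></name>
  <genre>electronic literature</genre>
</mods>"""

root = ET.fromstring(record)
# MODS elements are namespace-qualified, so lookups must carry the namespace.
title = root.find(f"{{{MODS_NS}}}titleInfo/{{{MODS_NS}}}title").text
author = root.find(f"{{{MODS_NS}}}name/{{{MODS_NS}}}namePart").text
print(f"{title} - {author}")
```

The element names (titleInfo, name, genre) are standard MODS; the record content and the choice of fields are our own for the example.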
On the user level, the device operates mainly through a faceted search, which required all active members to agree on a shared set of taxonomies. These identifiers, which are conceptual as much as indexical, emerged from a decade of research in the field of electronic literature, its creative works and scholarship. And it is appropriate, for a field that takes its material affordances seriously, that the semantic, taxonomical layer is not merely an instrumental necessity for calling up search results. Rather, by bringing our taxonomies forward, we offer readers of individual works a set of vocabularies that describe the way each work participates in an emerging field.
The Consortium on Electronic Literature (CELL) was initiated in 2009* by the Board of the Electronic Literature Organization for the purpose of coordinating international projects in the emerging field of electronic literary arts and scholarship. In the spirit of collaboration and sustained participation in an emerging field, these meetings have been hosted by various organizations - by the ELO, by our European, Canadian, and Australian partners, and by the Center for Literary Computing at West Virginia University who secured funding for the development of a shared search engine that went live in May 2015.
*Formal letters of agreement between the U.S.-based ELO and selected projects in the U.S. and internationally were exchanged that year, and members have developed specific collaborations in meetings at the LitNet project in Siegen (Winter 2008), the Maryland Institute for Technology in the Humanities (Summer 2008), Washington State University - Vancouver (Summer 2009), the University of Colorado, Boulder (Winter 2009), Brown University (Summer 2010), the University of Western Sydney (Winter 2010), the University of Bergen (Summer 2011), West Virginia University (Summer 2012), Paris 8 (Summer 2013), and the University of Wisconsin - Milwaukee (Summer 2014).
As scholars, we see no harm in conveying our legacy texts from print to electronic media - or, for that matter, the overlooked and forgotten texts, the diaries, diatribes, scattered manuscripts and scribbles that until now have languished on library shelves. But an expanded sample cannot, of itself, describe a field. The mining and mapping of such fields for frequencies and shifts in usage raise issues of linguistic and social, not necessarily literary, interest. When it comes to our own scholarly writing, the crossover to digital media of essays, articles, and books - for all its usefulness in terms of preservation and credentialing - has not done much toward the realization of the early promise of hypermedia. Notional desktops, notebooks, pages, and watermarks scarcely realize Ted Nelson’s pioneering concepts* of transclusion, transdelivery, or parameterization. And scholarship is scarcely advanced by the predominance of PDFs whose text cannot be clipped and pasted into one’s own documents - not if we’re reading with an eye to our own scholarly or imaginative writing. Too many commercial and academic databases, instead of staging encounters among artists and audiences, have become, in effect, enclosures of literary art and humanities scholarship. What our literary Consortium intends to add to the expanded database is a way of structuring the scholarly discussions that, today, touch on literary works more directly than was ever the case with the printed book.
* Nelson, T. H. (1981). Literary Machines. Mindful Press.
II. The Database at the Ends of Literature
Our Consortium arrives at the end of electronic literature, in the sense that the inclusion in databases of literature and its scholarship is the achievement or goal of a unified field and discipline. Previously, before CELL and its several affiliated projects, there was no formal way to bring together, make accessible, and make visible electronic literature as a global phenomenon. The fact that all of textual production can now be brought into databases, and that these can now (in principle) be brought into contact with one another, can be said to mark the end of new media as a space of innovation. Creativity might flourish, in works that explore and deform the affordances offered by commercial and customized media; such works might locate themselves on the commercial surface or collaborative deep Web, but the relocation of our work in an interconnected global network is complete. Arguably (and the argument was debated by Mark Hansen, Ursula Heise, Megan Massino, Arielle Saiber, and myself in the 2007 meeting of the Modern Language Association) we have arrived this late, in an era of digitally networked media, at the realization of the centuries-long ideal of a world literature: the same ideal articulated, but never really enacted, by Goethe in the 18th century and Marx in the 19th.
In part, the change is quantitative. Franco Moretti may have been the first to state explicitly how databases might take scholarship away from a model where one or two hundred canonical works can be thought to represent, for example, “Nineteenth Century Literature.” Moretti saw how subsequent students of comparative or international literature have failed to “live up” to Goethe’s Eighteenth Century vision of a “world literature in formation.” That was before Moretti launched his own data mining projects at Stanford and published his evaluative books, the Atlas of the European Novel 1800-1900 and Graphs, Maps, Trees. For Moretti, neither close reading nor the comparative study of so small a selection could ever grasp literature as the “collective system” that it is, one that needs “to be grasped as a whole.” The inclusion at present of millions of published books in databases (by JSTOR, Google Books, Hathi, along with new work that today must number thousands - works that have not been carried over but instead were generated in digital environments) brings an entire field into proximity where it can be, if not read, then mined by programs for patterns and practices. Such programs currently are being designed by literary scholars for the purpose of exploring conceptual and linguistic trends and (one hopes) answers to cognitive and critical questions of works that cannot be, and needn’t be, read by any one or several period specialists, let alone general readers over the course of a lifetime.
Even if we recognize, with Moretti and Karl Marx, that quantity changes quality, and even as we observe the ten or twenty or more million books being scanned from America’s public libraries by Google (through a curious interpretation of U.S. law concerning fair use*), we should bear in mind that those cognitive and critical questions - the ones that define a work’s reception - can be raised only by literary practitioners in contact and conversation with one another. This was of course the whole point of Goethe’s initial insight into the potential of a world literary formation - when he noticed that the reception of one of his plays in France was generating more talk than he’d ever heard in his native Germany, and such conversations were circulating among ordinary theater goers but also among tastemakers in positions of cultural and economic power. And what can the internet bring to literature, beyond a vastly expanded field of storage, if not just this ability to track conversations and to turn the paratext into something that can itself be followed, and studied, and turned to account?
The literary field, then, can be said to be closed, since it is only a matter of time before all that is storable will be stored, if not accessed, read, and received by a literary audience. That should not mean, however, that our scholarly relation to the corpus becomes wholly curatorial. The resituation of literature in new media environments is at present mainly a matter of continuing to bring works that readers identify as literary into databases whose contents are linked one to another (through keywords and contexts) and available to all readers according to Open Access principles. No further technical innovation is needed for electronic literature (or, in some usages, “born” or “native digital” literatures) to be recognized as a field open to scholarship. The construction of a worldly literary discipline depends on the searchability of our databases, how the conversations around works are opened, and to whom. (To which publics, and to what manners of reading - whether reflective or performative; enclosed by firewalls or interacting; cognitive or convivial, and so forth.)
* Needless to say - though entirely needful outside the realm of practicing authors and scholars where such legal decisions are made - the Google defense of “fair use” only incidentally allows for the free citation of passages from other works in the context of scholarly essays, reviews, and creative remixes. What Google has in mind is arbitrarily leaving pages out of the works they make freely available on the Internet, which at present (mid-2015) does nothing at all to increase the readability or usability of a document and in fact is an insult to anyone wishing to read or interact with any of the millions of works so made available: this distinction, between printed books under glass that are (literally) “read only” and born-digital writing that is only interactive, can be said to distinguish pre- and post-digital writing.
III. The Role of the Electronic Literature Organization
From its inception, the Consortium on Electronic Literature signaled a para-institutional interest in a database that could document the field, presenting scholarship alongside an emerging corpus; with creators and scholars each contributing to an evaluative process that at once identifies and makes use of technical affordances for creative and scholarly purposes. The Electronic Literature Organization has served as one institutional model for such projects (with its Electronic Literature Directory and series of Electronic Literature Collections); it has increasingly become, in its conferences and collateral publications, a nexus of international discourse for them. The natural evolution has been a formalization of the Consortium and a realization that the organization's core aim of documenting the field is essentially being carried out by the community - through members creating specific user accounts with the ELD and the ELC, but also through the formation of entire databases with formal links to these earlier models. The formal challenges to pooling all resources while respecting the integrity of members has resulted in the creation of a common portal for electronic literature, intending to cultivate critical and creative attention and to formally validate work that contributes to this aim in conversation with our peer (and aspirant) communities worldwide.
IV. Toward A Semantic Literary Web
Keywords and Categories
“The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.” - “W3C Semantic Web Activity,” World Wide Web Consortium (W3C).
The data already is being shared and “mined” by literary scholars, notably those who have followed Franco Moretti’s encouragement of “distant reading,” which focuses on what can be mapped and graphed (as trends and frequencies) rather than engaging with verbal expressions that evoke further expression in readers and aspiring writers. ELMCIP is the first of our members to have explicitly adopted a distant reading model. The latter, expressive and conceptual engagement, has been the focus of the ELO’s Electronic Literature Directory. (See Rettberg, Scott and Walker Rettberg, Jill, “Mining the Knowledge Base: Exploring Methodologies for Analysing the Field of Electronic Literature”; and Joseph Tabbi, “Toward a Semantic Literary Web.”)
Because the conceptual, theoretical and semantic categories for the literary field are continually in development (and the emergence and disappearance of keywords is itself a topic of literary study), members of the CELL editorial group decided to use the data gathered at member databases for the development of taxonomies that would filter the results of keyword searches. Even the field name can vary according to a member’s aesthetic or geographical location: what emerged as "electronic literature" in the U.S. co-emerged, for example, as "littérature numérique" in France and "littérature hypermédiatique" in Francophone Canada. The various terms, some of them differing even within the same language, cannot and should not be conflated, any more than “humanities computing” and “digital humanities” should be conflated or preferred, from a strictly scholarly perspective. Rather, the extent to which literature can be regarded as numerical, the persistence or overcoming of "humanism" as a period descriptor, and the degree to which electronic media can support the generation and circulation of literary work, are questions that should remain open topics of scholarly investigation and institutional renewal.
The objects of study, similarly, are presented for discussion and evaluation, not for uncontested inclusion in a disciplinary canon. The literary objects that have emerged in new media, moreover, will themselves contribute directly to the creation of a terminology for the discipline. Rather than having a discipline evolve and form itself separately from its objects of study, the discipline emerges out of the objects of study and is in a very real sense co-evolving with them.
As literary value in electronic media can only emerge from experiences recorded by many readers, so must our categories arise from the free tagging of works. The tagging is “free” and the documentation is ongoing because both processes are accomplished during acts of reading that are pleasurable and voluntary, circulated among peers not assigned as a task. Individual records are displayed on each of the member sites, and these serve as an important resource for our cross-site searches.
Yet tagging by readers, curators, and editors is only a first step - if our individual, variously impressionistic, instrumentalist, or argumentative terms are ever going to inform a field under development. Over time, our members have mostly observed how our own projects end up using a series of specific terms, or taxonomies, instead of free tagging. (See for example the terms set out on site by ELMCIP, NT2, I Love E-Poetry, PO.EX, and others.)
In the spirit of OuLiPo, the Consortium on Electronic Literature recognizes that literary production, and its terms of understanding, always moves forward under constraints, whether these are conscious or unconscious, put in place by others or imposed on oneself during the act of composition. Our members experienced the limitations on searchability after decades of free tagging. Though the practice of free tagging might convey words in the mind of a reader (one committed enough to a given work to generate and display the tag in the first place), the results among our database sample - thousands, not hundreds of thousands, of terms - were too heterogeneous a set to be useful; too personalized to be conveyed to others for independent consideration in their own acts of reading works and reaching an understanding of a field.
Had the set been truly enormous, there would have been some self-organization observable in tags generated freely over several years by various member databases. But this particular set, though too large for any one reader to absorb, is still small enough that it is merely heterogeneous, not capable of generating patterns through human observation or data mining. Hence the accumulation of terms that were rarely if ever searched directly by newcomers to the field produced fewer discoveries, not more. For this reason, the project leaders decided to go with a set of faceted search terms under a small number of categories, namely: Year of Original Publication; Work Language; Publication Type; Procedural Modality(ies); Mechanism(s); and Format(s).
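A toy sketch of how such a faceted search narrows results by the agreed categories (the field names and records here are assumptions, not the Synapse implementation):

```python
# Hypothetical records tagged under a few of the agreed facet categories.
records = [
    {"title": "Work A", "work_language": "English",
     "publication_type": "Web", "year": 1999},
    {"title": "Work B", "work_language": "French",
     "publication_type": "Web", "year": 2004},
]

def faceted_search(items, **facets):
    """Return only the records that match every requested facet value."""
    return [r for r in items
            if all(r.get(field) == value for field, value in facets.items())]

# Filtering by Work Language narrows the two records to one.
print([r["title"] for r in faceted_search(records, work_language="French")])
# → ['Work B']
```

Unlike free-text matching, each facet value here must come from the shared taxonomy, which is what makes results comparable across member databases.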
With this discovery and these criteria in mind, our search routine is not free and does not rest with crowd-sourced knowledge and free tagging; rather, our members actively shape this wealth of documentation into categories and values that can be discerned, advanced, or (what is crucial for scholarship) challenged. Instead of simply adding more tags, the CELL editorial group creates a consensual layer of categorizations designed to facilitate the search and semanticize search results. A search at the CELL site displays entries from all the member sites. The displays do not simply contain text that matches a given keyword; they are synched rather to an articulated semantic field. That is the point of our members’ sharing a common agreed set of taxonomies - namely: a CELL search is contextual, and the context is structured not through free tagging alone or crowd sourcing, but through extended conversations among dedicated scholars that describe the literary qualities and the categories. In this way, our searches describe cultural and literary economies much as Google searches describe a working cognitive economy. The difference of course is that ours operate on a numerically small scale but with robust and concentrated interest among communities of researchers. It is a constituted field in Bourdieu’s sense of the term, whose constitutive works can be returned to over time, and so advanced.
The CELL taxonomy, once determined, is fixed and identified as a given “version.” Versions will not change, but new versions are released as needed and made public here on the site.
To participate in the global search, every partner will tag their content with the CELL taxonomy. This is one of the reasons why the taxonomy is cast as a “version.” It is onerous for our editors to retag all contents if a new term is added to the list, so a re-versioning will occur only when the CELL members observe, and argue in favor of, a shift in the categories. For this reason, CELL has both a technical and an editorial procedure in place for creating and pushing out new versions of taxonomies.
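The versioning policy described above can be sketched as follows (the category and terms are invented for illustration; they are not CELL's actual taxonomy):

```python
# Each released taxonomy version is frozen; a record tagged under "1.0"
# stays valid until members argue for, and release, a new version.
TAXONOMY_VERSIONS = {
    "1.0": {"Publication Type": ["Web", "CD-ROM"]},
    "1.1": {"Publication Type": ["Web", "CD-ROM", "Mobile App"]},
}

def is_valid_tag(category, term, version):
    """Check whether a tag belongs to the frozen taxonomy of a given version."""
    return term in TAXONOMY_VERSIONS[version].get(category, [])

print(is_valid_tag("Publication Type", "Mobile App", "1.1"))  # term added in 1.1
print(is_valid_tag("Publication Type", "Mobile App", "1.0"))  # absent from 1.0
```

Pinning records to a version is what spares editors from retagging all contents every time a single term is proposed.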
Free tagging and Taxonomies
The difference between keywords typed in by users and taxonomies, which is the difference between private reading and public, shareable categorization, can be understood on a technical level: The keywords are the words a user will put in a search box in order to look for specific words in an entry; results are then displayed by the search engine. The taxonomies act more like a filter or semantic structure of all content displayable by the search engine.
With each new version, taxonomic terms are added to existing records in the partner databases.
A Semantic Overview
The use of a controlled set of taxonomies allows the editorial group to categorize the results of a CELL search and make them available for researchers and educators. By drawing on our members’ expertise as scholars and editors of electronic literature, we can create a semantically rich and usable set of taxonomies. Moreover, just as our project sets out to build a comprehensive search engine for the field, so too can these taxonomies provide an overall vocabulary - agreed terms and descriptors - for the field. The outcome will be a semantic description of electronic literature, a fundamental resource and a source of scholarly value comparable to M. H. Abrams’s Glossary of Literary Terms for literature of the print era.
Synapse (The CELL Search Engine) and Membrane (The Editorial Working Group)
Over years of operation, through subjective free tagging, our member databases can be said to have described a literary field. The CELL search engine, which makes searching across databases possible, cannot for its part rely on free tagging. And neither can the editorial process outsource the work of categorizing and evaluating. The task of the CELL editorial group, Membrane, is to create a controlled vocabulary for accessing the many thousands of tags generated freely and subjectively by our member databases. (MEMBRANE: Mingling Electronic Markups By Retrieving Archived Network Enscriptions.)
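A minimal sketch of Membrane's task, assuming an invented mapping rather than CELL's actual controlled vocabulary:

```python
# Hypothetical mapping from free, subjective tags to controlled terms.
CONTROLLED = {
    "hypertext": "Hypertext",
    "hyperfiction": "Hypertext",
    "generative": "Text Generator",
    "generator": "Text Generator",
}

def normalize(free_tags):
    """Collapse free tags onto shared taxonomy terms; idiosyncratic tags drop out."""
    return sorted({CONTROLLED[t.lower()] for t in free_tags
                   if t.lower() in CONTROLLED})

# "dreamlike" has no controlled equivalent, so only two terms survive.
print(normalize(["Hyperfiction", "generative", "dreamlike"]))
# → ['Hypertext', 'Text Generator']
```

The point of the sketch is the direction of the work: the controlled vocabulary is built out of, and answerable to, the freely generated tags, not imposed in advance of them.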
Metadata Standard and Name Authority
The various databases in the CELL project have varying levels of critical peer review and range from stub entries (with bibliographic data only) to curatorial, encyclopedic and critical accounts of individual literary works. Not all of the entries in all of the databases are keyed to works, and the projects differ in linguistic and programming languages. Even author names vary within a given database and from one database to another. The CELL project is at once an attempt to bring peer review to the field of electronic literature as a whole and also to promote, at the level of a common search procedure, a common vocabulary and attention to documentation. This latter, technical protocol functions as a kind of peer review and is key to the formation of electronic literature as a field that is open to critical scholarship - in its processes no less than its outcomes.
VI. The Structure of a Digital Literary Archive
In The Language of New Media (Cambridge, MA: MIT Press, 2001, page 37), Lev Manovich describes the “new media object” as an interface to a database. The CELL project applies this definition to literature. The SYNAPSE search engine lets our editors locate literary works and the conversations surrounding works across archives, databases, and sites, some of which are affiliated with CELL and others not. According to members Portela and Torres,* the meta-structure of a digital literary archive should perform three separate functions: textual representation, contextual simulation, and interpretive interaction.
* The text in this section derives from internal, unpublished reports that were shared with the CELL membership by Manuel Portela and Rui Torres. "O projecto «PO.EX'70-80» enquanto edição digital de um conjunto multimodal de documentos: problemas de representação textual" [The PO.EX'70-80 Project as a digital edition of a multimodal set of documents: problems of textual representation].
In addition to digital facsimiles, text transcriptions, and other forms of print/screen remediation, textual representation means that a literary database might incorporate data about the original documents (full bibliographic records that include a description of the medium and technique, for instance), but also information about the digital surrogates themselves (processes, norms, formats) and the protocols for preservation and archiving (to guarantee the integration and interoperability of this archive with other digital repositories). The latter, interoperability, is ensured by the SYNAPSE search engine that was built in 2015 for the Consortium on Electronic Literature. (Torres, Portela, and Sequeira)
Contextual simulation refers to the ability of the archive to recover the history of a work’s production (the genetic dimension of the archive) and the history of its reception (the social and professional dimension of the archive), including awareness of the archive as a new tool for producing context (establishing its own network of intertextual associations among items).
The interpretive interaction describes the literary archive’s ensemble of digital functionalities as a critical environment for generating interpretations, evaluations, and contentions. Document encoding (XML, XSLT, HTML5, etc.), metadata, database structure, and programming should result in the discovery of new patterns and relations through automatic processing, even as the significance of the patterns and relations is discussed and debated among scholars with various institutional and aesthetic commitments. Aggregated searches according to open criteria that produce a radial constellation of documents or the possibility of adding scholarly annotations (and critical commentary) are two examples of this level of critical reinterpretation.
The implementation of the interpretive function entails an understanding of the archive as a research space. Works that are referenced (and occasionally stored) in member databases will be valued not for themselves but in use: namely, the works circulate insofar as they are cited, annotated, and connected with other works in a living archive.
VII. A process of legitimation for publication in databases
For better and for worse, in a way that was justifiable in some cases, less justifiable in others, the barrier, the “cutoff,” the book’s stopping point, still protected a process of legitimation. A published book, however bad, remained a book evaluated by supposedly competent authorities.
Jacques Derrida, “Word Processor”, in Paper Machine, Stanford University Press, 2005: page 32
Like Derrida before us, our Consortium members have lived through (and, in our various database projects, documented) “the end of the book.”* Now that print is no longer a primary or predominant medium for conveying works of literary imagination, it is up to us, literary scholars, to decide which institutions and practices carry over into current media and whether they will legitimate themselves in the form of contract labor, the academic gift economy (what remains of it), or an autonomous literary practice formulated specifically for new media reading and writing practices. As hybrid forms of textual production emerge and find their location in databases, other forms of legitimation need to be devised. To avoid doing so is to cede questions of literary value and “competent authority” to page rank algorithms, field enclosures, and suchlike monopolistic and marketing practices.
* The discourse of the end, as invoked here, has more to do with the book’s purpose than its rate of production, and the same can be said for born-digital writing. Indeed, soon after the CELL project went live in May 2015, the ELO was sponsoring a conference on “the ends of electronic literature” in Bergen, Norway.
The emergence of “a process of legitimation” is of particular importance in humanities scholarship at a time when professional accountability might otherwise turn into matters of accounting. Already, explicitly in the European “Bologna” system and implicitly in many programs in the United States, value is assigned to scholarly publication on a point system that is consistent with computational capitalism, not scholarly recognition. Indeed, by bringing forward the process of critical evaluation in communications that can be accessed along with publications, the connected literary database can support the professional autonomy of literary scholarship.
Of course, in all such accounting systems the printed book scores highest. And even programs that maintain a professional distance from the accounters generally expect scholars to publish books if they are to gain entry into departments of literature and cultural studies. Yet it is also true that the number of tenurable positions in these departments is dwindling even as the number of unreviewed and largely unread books of literary scholarship grows. The disciplined reading of literary works through the ages would seem to be going the way of Classics and departments of philosophy. By contrast, fields such as Composition and Rhetoric, which do not place so much emphasis on book publication, thrive in the current media environment, though reference to imaginative literature as such is not essential to most Comp/Rhet practice.
What CELL can bring to the online legitimation process in all branches of literary study is a formalized system of peer-to-peer recognition that can ensure literature’s continued participation as part of the university’s core mission. Already, through participating CELL sites, publication credit has been awarded to undergraduates, graduate students, and advanced research scholars in numerous disciplines. The conversations and review mechanisms that result in authorship of database entries are no different from those that lead, in literary conferences and colloquia, to the production of literary works and scholarly essays. The only differences have to do with intended audience and circulation of various kinds of critical writing - a range that is much wider than what can be offered through the ideal of book publication and that discourages the idea of a multi-year apprenticeship as a precondition to meaningful participation in the literary field.
That many if not all of the established scholar-artists in the electronic literary field have never to this day published books, and several of these (such as Kate Armstrong, John Cayley, Caitlin Fisher, and Stuart Moulthrop) have attained positions in academia that do not necessarily separate scholarly and creative writing, attests to an ongoing transformation of legitimation practice. Quite apart from any moves toward “distance learning” (a largely informatic development of little relevance for education in literature and the arts), an embrace of the connected database, not the publisher or enclosed online storage sites, can rejuvenate the professions of literary scholarship and creative writing.
VIII. The relocation of the literary in and through databases
Though current enclosures and algorithms post-date Derrida’s remarks in “Word Processor,” they certainly can be counted among “the big political issues” identified in that essay. These are issues that authors and scholars (and scholar-creators) need to decide for ourselves, within our own literary practice - and not solely in the cultural critique of practices that we imagine are somehow separate, or separable, from the ways we ourselves generate works and put them into circulation. In media where words themselves are stored, ranked, and standardized for use in exchanges that are contractual, not consensual, we need to rethink concepts of authorship, authority, and how we speak and are spoken by language. When many or most of us sign up with Facebook and Google, for example, “[t]he underlying transactions and the relationships” are very different “from any that arise when you or I take down our dictionary to look up a word” (John Cayley).* For an author to place one’s work in a database, or to have it placed there without us - ourselves or trusted representatives - participating in the database construction, is certainly possible, but it can also be a way of ceding authority, and even authorship, to corporate writing practices not fully understood or appreciated.
* http://amodern.net/article/terms-of-reference-vectoralist-transgressions/
Am I blind, or maybe dumb?
To see TWO cents has made me numb.
Would you do work for this measly amount?
Would you take it seriously? Would it even count?
This is insulting in so many ways.
This bit of doggerel verse is cited by Rita Raley in “Outsourced Poetics,” her review of the collection Of the Subcontract, Or Principles of Poetic Right, nominally authored by Nick Thurston and published by a group called Information as Material. Poems like this one, Raley remarks, “were subcontracted to workers who were paid pennies for their creative labor through Amazon’s Mechanical Turk platform (AMT). Each poem … is the work of a ‘Turker’ completing a Human Intelligence Task (HIT)” on demand in a matter of minutes, even seconds, before moving on to the next task requiring human intervention rather than machine intelligence.*
* American Book Review, Focus on “Machine Writing,” January/February 2014: page 5.
(With thanks to Stuart Moulthrop, Dene Grigar, Sandy Baldwin and Gabriel Gaudette for feedback during the weeks when the CELL engine and site were actually under construction.)