
The Good Web: Workshop in Teaching Your Students How to Evaluate Web Resources

Presented by Matthew Jockers (Stanford Univ.) and Susan Schreibman (Royal Irish Acad.)

Abstract

This session at the 2008 Modern Language Association Annual Convention features a new format with which the Committee on the MLA Convention and the MLA Committee on Information Technology are experimenting. Unlike the typical three-paper or panel session, this format invites more participation from attendees and is perhaps more practically oriented than the sessions one might usually attend at the MLA convention.

"The Good Web" workshop will introduce participants to tools and methodologies that help instructors and students navigate the ever-increasing information space that is the World Wide Web.

Background

Before the advent of the World Wide Web, students learned in a relatively mediated information space: textbooks were chosen by teachers or professors, and books in the library were vetted by librarians. Once students reached the master’s level, they were instructed in the use of bibliographic sources (such as the MLA International Bibliography) that index books and articles published, by and large, in academic journals backed by university departments, academic or commercial publishers, or scholarly societies.

This relatively moderated and, one might even say, "controlled" information space is one that many of us who came to academic maturity before the advent of the Web would recognize. This is not to say that such an environment encouraged a homogeneity of perspective (far from it!), but it did mean that we took for granted the reliability of resources at our disposal. Indeed, before the Web, what one typically questioned was the theoretical or political approach of the information resource. The ways one questioned these perspectives and approaches could be fairly transparent, particularly as one developed expertise in a field.

But the Web, where it can be impossible to ascertain the publisher of a journal or an article, has changed all this. Indeed, most information resources on the Web are not "published" in the same way as in print: a piece may not receive an imprimatur from a publisher, a scholarly society, or an educational institution. We have not yet found ways to classify, to quantify, or to give academic credit to the myriad new scholarly resources being produced by and for the constituencies we serve. Blogs, wikis, thematic research collections, and databases all present problems of attribution and provenance, problems that do not easily mesh with the traditional view of a publication as a single-authored monograph. Resources on the Web appear and disappear with alarming rapidity, and even those resources with staying power and a degree of permanence (such as Wikipedia) are in a constant state of flux.

This rapidly changing information environment will only increase in complexity as technology advances, allowing for new forms of publication, for participation in information resources by an ever-widening circle of professionals and amateurs, and for scholarly outputs that might seem more natural to a PhD in computer science than to a scholar in the modern languages.

It is thus not surprising that we and our students frequently find it difficult, even somewhat vexing, to navigate this raging flood of resources. Our most trusted guide is no longer a professor or librarian but a commercial company started in 1998 by a couple of Stanford graduate students. In place of the librarian's experience and the professor's domain expertise, Google offers us a top-secret search algorithm that promises to filter good from bad by means of a type of "crowd-sourcing" that determines rank based on a complex metric of interconnected links and term frequencies.
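To make that black box a little less mysterious in the classroom, the sketch below, in Python, is a toy version of the link-analysis idea (PageRank) that the description above alludes to. It is an illustration only: Google's actual ranking is proprietary and draws on many more signals, and the pages, links, damping factor, and scores here are invented for the example.

# A toy illustration of link-based ranking (PageRank-style); not Google's
# actual algorithm, which is proprietary. The pages and links are invented.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # A page passes a share of its rank to every page it links to.
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # A page with no outgoing links spreads its rank evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

toy_web = {
    "course-syllabus.example.edu": ["wikipedia.org", "library.example.edu"],
    "library.example.edu": ["wikipedia.org"],
    "wikipedia.org": ["library.example.edu"],
    "student-blog.example.com": ["wikipedia.org"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda item: -item[1]):
    print(f"{score:.3f}  {page}")

The point to draw out for students is simply that a page's rank is conferred by the pages that link to it, not by any editorial judgment about its accuracy or scholarly value.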

If understanding what's inside Google's black box isn't daunting enough, consider that many important resources are not even indexed by Google: most of the rich, fee-based databases to which many academic libraries subscribe remain untouched and unavailable to Google's Web-crawling spiders. These databases, along with many others that are freely available, are known as "the Deep Web." Although it contains many trustworthy, well-edited, and scholarly resources, the Deep Web is frequently invisible to search engines. However valuable these resources may be, they are often difficult to access (quirky interfaces and searching paradigms) and may be discounted by professionals and students alike as hard to use. Yet it would be a mistake to think that everything available in the Deep Web should be favored over those resources that search engines do index.
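One concrete mechanism behind that invisibility is the robots-exclusion file (robots.txt) a site can publish to tell crawlers which paths to skip; login walls and form-driven query interfaces keep still more subscription content out of search indexes. The short Python sketch below uses the standard library's urllib.robotparser to illustrate the idea; the host name, paths, and rules are invented for the example.

from urllib import robotparser

# An invented robots.txt of the sort a library site might publish.
robots_txt = """\
User-agent: *
Disallow: /subscriber-databases/
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

for url in ("https://library.example.edu/catalog",
            "https://library.example.edu/subscriber-databases/mla-bibliography"):
    allowed = parser.can_fetch("Googlebot", url)
    print("indexable" if allowed else "crawler-blocked", url)

Even where a crawler is welcome, material that exists only behind a login or in response to a typed query has no stable URL for the crawler to fetch, which is one reason so much of the Deep Web stays dark to general-purpose search engines.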

Consider, for example, a 2006 article published in Nature comparing the rates of error in Wikipedia and the Encyclopaedia Britannica (Giles). Although many instructors ban their students from using Wikipedia, the study found that the rate of error in Wikipedia was only marginally higher than in the Encyclopaedia Britannica (2.92 mistakes per article in Britannica vs. 3.86 in Wikipedia).

While the error rate of Wikipedia may be comparable with that of similar print resources, it is far too often the only source students turn to. Why they so frequently do, particularly with the wealth of resources available to them, will be one of the topics discussed at the workshop. Other topics will include practical methods to navigate, discover, and evaluate online resources.

The workshop will be conducted through a combination of presentations and hands-on exercises. Please join us and contribute to the discussion.

Work Cited
Giles, Jim. "Wikipedia Rival Calls in the Experts." Nature 5 Oct. 2006: 493.

Resources

Useful Guides and Tutorials

Research on Web Credibility
