
Digital Humanities

3/8/23

By Matthew Kirschenbaum

What if, in the end, we are done in not by intercontinental ballistic missiles or climate change, not by microscopic pathogens or a mountain-size meteor, but by … text? Simple, plain, unadorned text, but in quantities so immense as to be all but unimaginable—a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting?

Our relationship to the written word is fundamentally changing. So-called generative artificial intelligence has gone mainstream through programs like ChatGPT, which use large language models, or LLMs, to statistically predict the next letter or word in a sequence, yielding sentences and paragraphs that mimic the content of whatever documents they are trained on. They have brought something like autocomplete to the entirety of the internet. For now, people are still typing the actual prompts for these programs and, likewise, the models are still (mostly) trained on human prose instead of their own machine-made opuses.
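The statistical trick is easy to see in miniature. What follows is a toy sketch, not any vendor's actual model: a bigram table built from a tiny corpus, sampled one word at a time. Real LLMs learn their probabilities with neural networks trained on vast swaths of text, but the autocomplete principle is the same.

```python
import random
from collections import defaultdict

# A toy corpus; real models train on a large slice of the public internet.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word follows which: a crude stand-in for a language model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate by repeatedly sampling a statistically plausible next word,
# the same autocomplete move LLMs make at vastly greater scale.
word, output = "the", ["the"]
for _ in range(8):
    if word not in follows:  # no observed continuation: stop
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g., "the dog sat on the mat and the"
```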

But circumstances could change—as evidenced by the release last week of an API for ChatGPT, which will allow the technology to be integrated directly into web applications such as social media and online shopping. It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.

Exactly that scenario already played out on a small scale when, last June, a tweaked version of GPT-J, an open-source model, was patched into the anonymous message board 4chan and posted 15,000 largely toxic messages in 24 hours. Say someone sets up a system for a program like ChatGPT to query itself repeatedly and automatically publish the output on websites or social media: an endlessly iterating stream of content that does little more than get in everyone’s way, but that also (inevitably) gets absorbed back into the training sets for models publishing their own new content on the internet. What if lots of people—whether motivated by advertising money, or political or ideological agendas, or just mischief-making—were to start doing that, with hundreds and then thousands and perhaps millions or billions of such posts every single day flooding the open internet, commingling with search results, spreading across social-media platforms, infiltrating Wikipedia entries, and, above all, providing fodder to be mined for future generations of machine-learning systems? Major publishers are already experimenting: The tech-news site CNET has published dozens of stories written with the assistance of AI in hopes of attracting traffic; more than half were at one point found to contain errors. We may quickly find ourselves facing a textpocalypse, where machine-written language becomes the norm and human-written prose the exception.
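The plumbing for such a setup is trivial. Here is a minimal sketch of the feedback loop, with hypothetical generate and publish functions standing in for a real text-generation API and a real posting endpoint; nothing below is any vendor's actual client code.

```python
import time

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to any text-generation API.
    return f"Auto-generated commentary on: {prompt[:48]}"

def publish(text: str) -> None:
    # Hypothetical stand-in for posting to a blog, forum, or social feed.
    print(text)

# Output feeds back in as the next prompt, with no human agency or intent
# once the script starts. Bounded to ten rounds here; in the scenario
# described above, nothing ever stops it.
text = "a seed prompt written by a person"
for _ in range(10):
    text = generate(text)  # machine prompting machine
    publish(text)          # synthetic text entering the open web...
    time.sleep(1)          # ...and, eventually, future training sets
```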

Like the prized pen strokes of a calligrapher, a human document online could become a rarity to be curated, protected, and preserved. Meanwhile, the algorithmic underpinnings of society will operate on a textual knowledge base that is more and more artificial, its origins in the ceaseless churn of the language models. Think of it as an ongoing planetary spam event, but unlike spam—for which we have more or less effective safeguards—there may prove to be no reliable way of flagging and filtering the next generation of machine-made text. “Don’t believe everything you read” may become “Don’t believe anything you read” when it’s online.

This is an ironic outcome for digital text, which has long been seen as an empowering format. In the 1980s, hackers and hobbyists extolled the virtues of the text file: an ASCII document that flitted easily back and forth across the frail modem connections that knitted together the dial-up bulletin-board scene. More recently, advocates of so-called minimal computing have endorsed plain text as a format with a low carbon footprint that is easily shareable regardless of platform constraints.

But plain text is also the easiest digital format to automate. People have been doing it in one form or another since the 1950s. Today the norms of the contemporary culture industry are well on their way to the automation and algorithmic optimization of written language. Content farms that churn out low-quality prose to attract adware employ these tools, but they still depend on legions of under- or unemployed creatives to string characters into proper words, words into legible sentences, sentences into coherent paragraphs. Once automating and scaling up that labor is possible, what incentive will there be to rein it in?
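What the cheapest version of that automation looks like is no secret. As an illustration (the template and slot values below are invented), a few lines of mail-merge logic, the kind that powers many content farms, can mint interchangeable headlines by the dozen:

```python
import itertools

# Invented slot values; the mail-merge logic behind much farmed content.
TEMPLATE = "Top {n} {adjective} ways to {verb} your {noun}"
slots = {
    "n": ["5", "7", "10"],
    "adjective": ["easy", "proven", "surprising"],
    "verb": ["optimize", "declutter", "monetize"],
    "noun": ["workflow", "inbox", "portfolio"],
}

# Four small word lists already yield 81 interchangeable "articles."
for combo in itertools.product(*slots.values()):
    print(TEMPLATE.format(**dict(zip(slots, combo))))
```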

William Safire, who was among the first to diagnose the rise of “content” as a unique internet category in the late 1990s, was also perhaps the first to point out that content need bear no relation to truth or accuracy in order to fulfill its basic function, which is simply to exist; or, as Kate Eichhorn has argued in a recent book about content, to circulate. That’s because the appetite for “content” is at least as much about creating new targets for advertising revenue as it is actual sustenance for human audiences. This is to say nothing of even darker agendas, such as the kind of information warfare we now see across the global geopolitical sphere. The AI researcher Gary Marcus has demonstrated the seeming ease with which language models are capable of generating a grotesquely warped narrative of January 6, 2021, which could be weaponized as disinformation on a massive scale.

There’s still another dimension here. Text is content, but it’s a special kind of content—meta-content, if you will. Beneath the surface of every webpage, you will find text—angle-bracketed instructions, or code—for how it should look and behave. Browsers and servers connect by exchanging text. Programming is done in plain text. Images and video and audio are all described—tagged—with text called metadata. The web is much more than text, but everything on the web is text at some fundamental level.
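The point is easy to demonstrate. The sketch below speaks raw HTTP/1.1 to a web server over a bare socket, no browser involved; it assumes network access to example.com and Python 3.8 or later. The request, the response headers, and the HTML that comes back are all plain text on the wire.

```python
import socket

# "Browsers and servers connect by exchanging text": a bare HTTP/1.1
# request and response, written and read as plain ASCII.
host = "example.com"
request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Status line, headers, and the HTML body are all just text on the wire.
print(response.decode("utf-8", errors="replace")[:300])
```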

For a long time, the basic paradigm has been what we have termed the “read-write web.” We not only consumed content but could also produce it, participating in the creation of the web through edits, comments, and uploads. We are now on the verge of something much more like a “write-write web”: the web writing and rewriting itself, and maybe even rewiring itself in the process. (ChatGPT and its kindred can write code as easily as they can write prose, after all.)

We face, in essence, a crisis of never-ending spam, a debilitating amalgamation of human and machine authorship. From Finn Brunton’s 2013 book, Spam: A Shadow History of the Internet, we learn about existing methods for spreading spurious content on the internet, such as “bifacing” websites, which feature some pages designed for human readers and others optimized for the bot crawlers that populate search engines; email messages composed as a pastiche of famous literary works harvested from online corpora such as Project Gutenberg, the better to sneak past filters (“litspam”); whole networks of blogs populated by autonomous content to drive links and traffic (“splogs”); and “algorithmic journalism,” where automated reporting (on topics such as sports scores, the stock-market ticker, and seismic tremors) is put out over the wires. Brunton also details the origins of the botnets that rose to infamy during the 2016 election cycle in the U.S. and the Brexit campaign in the U.K.

All of these phenomena, to say nothing of the garden-variety Viagra spam that used to be such a nuisance, are functions of text—more text than we can imagine or contemplate, only the merest slivers of it ever glimpsed by human eyeballs, but that clogs up servers, telecom cables, and data centers nonetheless: “120 billion messages a day surging in a gray tide of text around the world, trickling through the filters, as dull as smog,” as Brunton puts it.

We have often talked about the internet as a great flowering of human expression and creativity. Nothing less than a “world wide web” of buzzing connectivity. But there is a very strong argument that, probably as early as the mid-1990s, when corporate interests began establishing footholds, it was already on its way to becoming something very different. Not just commercialized in the usual sense—the very fabric of the network was transformed into an engine for minting capital. Spam, in all its motley and menacing variety, teaches us that the web has already been writing itself for some time. Now all of the necessary logics—commercial, technological, and otherwise—may finally be in place for an accelerated textpocalypse.

“An emergency need arose for someone to write 300 words of [allegedly] funny stuff for an issue of @outsidemagazine we’re closing. I bashed it out on the Chiclet keys of my laptop during the first half of the Super Bowl *while* drinking a beer,” Alex Heard, Outside’s editorial director, tweeted last month. “Surely this is my finest hour.”

The tweet is self-deprecating humor with a touch of humblebragging, entirely unremarkable and innocuous as Twitter goes. But, popping up in my feed as I was writing this very article, it gave me pause. Writing is often unglamorous. It is labor; it is a job that has to get done, sometimes even during the big game. Heard’s tweet captured the reality of an awful lot of writing right now, especially written content for the web: task-driven, completed to spec, under deadlines and external pressure.

That enormous mid-range of workaday writing—content—is where generative AI is already starting to take hold. The first indicator is its integration into word-processing software. ChatGPT will be tested in Office; it may also soon be in your doctor’s notes or your lawyer’s brief. It is also possibly a silent partner in something you’ve already read online today. Unbelievably, a major research university has acknowledged using ChatGPT to script a campus-wide email message in response to the mass shooting at Michigan State. Meanwhile, the editor of a long-running science-fiction journal released data that show a dramatic uptick in spammed submissions beginning late last year, coinciding with ChatGPT’s rollout. (Days later he was forced to close submissions altogether because of the deluge of automated content.) And Amazon has seen an influx of titles that claim ChatGPT “co-authorship” on its Kindle Direct platform, where the economies of scale mean even a handful of sales will make money.

Whether or not a fully automated textpocalypse comes to pass, the trends are only accelerating. From a piece of genre fiction to your doctor’s report, you may not always be able to presume human authorship behind whatever it is you are reading. Writing, but more specifically digital text—as a category of human expression—will become estranged from us.

The “Properties” window for the document in which I am working lists a total of 941 minutes of editing and some 60 revisions. That’s more than 15 hours. Whole paragraphs have been deleted, inserted, and deleted again—all of that before it even got to a copy editor or a fact-checker.

Am I worried that ChatGPT could have done that work better? No. But I am worried it may not matter. Swept up as training data for the next generation of generative AI, my words here won’t be able to help themselves: They, too, will be fossil fuel for the coming textpocalypse.

-----------

Matthew Kirschenbaum is a professor of English and digital studies at the University of Maryland. He is the author of Track Changes: A Literary History of Word Processing (Harvard University Press, 2016) and Bitstreams: The Future of Digital Literary Heritage (University of Pennsylvania Press, 2021).

 

12/1/22

By J.J. McCorvey and Char Adams

Black users have long been one of Twitter’s most engaged demographics, flocking to the platform to steer online culture and drive real-world social change. But a month after Elon Musk took over, some Black influencers are eyeing the exits just as he races to shore up the company’s business.

Several high-profile Black users announced they were leaving Twitter in recent weeks, as researchers tracked an uptick in hate speech, including use of the N-word, after Musk’s high-profile Oct. 27 takeover. The multibillionaire tech executive has tweeted that activity is up and hate speech down on the platform, which he said he hopes to make a destination for more users.

At the same time, he posted a video last week showing company T-shirts with the #StayWoke hashtag created by Twitter’s Black employee resource group following the deaths of Black men that catalyzed the Black Lives Matter movement, including the 2014 police killing of Michael Brown. His post contained laughing emojis, and someone can be heard snickering off-camera as the T-shirts are displayed.

Musk later posted and then deleted a tweet about the protests — fueled in part by activists on Twitter — that followed in Ferguson, Missouri, pointing to a subsequent Justice Department report and claiming the slogan “‘Hands up don’t shoot’ was made up. The whole thing was a fiction.”

He has also moved to restore many banned accounts despite condemnation from civil rights groups such as the NAACP, which accused him of allowing prominent users “to spew hate speech and violent conspiracies.” Civil rights leaders have also urged advertisers to withdraw over concerns about his approach to content moderation.

Twitter didn’t respond to requests for comment.

In a blog post it published Wednesday, the company said its “approach to experimentation” has changed but not any of its policies, though “enforcement will rely more heavily on de-amplification of violative content: freedom of speech, but not freedom of reach…We remain committed to providing a safe, inclusive, entertaining, and informative experience for everyone.”

Downloads of Twitter and activity on the platform have risen since Musk took control, according to two independent research firms. The data lends support to his claims that he is growing the service, though some social media experts say the findings may not shed much light on the company’s longer-term prospects. And while there is no hard data on how many Black users have either joined or left the platform over that period, some prominent influencers say they’re actively pursuing alternatives.

Jelani Cobb, a writer for The New Yorker and the dean of the Columbia Journalism School, said he has joined two decentralized microblogging apps — Mastodon and Post News — after leaving Twitter, telling his nearly 400,000 followers last week that he’d “seen enough.” The reinstatement of former President Donald Trump’s account was the “last straw,” he told NBC News.

Jelani Cobb at an event in New York. (Roy Rochlin / Getty Images for Unfinished Live)

“I can say confidently that I will not return to Twitter as long as Elon owns it,” he said. “Some people think that by staying on the site they’re being defiant, defying the trolls, the incels, the ill-will they’re encountering. But Elon Musk benefits from every single interaction people have on that platform. That was the reason I left. There are some battles you can only win by not fighting.”

Imani Gandy, a journalist and the co-host of the podcast “Boom! Lawyered” (@AngryBlackLady, 270,000 followers), recently tweeted that she isn’t enthused enough by Twitter alternatives to switch platforms.

The longtime Twitter user said in an interview that a combination of blocking, filters and “community-based accountability when it comes to anti-Blackness” make her less inclined to leave, for now. “Sure there are Nazis and jerks on Twitter, but they’re the same Nazis and jerks that have always been there, and I’m used to them,” she said.

Fanbase, another social media app, has seen usage jump 40% within the last two weeks, according to its founder, Isaac Hayes III. “We contribute so much to the culture and the actual economy of these platforms,” he said, “but do we own them?”

Investors in the service, which lets users monetize their followings by offering subscriptions, include Black celebrities such as the rapper Snoop Dogg and the singer and reality TV star Kandi Burruss. Other Fanbase investors — including the often polarizing media personality Charlamagne Tha God (2.15 million Twitter followers) and former CNN analyst Roland Martin (675,000 followers) — have touted it as a Twitter alternative.

For more than a decade, the community known as “Black Twitter” — an unofficial group of users self-organized around shared cultural experiences that convenes sometimes viral discussions of everything from social issues to pop culture — has played a key role in movements such as #SayHerName and #OscarsSoWhite.

In 2018, Black Americans accounted for an estimated 28% of Twitter users, roughly double the proportion of the U.S. Black population, according to media measurement company Nielsen. As of this spring, Black Americans were 5% more likely than the general population to have used Twitter in the last 30 days — second only to Asian American users, it said.

Some signs indicate a slowdown among Black Twitter users that predates Musk. In April, the rate of growth among Black Twitter users was already slower than that of any other ethnic group on the platform: 0.8% in 2021, down from 2.5% the previous year, according to estimates provided by Insider Intelligence eMarketer. (Growth among white users was 3.6%, down from 6%.)

A recent Reuters report cited internal Twitter research pondering a post-pandemic “absolute decline” of heavy tweeters — which the report described as comprising less than 10% of monthly users but 90% of global tweets and revenue. Twitter told Reuters that its “overall audience has continued to grow.”

Catherine Knight Steele, a communications professor at the University of Maryland and the author of “Digital Black Feminism,” said the departures of Black celebrities may not foreshadow a broader exodus, but she expects Black Twitter users to engage less on the platform over time.

If that bears out, she said, “without a robust Black community on Twitter, the only path forward for the site is to increasingly lose relevance as it becomes more inundated with more hatred and vitriol,” risking further panic among advertisers. The watchdog group Media Matters estimated last week that nearly half of Twitter’s top 100 advertisers had either announced or appeared to suspend their campaigns within Musk’s first month at the helm.

Any decline among highly engaged user segments would add pressure on Twitter’s business, analysts say, as 90% of the company’s revenue last year came from advertising.

“No platform wants to alienate any group of users, particularly an incredibly active group of users,” said Jasmine Enberg, principal analyst at Insider Intelligence eMarketer. “Twitter’s value proposition to advertisers has long been the quality and the engagement of its core user base … so the more that that addressable audience becomes diluted, both in terms of size and in terms of engagement, the less attractive the platform becomes.”

Steele said she has seen Black women in particular disengage amid threats and harassment over the last few years. And in recent weeks, high-profile Black women have been among the most vocal about leaving the platform.

TV powerhouse Shonda Rhimes tweeted to her 1.9 million followers in late October that she’s “Not hanging around for whatever Elon has planned. Bye.” Rhimes, who didn’t respond to a request for comment, has had outsize stature on the app — having helped popularize live-tweeting with her Thursday night “Shondaland” block on ABC. The practice has been offered as a proof point for advertisers wary of marrying Twitter and TV.

Other celebrities including the singer Toni Braxton (1.8 million Twitter followers) and Whoopi Goldberg (1.6 million followers) have also announced their departures, citing concerns about hate speech. The Oscar- and Emmy-winning co-host of “The View” said on the ABC talk show that she is “done with Twitter” for now. “I’m going to get out, and if it settles down and I feel more comfortable, maybe I’ll come back,” she said. Representatives for Braxton and Goldberg didn’t respond to requests for comment.

Steele said the history of Black communities’ withdrawal from other arenas, including offline, bodes ill for Twitter if it can’t turn the tide.

“It’s crippling to the economies of cities when Black folks leave, platforms when Black folks leave, entertainment sites when Black folks leave,” she said. “Twitter would suffer a similar fate.”


 

 

8/2/22

By Jessica Weiss ’05

The University of Maryland has received a nearly $300,000 grant from the National Science Foundation that will support efforts to improve the way handwritten documents from the premodern Islamicate world—primarily in Persian and Arabic—are turned into machine-readable text for use by academics or the public. 

Assistant Professor Matthew Thomas Miller and Mellon Postdoctoral Fellow Jonathan Parkes Allen, both of the Roshan Institute for Persian Studies, will work with researchers at the University of California San Diego (UCSD), led by computer scientist Taylor Berg-Kirkpatrick, on the innovative humanities-computer science collaboration. UCSD received its own $300,000 award.    

Over three years, the researchers will work in the domain of handwritten text recognition, a family of methods designed to automatically read a wide variety of human handwriting styles with high accuracy.
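The project's models themselves are not public code, but the conventional, supervised baseline that such research aims to surpass is easy to sketch. The example below uses the open-source Tesseract engine via pytesseract, assuming Tesseract is installed with its Arabic and Persian language packs; the filename is hypothetical.

```python
from PIL import Image
import pytesseract

# Conventional supervised OCR with the open-source Tesseract engine.
# Assumes Tesseract is installed with its Arabic ("ara") and Persian
# ("fas") language packs; "page.png" is a hypothetical scanned page.
page = Image.open("page.png")

arabic_text = pytesseract.image_to_string(page, lang="ara")
persian_text = pytesseract.image_to_string(page, lang="fas")

print(arabic_text[:200])
print(persian_text[:200])
```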

“This work has the potential to remove substantial roadblocks for digital study of the premodern Islamicate written tradition and would be really transformative for future studies of these manuscripts,” Miller said. “We are very grateful to the NSF for its support.” 

This latest research proposal builds on a number of ongoing efforts to develop open-source technology to expand digital access to manuscripts and books from the premodern Islamicate world in Arabic, Persian, Ottoman Turkish and Urdu; Miller currently leads an interdisciplinary team of researchers on a $1.75 million grant from the Mellon Foundation as well as a $300,000 grant from the National Endowment for the Humanities.

There are hundreds of thousands—perhaps even millions—of premodern Islamicate books and manuscripts spanning roughly 1,300 years, from the 7th to the 19th centuries, forming perhaps the largest archive of cultural production of the premodern world. Scanning and digitization efforts over the last decade have made images of Islamicate manuscripts in a large number of collections available to the public. However, these images remain mostly “locked” for digital search and manipulation because their text has not been transcribed into machine-readable form.

The task is made more difficult by the diversity and intricacy of many Arabic manuscripts, said Allen, who is a historian of early modern Ottoman religious and cultural history. They may be written alongside diagonal notes, annotations and corrections, in multiple colors and “hands.” 

Under the NSF grant, researchers will develop new techniques that remove the need for extensive manual—or human—labor, a method known as “unsupervised” transcription. Eventually, the tools under development will produce models that will be able to automatically transcribe large quantities of Persian and Arabic script in a multitude of different styles with substantially higher degrees of accuracy than is currently possible.

“The Arabic script tradition is so extensive and so broad,” Allen said. “People need to be able to read these manuscripts, search within them, and integrate them into their research.” 

Image: Staatsbibliothek zu Berlin, Ms. or. oct. 3759

7/5/22

By Jessica Weiss ’05

The University of Maryland has received a $1.75 million grant from the Mellon Foundation to continue development of open-source technology to expand digital access to manuscripts and books from the premodern Islamicate world in Arabic, Persian, Ottoman Turkish and Urdu.

Matthew Thomas Miller, assistant professor in the Roshan Institute for Persian Studies in the School of Languages, Literatures, and Cultures, leads the interdisciplinary team of researchers, including David Smith from Northeastern University, Sarah Bowen Savant from Aga Khan University (AKU) in London, Taylor Berg-Kirkpatrick from the University of California, San Diego, and Raffaele Viglianti from the Maryland Institute for Technology in the Humanities at Maryland. The Mellon Foundation has been funding the project, known as “OpenITI AOCP,” since 2019.

“Over the past four years we have made incredible progress on the creation of digital infrastructure for Islamicate studies, and that is thanks in large part to the Mellon Foundation,” Miller said. “We are honored that the foundation continues to support our efforts to expand access to and digitally preserve such a rich and important cultural tradition.”

There are currently hundreds of thousands—perhaps even millions—of premodern Islamicate books and manuscripts that are not able to be accessed digitally by academics or the public, Miller said.

Thus far, the project team—made up of computer science and humanities experts—has successfully improved the accuracy of open-source Persian and Arabic optical character recognition (OCR) software, which turns scanned images of printed documents into machine-readable text. Under the new grant, they will use this OCR software to produce 2,500 new digitized Persian and Arabic texts, as well as expand the OCR system’s capabilities into Ottoman Turkish and Urdu.
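Accuracy claims like these are conventionally measured by character error rate (CER): the edit distance between a system's output and a human ground-truth transcription, divided by the transcription's length. A self-contained sketch, with invented example strings:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    """Edit distance divided by the length of the ground-truth reference."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Invented example: ground-truth transcription vs. imperfect OCR output.
truth = "بسم الله الرحمن الرحيم"
ocr_output = "بسم الاه الرحمن الرحيم"
print(f"CER: {character_error_rate(truth, ocr_output):.3f}")
```

A CER of 0.0 is a perfect transcription; lower is better.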

They also aim to improve the accuracy of open-source handwritten text recognition (HTR) for Arabic-script manuscripts. A subfield of OCR technology, HTR tools are designed to read a diversity of human handwriting types with high levels of accuracy.

The team will also roll out a user-friendly redesign of its eScriptorium platform, which hosts the open-source tools. This latest Mellon grant will last three years. (Last year, Miller also received a grant from the National Endowment for the Humanities to support the project.)

Though he hopes the project’s next phase of development marks a major improvement for Arabic, Persian, Ottoman Turkish and Urdu texts, Miller said the goal ultimately is for the open-source tools to be used across a wide variety of languages.

“We really hope the technology will be reused by other users, especially those working in other under-resourced languages,” he said. “It’s designed to meet the needs of varied users.”

Image description: Persian ruba‘i (quatrain) calligraphy dating between circa 1610 and circa 1620. Gift in honor of Madeline Neves Clapp; Gift of Mrs. Henry White Cannon by exchange; Bequest of Louise T. Cooper; Leonard C. Hanna Jr. Fund; From the Catherine and Ralph Benkaim Collection.

 

2/19/22

Congratulations to Assistant Professor Catherine Knight Steele for receiving the 2022 Helen Award for Emerging Feminist Scholarship, given by the Feminist Scholarship Division of the International Communication Association (ICA). Steele is the author of the recent book Digital Black Feminism, published by NYU Press. Steele will receive the Helen Award at the annual ICA convention in May 2022, scheduled for Paris, France.

10/14/21

By Rosie Grant

The key to unlocking the secrets of a deceased poet’s writing process might not be found in their tattered spiral notebook or on the back of a restaurant napkin—not if they composed their works during the digital age. In that case, it might be buried in an obsolete Apple HyperCard file.

No one using an up-to-date Mac could hope to access the data, but if you’re Matthew Kirschenbaum, you simply dust off your decades-old Macintosh SE and let the literary sleuthing begin.

Kirschenbaum, a professor of English and digital studies at the University of Maryland, is a Sherlock Holmes in the burgeoning field that encompasses literature, the rise of digital media and how texts are written and revised. His new book, “Bitstreams: The Future of Digital Literary Heritage,” explores how the process of making literature has evolved, as well as the common threads that connect digital works to thousands of years of human creativity.

Kirschenbaum, an affiliated faculty member with the College of Information Studies at Maryland and a member of the teaching faculty at the University of Virginia’s Rare Book School, is also the co-founder and co-director of UMD’s BookLab, a makerspace, studio, library and press devoted to the codex book.

On Oct. 20, the English department will host a virtual book launch of “Bitstreams” featuring a discussion between Kirschenbaum and Professor of English and Director for the African American Digital Humanities initiative Marisa Parham. Ahead of the launch, we spoke with Kirschenbaum about how digital books are made and can be preserved.

Let’s start with the book’s title. What is a bitstream, and what does it have to do with literature?
That’s a word that originates in computing. A bitstream is a sequence or string of ones and zeroes—bits—that make up a digital object, like a file. Nowadays literary activity is also part of the bitstream; so, a writer writes a novel on their laptop, they email it to their editor, they and their editor go back and forth over email with track changes and then the book moves into production. All of this composition, revision, editing and layout is digital. It’s only at the very end of this process that the book stops being a bitstream and becomes a physical thing when it's finally printed.
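To make the term concrete, here is a minimal Python illustration: a single word (“Beloved,” anticipating the Morrison example below) viewed as the ones and zeroes a file system actually stores.

```python
# The word "Beloved", viewed as the bitstream a file system actually stores.
text = "Beloved"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)
# 01000010 01100101 01101100 01101111 01110110 01100101 01100100
```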

How has this evolution impacted literary studies and literary research?
We’re all used to the idea of going to a library, an archive, seeing books and manuscripts and seeing where the author crossed out one word and wrote in a different word instead. We need to understand how to do that now in a world overtaken by bitstreams. How do we ensure that when the author sits down to write a novel on their laptop, those files on their hard drive are saved and eventually wind up at a place like the Folger Shakespeare Library where they can be cared for by archivists and curators, where they can be accessible 50 or 100 years later when someone like me comes along and wants to do literary research?

You tell the story of your work unearthing the poems of William Dickey as a sort of case study of how to do that detective work. Tell us about that process.
William Dickey, who was a recipient of the Yale Younger Poets Award, died at the height of the AIDS epidemic in San Francisco in the early 1990s. Before his death he was experimenting with digital poetry. As part of the research that went into the book, I was able to recover and publish online for the first time 14 of his digital poems that had never been seen before. I recovered them from the collections of the Maryland Institute for Technology in the Humanities here at UMD in the literary papers of a writer named Deena Larsen, who was a friend, collaborator and confidante of Dickey’s, and therefore had copies of his poems on her diskettes. His poems were written in Apple’s HyperCard software, which ceased distribution back in 2004. Fortunately, I have a Macintosh SE of my own that was able to run the original diskettes and view the poems and then migrate them to more modern media.

In “Bitstreams” you also describe accessing Toni Morrison’s floppy discs. What was that like? 
Yes, I traveled to Princeton, where Toni Morrison taught for many years and where her papers are housed. Among her papers are four floppy discs from the 1980s, when she was writing the novel “Beloved.” Among other things, I found a file named BELOVED3.DOC, which showed a variation on the book’s famous last lines not otherwise represented in the other draft materials. She wrestled with those final lines of the novel for a long time, until the very last minute. It felt very meaningful to me to see into Morrison's creative process like that and look over her shoulder, if you will.

How does your work at BookLab relate to the book?
I’m a professor interested in the cutting edge but I also enjoy old books and metal type and getting my hands inky. Because to me it’s all the same thing. Whether it’s computer code or metal type, it's still a process. You’re still doing something with your hands, you're still making something. I want to understand how books are being made and manufactured in 2021 and be able to apply the same sort of rigor we are used to applying to understanding the makings of physical things to digital objects.

----------

The Department of English will host a virtual book launch for “Bitstreams” on Wednesday, Oct. 20, from noon to 1 p.m.

4/16/21

By Jessica Weiss ’05 

A $4.8 million grant from The Andrew W. Mellon Foundation will fund a new lab at the University of Maryland to facilitate research and scholarship at the intersection of race and technology, and to develop a pipeline program to introduce undergraduates and those in the local community to the field of Black digital studies.

The Black Communication and Technology (BCaT) Lab is part of a new multi-institutional project led in part by UMD Assistant Professor of Communication Catherine Knight Steele that seeks to work toward an “equitable digital future” through engaging in research on topics like racial inequality, disability justice and Black digital spaces.

The Mellon Foundation grant to the University of Michigan, which is leading the project, will create the Digital Inquiry, Speculation, Collaboration, & Optimism (DISCO) network, a collective of six scholars at institutions across the country.

Steele’s focus, Black digital studies, encompasses the ways that technology—both the possibilities it offers and the impediments it can create—impacts African Americans.

“In this political climate and our post-COVID world, it’s exactly the time for a project like this,” said Steele, who is collaborating with Lisa Nakamura and Remi Yergeau of the University of Michigan, André Brock of the Georgia Institute of Technology, Rayvon Fouché of Purdue University and Stephanie Dinkins of SUNY Stony Brook University on the grant. 

As with the BCaT Lab, partners will leverage their areas of expertise to establish new research hubs, courses and more at their institutions, and will share best practices through monthly meetings. 

At UMD, the BCaT Lab will develop a program model to introduce undergraduates to digital research through workshops and coursework, help students carry out graduate research and create a mentoring network for students and faculty to navigate Black digital studies, focusing on collaboration across generations of researchers. 

“In addition to teaching how to do research in race and technology, the BCaT Lab will explore how to create an effective pipeline of people of color working in the field,” Steele said. “How do we create and sustain a network of scholars who have adequate support, quality instruction and access to mentoring and advising, to move the field in a productive new direction?” 

Eventually, Steele hopes to introduce students in Prince George’s County high schools to the field of Black digital studies and encourage future scholarship.

Steele was the founding director of the Andrew W. Mellon funded African American Digital Humanities (AADHUM) initiative at Maryland, which brings together the fields of African American studies and digital humanities in order to expand upon both fields, making the digital humanities more inclusive of African American history and culture while enriching African American studies research with new methods, archives and tools. 

Her forthcoming book, “Digital Black Feminism,” examines the relationship between Black women and technology over the centuries in the U.S. 

The BCaT Lab will be up and running in Fall 2021, working with undergraduate and graduate students and hosting events, Steele said. A postdoctoral fellow will begin in the lab next year.  

Wednesday, December 09, 2020 - 10:00 AM

Join us for this NEH Digital Humanities Advancement Grant info session, where we will discuss the new funding guidelines and share tips for proposal development. We’ll touch on the UMD routing process (via Kuali Research) and the Grants.gov submission process, and address any pre-submitted questions.
