On the brink of artificial intelligence’s arrival, I pose a question. Will machines read? Will they move past programming, either that which is given to them or that which they construct at their own behest, into the same choices we make as readers? Will they form areas of interest that drive their decision-making as human beings do? Will artificial intelligence not only replicate but also reflect the nuanced workings of the human brain? How about the human heart?

I ask these questions before making a stark confession: I have text mined for the purposes of literary research at a personal level. There, it’s done. During a certain hectic three-month period, a time of shifting theses and the bottoms falling out of ideas, I used my Kindle to search for passages pertaining to my thesis, one involving Virginia Woolf’s Between the Acts and a frothing Afghan hound. The terms ‘dog,’ ‘canine,’ even the vague ‘animal’ were systematically pulled from the text and lined up on a separate screen (if you’re thinking ‘big deal,’ I’m thinking ‘run a check on your bookgeek street cred’). Why do I feel as if I cheated, or at best robbed myself of what Gaston Bachelard labels the ‘true reading’ of any novel: the second, third, and fourth passes, when the work’s genius and beauty leap out, when you move past the narrative into the connections and symbols you can’t believe you missed the first time through? Did I cheat? And if I did cheat, who fell prey to this specific, brand-new breed of corruption? More importantly, in the spirit of Michael Witmore’s addressability, what can we learn from the difference between the human reader and the machine pulling the same passages from a text? And where does the guilt come from, and what does it mean for the future of literary research-by-machine?