The Digging into Data Challenge, that is.
It has been an exciting few months for the Digging into Data Challenge crew here at ODH. Back in November, we were pleased to see a major piece in the New York Times that highlighted Digging into Data. Then in December, everyone started talking about "Google N-Grams" after the journal Science published a major article by two Harvard researchers, "Quantitative Analysis of Culture Using Millions of Digitized Books," which explored themes that fit quite well with the Digging program. Just last month, IBM unveiled "Watson," a computer that defeated two human champions on Jeopardy!. Watson is a terrific example of what the Digging into Data Challenge is all about: discovering new computational techniques for analyzing large corpora of books, newspapers, or other materials in order to advance work in the humanities or social sciences.
So, needless to say, amid all this interest, we are very pleased to announce the return of the Digging into Data Challenge. This second, larger round is sponsored by eight international research funders, representing Canada, the Netherlands, the United Kingdom, and the United States.
What is the "challenge" we speak of? The idea behind the Digging into Data Challenge is to address how "big data" changes the research landscape for the humanities and social sciences. Now that we have massive databases of materials used by scholars in the humanities and social sciences -- ranging from digitized books, newspapers, and music to transactional data like web searches, sensor data, or cell phone records -- what new, computationally based research methods might we apply? As the world becomes increasingly digital, new techniques will be needed to search, analyze, and understand these everyday materials. Digging into Data challenges the research community to help create the new research infrastructure for 21st-century scholarship.
Let's get digging!