The goal of this blog is to build upon some specific graduate work I began during my time at the CUNY Graduate Center, where I received a Digital Humanities MALS degree. Concurrently with the composition of my thesis (which was essentially an attempt to find a more pedagogically sound approach to introducing technical "tools" to "humanists" through praxis), I did some coursework that I often come back to, wherein I used comic book value data as a data visualization topic.
I was in a Big Data course led by Lev Manovich where we spent the majority of class time learning the R language. I submitted a paper called "VISUALIZING COMICS" in which I displayed comic book value data in visually novel ways, integrating actual comic book cover facsimiles into the data output. I used a specific selection of R libraries, many of them already demonstrated by Manovich with a dataset of Time magazine covers, but several discovered on my own… Especially memorable was integrating what I learned from a PDF copy of the ggplot2 book I came upon during my studies.
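For readers curious what "covers as data points" looks like in practice, here is a minimal sketch of the idea in Python with matplotlib (not the original R/ggplot2 course code, which I no longer have at hand). The years, values, and randomly generated "cover" images below are all invented placeholders purely for illustration:

```python
# Hypothetical sketch: plot items as image thumbnails positioned at
# (year, value) coordinates, in the spirit of the ggplot2 cover plots
# described above. All data here is made up for demonstration.
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

# Placeholder "covers": random RGB arrays standing in for scanned images.
rng = np.random.default_rng(0)
covers = [rng.random((40, 28, 3)) for _ in range(5)]
years = [1992, 1997, 2002, 2007, 2013]      # invented sample years
values = [120, 35, 60, 180, 240]            # invented guide values

fig, ax = plt.subplots(figsize=(6, 4))
for x, y, img in zip(years, values, covers):
    # AnnotationBbox anchors each thumbnail at its data coordinates.
    ab = AnnotationBbox(OffsetImage(img, zoom=1.0), (x, y), frameon=False)
    ax.add_artist(ab)
ax.set_xlim(1988, 2017)
ax.set_ylim(0, 300)
ax.set_xlabel("year")
ax.set_ylabel("guide value (USD)")
fig.savefig("cover_plot.png")
```

With real scans, you would swap the random arrays for `matplotlib.image.imread(...)` calls; the structure of the plot stays the same.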
I created the project's dataset by accruing 1992 comic value data (right before the legendary comic book BOOM and BUST of the '90s!) and then-current data (2013), but focused only on the best-selling comics of a specific set of years. The idea was that I could try to visualize SOMETHING based on culturally relevant artifact value. It was pretty half-baked, primarily because it was a single-semester course deliverable. In fact, I never finished the gruelling amount of data input necessary to include the '92 data in any meaningful way. Also, my dataset was relatively small, and Manovich was more interested in digital Mondrian simulacra than in having what were mostly Humanities PhDs bite down on data science 101.
That said, I feel like the conversations in art technique history and light cultural studies helped the "medicine go down," so to speak, providing a very solid primer for working with R. And while I've done very little with R since graduate school, lately I've been messing with some deep learning scripts written in Python (NumPy, PyTorch); it has me thinking back to my work on algorithmically manipulating comic book values and on compiling datasets, a process that now, more clearly than ever, seems a valuable endeavour.
And as the GANs push my graphics card to its limit, I'm starting to realize that, as we move toward this era of general artificial intelligence, the historical record of datasets may become a very valuable piece of digital real estate. Perhaps it already is.
My hope is to use this space for amalgamating two specific trains of thought: cultural inquiry into the items on display (close readings, narrative explorations, sociological critiques, etc.), as well as the latest in AI-driven statistical/deep-learning analysis of the secondary market itself.