This week we’ve caught ‘Archive Fever’: Derrida’s diagnosis of the procedures by which evolving media technologies shape how we analyse, communicate and preserve knowledge, experience and memory. More specifically, Web 2.0 and user-interface visualisation (with their limitations in metadata crunching and code optimisation) shape our experience of, access to and understanding of information, which is in turn mediated by socio-cultural constructions and political influences.
This excellent article on Reimagining the Archive lists some of the more pressing concerns in the Transition, Navigation and Curation of digital archives, and sums up this publishing transition succinctly: “The unquestioned trust and task of defining the authenticity, provenance, and movement of archival objects and collections – once the sole province and prerogative of legacy institutions and expert curators – has become more open, participatory, and fluid. In the face of “remix culture,” “archive fever,” and emergent “long tail” phenomena, institutions and rights holders are struggling to come to terms with these new, shared missions and responsibilities.”
Archives are simply sets of stored information, and they reflect an ancient human desire to collect and order the world. Yet the people and organisations who constitute and create archives hold power: they decide what data to include or exclude, set points and levels of accessibility, link between data based on shared properties (of either the data or the user accessing it), organise and encode data for storage and transmission, and mediate rights management and profitability.
Peer-to-peer networks are the most critical concern for institutions and organisations seeking to profit from holding copyrights and building extensive archive/distribution business models. The power these large ‘creative’ media companies leverage over digital rights management is enormous, and the growth of P2P networks is in a sense the public’s reaction against the analogue media industries’ lack of speed and insight in capitalising on open and efficient content delivery from online archives. Of course the issues here are infinitely complex (there is not enough space here to delve further), and this article on Peer-to-Peer Services: Transgressing the Archive is an in-depth look at what will be the future battleground for content archives and copyright.
Now that some of the important elements have been boiled down, let’s look at the readings. Firstly, Matthew Ogle’s article delves into how social-networking sites (Facebook, Twitter, etc.) with ‘real-time’ updates create a vast backlog of personal interactions and experiences, yet our ability to tap into these memory events is limited by rudimentary search functionality; Ogle provides a good list of experiments (an archive of archives) and requirements for realising some sort of collective memory archive. The Ars Technica article is interesting for this quote: “If the web is a giant public archive, then the privately owned and secretive Google is its de facto interface.” It echoes my Week 3 post on Google’s book-digitising project, and how its archive construction and possible monopoly over public ‘orphan’ books amount to questionable copyright practice; yet Google has become the powerhouse digital media distribution company and aims to match that in its archiving of print media and the internet. Though they’ve got nothing on Microsoft.
Archiving the Internet…