Print & Digital Censorship: A Photoessay

When publishing changes, so does society. Investigate and compare the impact of two publication technologies, one pre-1900 (the printing press) and one post-1962 (the internet), on a specific aspect of society (censorship).

Flickr link

Censorship is the suppression of public communication that may be considered objectionable, harmful, sensitive or inconvenient to the general body of people, as determined by a government, media outlet or other controlling body. Types of censorship include moral, military, political, religious and corporate.

Posted in Uncategorized | Leave a comment

Week 8: Content, Expression and Visualisation

Looking back on the previous posts, it seems I’ve done a significant amount of working through the theories for a better understanding, but this week I want to bring in a stylistic range of visualisations I’ve found that illustrate different aspects of the transition from print to digital and the key media technologies involved in the public sphere. After all, pictures are more info-taining than large blocks of words, so here come the visuals…

To kick off we have this Venn diagram, which perfectly illustrates the intersecting design and content considerations for any given visualisation project. (Click for high resolution)


The next style is this text-based visualisation comparing a number of statistics for books vs. ebooks. It’s easy to grasp, packed full of facts and lays out a side-by-side comparison, but it’s cluttered and requires a fair amount of reading.


While I’ve commented on the role of marketing/advertising in previous posts, this handy interactive visualisation with graph/tables makes visible the shady Marketing Trackers on a number of popular websites concerned with digital self-publishing.


To the world of micro-blogging now: this video, a mind-map-style spatial visualisation of the most Influential People on Twitter, looks incredible and contains information on name, category, influence and activity, including each person’s first tweet.


The interactive aggregator website Newsmap is a real-time visualisation of Google News that fills the available screen space, where size denotes the importance of specific headlines, colour distinguishes news categories, and brightness the novelty of the story.
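Newsmap’s core mapping can be illustrated with a small sketch. This is not Newsmap’s actual code, just a hedged toy showing the treemap idea it relies on: each headline gets a rectangle whose area is proportional to its importance, computed here with a naive slice-and-dice layout (real treemaps typically use squarified layouts for nicer aspect ratios). The headlines and weights are invented.

```python
def treemap(items, x, y, w, h, horizontal=True):
    """Partition the rectangle (x, y, w, h) among items by weight.

    items: list of (name, weight) pairs; returns name -> (x, y, w, h).
    """
    total = sum(weight for _, weight in items)
    rects = {}
    offset = 0.0
    for name, weight in items:
        share = weight / total
        if horizontal:                      # slice along the width
            rects[name] = (x + offset, y, w * share, h)
            offset += w * share
        else:                               # slice along the height
            rects[name] = (x, y + offset, w, h * share)
            offset += h * share
    return rects

headlines = [("Election result", 50), ("Sports final", 30), ("Weather", 20)]
layout = treemap(headlines, 0, 0, 100, 60)
# Areas come out proportional to importance: 50%, 30%, 20% of the screen.
```

Colour (category) and brightness (novelty) would then be extra visual channels painted onto each rectangle.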


And finally, 3D visualisations, particularly globe constructs, are becoming more popular and powerful as technology and software develop, bridging the gap between data and art. Check out this video from digital media artist Aaron Koblin about different visualisations.

There are countless more examples, design forms and data sets to look at here; Infographics Like These combine visualisations and statistics quite well, while this is the best Collection of Visualisations I could find anywhere on the internet.

Information is Beautiful…


Week 7: Wikileaks and Media Transparency

For this week we’ve been given the task of arguing for or against Wikileaks as an agent for media transparency and its methodology of information dissemination. Since I’ve been assigned to the negative, I’ll use this post to flesh out the main arguments against the Wikileaks organisation, so here comes a lot of text:

Perhaps the main criticism is the way in which information is filtered by Wikileaks in partnership with media outlets. Even Reporters Without Borders, one of Wikileaks’ strong supporters, criticised the organisation in an Open Letter (in August 2010) because “indiscriminately publishing 92,000 classified reports reflects a real problem of methodology and, therefore, of credibility. Journalistic work involves the selection of information.” It can be argued that Wikileaks was in its infancy at this stage, yet this ‘dump’ was a major problem. As the New Yorker Reported, soldiers’ and informants’ names and details were not redacted, jeopardising personal safety and national security operations, and the dump also published technical details of an army roadside-bomb countermeasure and the only known blueprint for a nuclear fission device, among other details, calling into question what information is relevant for the public interest and attention.

The diplomatic cables released by Wikileaks have shown an evolution of methodology over time, with cables leaked slowly, yet filtering through thousands of documents and presenting only slices of information to the media and public is extremely problematic. While some cables (such as Human Rights Abuse in India or the Australian Internet Blacklist) reach the world media, they remain complex issues that have not been adequately followed up, partly because they have a localised effect pertaining to a particular state apparatus. Other cables sensationalising rumour and opinion within international relations have made more of a media splash because the information is of a global scope, stimulating the public’s imagination as to the previously unknown world of international relations. Without a guiding methodology or criteria for the responsibility of media institutions, information can easily be directed, misrepresented, or attract an inordinate amount of unnecessary attention.

The reliability and context of the sources of the leaks is also of importance. To what extent do we know or understand the sources of the various cables? What position are these sources in to comment on these situations, where do they get their information from, and how reliable is that testimony: is it objective fact or personal opinion? There is a lack of this kind of transparency. This is not necessarily a criticism that can be levelled at Wikileaks alone, but it calls into question the extent to which a public must know of delicate international dealings that are beyond a collective public understanding, and how much attention the media should give to the different cables released; journalists should fact-check and treat them with objectivity rather than rush to meet a deadline in the fast-paced ratings/readership race.

Yet it is not only the filtering of sensitive information that is at stake; it is also how these publications are followed up and injustices corrected through action. A comparison can be made between Wikileaks’ publication of the Iraq Apache helicopter Attack video and The New Yorker’s publication of the Abu Ghraib Abuse photographs. Both disclosures involved extremely graphic and disturbing images and unreleased or classified government records, and both generated a public sensation. However, while the Abu Ghraib photos prompted lawsuits, congressional hearings, prison sentences and declassification initiatives, the WikiLeaks video produced no investigation or any real public backlash. While Wikileaks as an organisation bears no responsibility to follow up what it releases, perhaps the sensationalism surrounding Wikileaks inhibits the state apparatus’ response due to a perceived illegitimacy of the source information and of the organisation itself as a medium of publication.

The sensationalism surrounding Assange as the figurehead of the Wikileaks organisation is also problematic. There are Many Whistle-blowing Websites and other channels for sensitive information, but when the publicity generated by a single organisation and its figurehead outshines the important information being published, the public attention that should be directed towards identifying and correcting serious issues raised in the cables becomes secondary. The Wikileaks disclosures have been praised by many who believe they will allow the public to hold the government more accountable and thus improve foreign policy; however, leaks like this simply make those in power retreat further into the shadows to defend themselves and their positions against a clearly defined entity and its spokesman, while the international public, over-saturated with sensationalist world spectacles, is less likely to mobilise support for transparency and corrective action than to polarise around the simple debate of Wikileaks/Assange vs. Government/State practices.

While these are legitimate concerns, I believe the pros far outweigh the cons in this debate, though I think Assange summarises his methodology best in the above video: Whistleblower Bias: Is WikiLeaks Losing Its Objectivity? (Berkeley School of Journalism)

We just got Wiki’d…


Week 6: Attention and The Commons

Now we’re delving into extremely complex territory in this week’s readers, synthesising the neuro-biological processes that underlie our attention when we Seek, use our Computers and consume information, which in turn drives our appetite for more. It feeds into the issue of the extent to which modern technology shapes our neurological processes, as Richtel notes: “how heavy technology may fundamentally alter the frontal lobe during childhood, how addictive behavior can lead to poor decision-making and how the brain is rewired when it is constantly inundated with new information…” but also how technology augments and simplifies the seeking system with various informational filtering tools.

Dr. Adam Gazzaley, presenting at TED about the Brain: Memory and Multitasking, summarises the complexities of neuroscience very well and provides a few practical examples of how dividing attention affects us cognitively:

Since advanced capitalist societies began shifting from an industrial commodity-production economy to an information industry based on networks of exchange, attention is slowly becoming the currency that defines value in this epochal shift. TV ratings, paper readership and internet hits have long defined what generates revenue (along with advertising), yet it is the public and private attention invested in these media that drives the industry. These media industries often create attention-controlling apparatuses in the form of ‘Spectacles’, advertising and scheduling, exerting a kind of Psychopower over individuals and society. The readers from the Paying Attention conference go into more detail than I could understand, so perhaps it’s best to let their promotional video summarise the many facets of attention:

So if attention and information value are reappraised and remixed by lead users in the internet age, so too must social institutions such as law, and in particular copyright and the commons, evolve. What better system, then, to champion the ideals of a commonwealth than peer-to-peer networks and media advocates? We have a Call to Arms against conglomerate corporations’ stranglehold on creative copyrights, along with this excellent Reclamation Paper flying the flag for the Commons, through to the stronghold located at the P2P Foundation. While the commodity industrial complex is perhaps on the wane, as reflected by economic instability and the GFC, and outdated laws need to be revised as technology develops social interaction, these articles did make me feel we can make a difference in our lifetimes. Yet there are still many seemingly insurmountable unanswered questions that need to be debated and developed before real institutional and lasting change is secured. Let’s end here with these short videos from two champions of the new P2P Commons cause: Michel Bauwens on The Commons & Lawrence Lessig on Creative Copyright

Hoisting the Flag…


Week 5: Archive Fever

This week we’ve caught ‘Archive Fever’, with Derrida’s diagnosis of the procedures by which the evolution of new media technologies shapes how we analyse, communicate and preserve knowledge/experience/memory, and moreover how Web 2.0 and user-interface visualisation (with limitations in metadata crunching and code optimisation) shape our experience of, access to and understanding of information, all of which are in turn mediated by socio-cultural constructions and political influences.

This excellent article on Reimagining the Archive lists some of the more pressing concerns in the Transition, Navigation and Curation of digital archives and sums up this publishing transition succinctly: “The unquestioned trust and task of defining the authenticity, provenance, and movement of archival objects and collections – once the sole province and prerogative of legacy institutions and expert curators – has become more open, participatory, and fluid. In the face of “remix culture,” “archive fever,” and emergent “long tail” phenomena, institutions and rights holders are struggling to come to terms with these new, shared missions and responsibilities.”

Archives are simply sets of stored information, reflecting an ancient desire to collect and order the world, yet the people and organisations who constitute and create archives hold power: they decide what data to include or exclude, set points and levels of accessibility, link between data based on similar properties (either of the data or of the user accessing it), organise and encode data for storage and transmission, and mediate rights management while remaining profitable.
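The gatekeeping point above can be made concrete with a toy sketch (entirely illustrative; the record names and access levels are invented): whoever constitutes the archive decides both what is included and who may see it.

```python
# A hedged sketch: the archive's builder encodes the gatekeeping
# decisions -- inclusion and access levels -- into the data itself.
records = [
    {"title": "state cable",   "access": "restricted"},
    {"title": "press release", "access": "public"},
    {"title": "photo archive", "access": "public"},
]

def query(archive, user_level):
    """Return only the titles this user's access level allows them to see."""
    visible = {"public"} if user_level == "public" else {"public", "restricted"}
    return [r["title"] for r in archive if r["access"] in visible]

query(records, "public")  # the public never sees the restricted cable
```

Everything left out of `records` in the first place is simply invisible to every query, which is the subtler form of archival power.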

Peer-to-peer networks are the most critical concern for institutions and organisations seeking to profit from holding copyrights and building extensive archive/distribution business models. The power leveraged by these large ‘creative’ media companies over digital rights management is enormous, and the growing P2P network is in a sense the public’s reaction against the analogue media industries’ lack of speed and insight in capitalising on open and efficient content delivery from online archives. Of course the issues here are infinitely complex (there is not enough space here to delve further), and this article on Peer-to-Peer Services: Transgressing the Archive is an in-depth look into what will be the future battleground for content archives and copyright.

Now that some of the important elements have been boiled down, let’s look at the readers. Firstly, Matthew Ogle’s Article delves into how social-networking sites (Facebook, Twitter, etc.) with ‘real-time’ updates create a vast backlog of personal interactions and experiences, yet tapping into these memory events is limited by rudimentary search functionality; Ogle provides a good list of experiments (an archive of archives) and requirements for us to realise some sort of collective memory archive. The Ars Technica Article is interesting for this quote: “If the web is a giant public archive, then the privately owned and secretive Google is its de facto interface,” which echoes my Week 3 post on Google’s book-digitising project and how its archive construction and possible monopoly over public ‘orphan’ books is questionable copyright practice. Yet Google has become the powerhouse Digital Media Distribution Company and aims to match that in its archiving of print media and the internet. Though they’ve got nothing on Microsoft.
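Ogle’s point about rudimentary search can be grounded with a minimal sketch of the standard data structure behind keyword search, an inverted index. The posts and queries here are invented examples, not anything from Ogle’s article; a real archive would add ranking, stemming and date filters on top.

```python
# A toy inverted index: map each word to the set of posts containing it,
# so a multi-word query becomes a set intersection.
from collections import defaultdict

posts = {
    1: "lunch with friends at the harbour",
    2: "reading derrida on archive fever",
    3: "archive of my first tweet",
}

index = defaultdict(set)
for post_id, text in posts.items():
    for word in text.lower().split():
        index[word].add(post_id)

def search(query):
    """Return the ids of posts containing every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index[words[0]].copy()
    for word in words[1:]:
        results &= index[word]   # keep only posts matching all words
    return results

search("archive")  # finds posts 2 and 3
```

Even this toy shows the limitation Ogle describes: it can find a word, but it has no notion of when something happened or what it meant to you.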

Archiving the Internet…


Week 4: Assemblage Theory and Social Complexity

First of the readers we have Actor Network Theory (ANT), essentially a (French post-structuralist) methodological research and analytical tool for understanding how (not why) heterogeneous networks are formed, the actors (person/object/organisation as components) that constitute the network, and how much agency (relations/influence/power) each actor extends within the ‘assemblage’, which is of course made of smaller assemblages and is in turn part of larger assemblages, as are the actants within. Additionally, it is down to the researcher’s powers of ‘litanising’ every component that defines a given apparatus’ scope and thus its usefulness as an analytical tool.

ANT often presupposes that all actants within the network have roughly equal agency in any given assemblage; however, this is usually not the case, as certain components may act as ‘nodes’ within any given network, constituting a gravitational focal point for the agency of actants, an interior relation to the assemblage. Remove these ‘nodes’ and the system will fall apart, whereas smaller or peripheral actors within the assemblage may be removed or transposed with little effect on the network, an exteriority of relations; ANT doesn’t seem to take into account this scalability and mutability of assemblages, nor, in most cases, digital networks. While there is little word-space to go further, let’s move on to…

Assemblage Theory and Social Complexity is less a methodology like ANT and more an ontology that utilises expanded assemblages to explain the how/why of relations within social constructions at different scales. DeLanda posits that components (actors) within an assemblage are constituted as material/expressive (semiotic) forms (their properties) and exert agency (their capacities) by territorialising or deterritorialising an assemblage, in which specialised systems (such as genetics or linguistics) encode/decode the assemblage for transmission of relations within the assemblage and to other systems of assemblages.

The most crucial departure from ANT is that actors/components cannot be defined entirely by their relations to others in any one given system (interiority); each contains its own portable properties that may be transposed between networks. Another departure is that actors are often real-world entities and not empty signifiers, and as such belong to a diverse range of assemblages simultaneously; ANT, by contrast, cannot take into account more abstract institutionalised relations such as power/inequality and other ‘events’ that may occur. Thanks to The Pinocchio Theory for expounding where Wikipedia hasn’t.
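The earlier ‘nodes’ argument, that removing a gravitational focal point collapses the network while peripheral actors can be removed with little effect, can be sketched as a toy graph experiment. This is an illustration of the claim, not part of either theory; the hub-and-spoke network here is invented.

```python
# Sketch: does a network stay connected after one actor is removed?
from collections import deque

def connected(edges, removed):
    """True if the nodes surviving the removal still form one network."""
    nodes = {n for edge in edges for n in edge} - {removed}
    graph = {n: set() for n in nodes}
    for a, b in edges:
        if a != removed and b != removed:
            graph[a].add(b)
            graph[b].add(a)
    if not nodes:
        return True
    start = next(iter(nodes))          # breadth-first search from any survivor
    seen, queue = {start}, deque([start])
    while queue:
        for neighbour in graph[queue.popleft()]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen == nodes               # did we reach every surviving node?

hub_network = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("a", "b")]
connected(hub_network, removed="c")    # a peripheral actor: network survives
connected(hub_network, removed="hub")  # the node: "c" is cut adrift
```

The exteriority-of-relations point is visible in the code: peripheral actors carry their identity out of the network intact, while the hub exists only through its relations.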

Well, that was a lot of working through the theory, but what of digital distribution? The Internet is essentially the largest network assemblage; perhaps we need a couple of quantum computers to compile that extensive list. In the meantime, please check out this Giant Map of Web Trends.

Join the Inter-network…


Week 3: Print Adaptation and Digital Assimilation

While last week I mentioned some of the broader issues involved in the transition of traditional print media to web-based distribution, I now want to look at some specific examples that highlight the implications of a digital transition. First off the presses:

Judge Rejects Google’s Online Library Monopoly
Google’s latest and most ambitious project, to digitise every published book and build a vast online library, has been rejected by a US judge due to copyright issues and the monopolisation of e-book profits. One of the contentious topics was ‘orphaned’ works whose rights holders cannot be found, which would give Google the means to profit from free public material. While having a vast worldwide online library makes for rich content and ease of delivery, having the largest company in the world controlling digital rights to and profiting from such works is dangerous territory. More interesting is the uneasy alliance brokered between Google and many publishing companies in support of this legal settlement, where previously the Authors Guild and the Association of American Publishers had sued Google in 2005 over the very same practice. Check out This Follow Up on the uneasy alliance and Google’s other options.

The New York Times’ Digital Paywall
As a follow-up to this week’s reader we have the official announcement of the New York Times’ blanket ‘paywall’ across their online content. The first 20 articles are free, an effort to keep the casual readership that makes up the bulk of their internet traffic, but thereafter subscriptions start at $15 for four weeks. The paywall system is fast becoming a contentious issue in online journalism, raising the question: can one put a price on quality journalism in an online environment that was always designed to be open and free? And to what extent does the NYT employ a pricing structure based on the quality image of its publications, rather than a need to bolster revenue from online subscriptions, when Digital Advertising Will Increase, at least according to the most powerful man in advertising, Sir Martin Sorrell? The scalability of the paywall system is worth a mention, in that subscription prices will fluctuate with market trends in advertising expenditure, but perhaps other bastions of print like the Guardian are waiting and watching to see which models of price/distribution/content will win out.
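The metered model described above is simple enough to sketch. The 20-article allowance and the $15-per-four-weeks price come from the announcement; everything else here (names, structure) is an illustrative assumption, not the NYT’s actual implementation.

```python
# A sketch of metered-paywall logic: a free allowance per period,
# after which only subscribers get through.
FREE_ARTICLES = 20
SUBSCRIPTION_PRICE = 15.00  # dollars per four weeks, per the announcement

class Reader:
    def __init__(self, subscribed=False):
        self.subscribed = subscribed
        self.articles_read = 0

    def request_article(self):
        """Return True if the reader may view another article."""
        if self.subscribed or self.articles_read < FREE_ARTICLES:
            self.articles_read += 1
            return True
        return False  # hit the paywall: prompt for a subscription

casual = Reader()
for _ in range(FREE_ARTICLES):
    assert casual.request_article()   # the free allowance
casual.request_article()              # the 21st request hits the paywall
```

The ‘scalability’ the post mentions would live in `SUBSCRIPTION_PRICE` and `FREE_ARTICLES` being tunable per period as advertising revenue shifts, rather than hard-coded.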

Google vs. Apple in Digital Media Distribution
Finally we have the proverbial clash of the digital titans over distribution through the internet and micro-electronic devices. Why erect paywalls when you can embrace a vast pre-existing distribution network and consumer public? Google’s Android platform recently became the biggest-selling operating system on micro-electronics, with easily customisable product prices and content, compared to Apple’s higher revenue percentage and lack of flexibility. Clearly both platforms will be utilised for maximum revenue, yet it calls into question how the technology that mediates and supports these distribution networks shapes the pricing structures and marketability of publishers’ content. While Murdoch has The Daily: iPad edition, he Plans to Introduce Paywalls to Australian Papers next year.

Bring on the links…
