Week 3 – Fusion

Laura from NIEA and I battled our way through CSV files, KML files and the buggy Google Fusion Tables interface to make some interesting-looking maps.

We were interested to see how arson crime in NSW is distributed geographically; we thought more arson would be occurring in bushfire-prone country areas.

Using arson crime data from the NSW Government, we were able to plot the places where arson occurs on a map. On its own that isn’t particularly useful, because you can’t really see any detailed information. We then fused the arson data with electoral boundary data to turn the points into larger, more meaningful areas on the map. Linking the area boundaries to the arson data let us colour-code each electorate according to how much arson has occurred there (there’s a rough code sketch of the same steps below).

Colour coded map of arson crime by electorate in NSW.
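For the record, here is a rough sketch of how the same fuse-and-colour step could be done in code rather than in Fusion Tables, using geopandas. The file names and column names (nsw_arson.csv, nsw_electorates.kml, latitude, longitude) are placeholders rather than the actual dataset, and reading KML depends on which driver your geopandas install has available.

    import geopandas as gpd
    import pandas as pd

    # Arson incidents as points (placeholder file and column names)
    arson = pd.read_csv("nsw_arson.csv")
    points = gpd.GeoDataFrame(
        arson,
        geometry=gpd.points_from_xy(arson["longitude"], arson["latitude"]),
        crs="EPSG:4326",
    )

    # Electorate boundaries from KML (driver support varies by install)
    electorates = gpd.read_file("nsw_electorates.kml")

    # Spatial join: attach each incident to the electorate it falls inside
    joined = gpd.sjoin(points, electorates, predicate="within")

    # Count incidents per electorate and colour the polygons by that count
    counts = joined.groupby("index_right").size().rename("arson_count")
    choropleth = electorates.join(counts).fillna({"arson_count": 0})
    choropleth.plot(column="arson_count", cmap="OrRd", legend=True)

This is essentially what the Fusion Tables merge was doing for us behind the scenes: match each point to the polygon it sits in, then shade the polygon by the count.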

Before we began, we were expecting there to be more arson crimes in rural and outback areas. On inspecting the raw crime values, we realised the counts are higher in urban areas simply because far more people live there, so we should have normalised the data by population and mapped arson per capita rather than raw counts. That can be a task for another time…
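If we do come back to it, the fix is only a couple of lines on top of the sketch above (again with a hypothetical population file and column names):

    # Continuing the sketch above: merge in electorate populations
    # (hypothetical file and column names) and map a rate instead of a count.
    population = pd.read_csv("electorate_population.csv")  # columns: Name, population
    normalised = choropleth.merge(population, on="Name")
    normalised["arson_rate"] = 100_000 * normalised["arson_count"] / normalised["population"]
    normalised.plot(column="arson_rate", cmap="OrRd", legend=True)

Mapping a rate per 100,000 people would take the population effect out of the picture and give a fairer urban-versus-rural comparison.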

Representation and Simulation

I’ve been reading some texts about interactive media for my Digital Theory and Aesthetics course. The first was Representation, Enaction, and the Ethics of Simulation by Simon Penny, and the second was Seven Ways of Misunderstanding Interactive Art by Erkki Huhtamo.

The first text discusses the use of simulations, in the military and elsewhere, for ‘body training’, the role of metaphor in simulators, and the constraints of simulation.

In a section about metaphorisation I found this quote that I liked:

Even in immersive stereoscopic environments (such as the CAVE) the user is navigating not a real space, but a pictorial representation of a space, according to certain culturally established pictorial conventions of spatial representation (such as perspective) established centuries ago for static images.

There, that’s related to whatever I was saying in my last blog post, something about accurate representations of what the human eye sees. Perspective and stereoscopy are established metaphors for spatial awareness and depth perception.

The text also discusses how these simulations can unconsciously train users in their responses to the images that appear to them. What we are creating with The Amnesia Project is not going to be a simulation, but users will have the chance to respond to and manipulate the images generated. Could this be used to train users? Could amnesia sufferers learn to adapt unconsciously using logging and simulation tools?

Week 2 – Immersive Cinema

The week two class was focused on the immersive cinema aspect of the course (which is the part of the course I am most interested in).

We traced a history of immersion in art, through techniques like perspective and stereoscopic depth that draw viewers further in. The successful techniques are those that more accurately recreate what the human eye perceives; using vanishing points in a drawing, for example, simulates the perspective that the eye or a camera sees.

Single vanishing point in a photograph

3D stereoscopy simulates depth perception to add another layer of realism to visual imagery. I think sound would add an extra something to an immersive experience, but it is an often overlooked form and we may not have the time to experiment with it or go into it in depth in this course.

We also had a brief walkthrough of the Unity environment from a game development perspective, although I can see how it could easily be adapted for immersive work. I’m not exactly sure how we would go about projection mapping the Unity scene onto a 180-degree cylinder section though; it seems like something you would need Unity Pro for. The interface is not too difficult if you have ever used Maya, and having some experience in C# development made the code side of things look familiar.

This is the third time I’ve been on a tour of the iCinema facility at UNSW, so I have seen most of the demonstrations before. There was one new demonstration that impressed me, but unfortunately my memory has failed me and I can’t remember what it was. What a coincidence. If only I had blogged about it immediately or had some kind of logging device to aid my memory.

Week 1 – Introduction

I watched Memento the night before the first class instead of going to bed. The memory loss in the movie seems to resemble what Claire has (based on the explanations Jill gave us). Apparently Memento is a very accurate reflection of what anterograde amnesia (inability to create new memories) is like in real life. How can you trust yourself if you can’t remember your motives?

I’ve been trying to imagine what it is like to suddenly not remember the last few minutes. So far I have been unsuccessful; everything we do is based on some form of memory.

Jack and I were hoping there would be more hands-on technical and coding work in this project. At the moment it looks like we’ll mostly be using the Autographer cameras to help realise the NIDA graduates’ narrative arc. But then again, no one seems to know what lies in store; maybe there will be a chance to work on the projection, visualisation and technical detail.

I can't remember what I did this morning; how am I meant to remember what I did in The Amnesia Project?