Blog interview about Social Fragments

posted 02 Aug 12

My new work Social Fragments is up and tweeting away at The Edge. I was interviewed about this work on The Edge website. Below is a copy of the interview:

What is the Tweet phone (does it have another name)?
I've called this work Social Fragments. It's an interactive installation that learns how to put together tweets using the words people use when having a conversation with it.

Where did you get the idea from?
Last year I was asked to consider ways people could announce their arrival in a space to others, in the same way you can announce your arrival by checking in on Facebook, but at a much smaller and more local scale. I thought at the time that social media would be a good way to do this. But I also realised not everyone uses Twitter, Facebook or Foursquare, so relying on these services exclusively would exclude people.

At the time I was also playing around with an idea I had proposed for Experimenta's current exhibition "Talk to me". Experimenta were looking for works that explored how we communicate with each other at this moment in time. So I proposed an interactive work that would let people hear and respond to tweets using speech synthesis and recognition. My proposal would have filled a room, so it was a little ambitious. Although I thought it was a good start, the idea has since matured into what it is now, on a much smaller scale.

What clever and crafty things did you have to do to put it all together?
The installation is part software, part physical object. The object is a wall-mounted, pixelated circle of strip LED lights shining through clear resin, with a red retro telephone handset hung in the centre. Embedded inside is a Google Android Nexus S handset running custom software written in Java.

The software I wrote uses the Nuance Mobile Developer SDK, the same speech technology that powers Siri on the iPhone. With this I'm able to convert speech to text, so that when you talk to the work I get a transcript of what you say. Likewise I'm able to produce a script and questions that are converted to speech and spoken by a synthesised voice with a peculiar take on an Australian accent.
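For readers curious what that ask-then-listen round trip looks like in code, here is a minimal, hypothetical sketch. It uses Android's stock TextToSpeech and SpeechRecognizer classes purely for illustration; the installation itself uses the Nuance SDK, and every class and method name below is my own assumption rather than the actual code behind the work.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.speech.tts.TextToSpeech;

import java.util.ArrayList;
import java.util.Locale;

// Illustrative sketch only: ask a question aloud, then listen for the visitor's answer.
// Uses Android's built-in speech APIs, not the Nuance SDK the installation actually uses.
// Requires the RECORD_AUDIO permission in the app manifest.
public class ConversationActivity extends Activity {
    private TextToSpeech tts;
    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Synthesised voice for asking a scripted question.
        tts = new TextToSpeech(this, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(new Locale("en", "AU")); // Australian-accented voice, if available
                tts.speak("What brings you here today?", TextToSpeech.QUEUE_FLUSH, null);
            }
        });

        // Recogniser that turns the visitor's spoken answer into a transcript.
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> texts =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (texts != null && !texts.isEmpty()) {
                    handleTranscript(texts.get(0)); // best guess at what was said
                }
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    // Would be called once the spoken question finishes playing.
    private void listen() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    private void handleTranscript(String transcript) {
        // In the installation, transcripts like this feed the Markov process described below.
    }
}
```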

I analyse the transcripts of answered questions using a Markov process, which lets me guess potential word structures based on the way a person answers the question. For example, it collects information such as starting words, ending words, and the words that go before and after certain words. With this information I play a game of chance, essentially rolling for the next word until I reach 140 characters, then tweet the result.
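To make that concrete, here is a minimal sketch of a first-order Markov chain of the kind described above. The class and method names, and the simple single-word state, are my own assumptions for illustration; the installation's actual code is more involved.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Minimal first-order Markov text generator (illustrative sketch only).
public class MarkovTweeter {
    private final List<String> startWords = new ArrayList<>();          // words that open an answer
    private final Map<String, List<String>> nextWords = new HashMap<>(); // word -> words seen after it
    private final Random random = new Random();

    // Learn word transitions from one transcript of a spoken answer.
    public void learn(String transcript) {
        if (transcript == null || transcript.trim().isEmpty()) return;
        String[] words = transcript.trim().split("\\s+");
        startWords.add(words[0]);
        for (int i = 0; i < words.length - 1; i++) {
            nextWords.computeIfAbsent(words[i], k -> new ArrayList<>()).add(words[i + 1]);
        }
    }

    // "Roll" for the next word until the 140-character tweet limit is reached.
    public String generateTweet() {
        if (startWords.isEmpty()) return "";
        StringBuilder tweet = new StringBuilder(pick(startWords));
        String current = tweet.toString();
        while (true) {
            List<String> candidates = nextWords.get(current);
            if (candidates == null || candidates.isEmpty()) break;   // dead end: no known follower
            String next = pick(candidates);
            if (tweet.length() + 1 + next.length() > 140) break;     // would exceed the tweet limit
            tweet.append(' ').append(next);
            current = next;
        }
        return tweet.toString();
    }

    private String pick(List<String> options) {
        return options.get(random.nextInt(options.size()));
    }
}
```

Feeding a handful of transcripts to learn() and then calling generateTweet() yields a chance-driven string of at most 140 characters, in the spirit of the word-rolling described above.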

Is this a one off or something that you are developing for ongoing uses?
I would like to make a series of them, given the opportunity. The software supports multiple languages, multiple voices and accents, and different personalities.

More broadly I'm interested in the ways we interact with our built environment, how we engage with computers and how they engage with us. My ongoing experiments may produce different objects over time; I hope this is the start of further investigation and exploration in this area.

What’s the @address for punters to follow?
You can follow the tweets at @SocialFragments.

Tagged with: The Edge, interview