I’m in a new band! Here is a video from our first gig.
If you like this kind of thing, you can listen to more here.
… in a game (yes, a computer game) with no guns, no puzzles, no violence of any kind. It just tells a great story, and rewards your curiosity. Gone Home is a wonderful example of what is possible with the genre, and makes me feel good about games again.
Play it! (Especially if you grew up in the 90s; there are some great nostalgia-inducing details.)
This is an interesting presentation from Bret Victor, with some great ideas for how we can support new kinds of data-driven visualisation in software design (with many implications for the design of other ‘programming’ tools):
The whole video is interesting, but the thing that really stood out for me was his statement that “programming is blindly manipulating symbols”.
Blindly manipulating symbols. What he means, of course, is that programming is an abstract (or indirect) activity. When you draw, you put pen to paper; where you place the pen is where the line appears: direct manipulation. When you program to ‘draw a line’, you are writing a set of instructions that must be interpreted by both software and hardware in order for a line to appear. What you manipulate (the code—symbols) is not what you produce (the image—a line). Programmers don’t know (they can surmise based on experience, but can never be sure) what the output of their programs will be.
For Victor, this is a problematic situation, in that it removes the directness of thought process that is manifest when working directly, in a material way, with the final artefacts of your production. (This was further elaborated in his earlier presentation ‘Inventing on Principle’). I agree with him on this point, that in most cases, indirect manipulation is a hindrance to idea generation.
What I want to say about the design that Victor presents here though is that while his work does make ‘direct manipulation’ easier, it hides a code/generation process behind the interface. Although direct manipulation is made more available for those who don’t need to learn to code, it is still only ‘direct’ within the technological limitations and UI affordances imposed by the designer of the software.
This is, of course, not to criticise Victor’s work; I think it’s incredibly valuable (and as he says, it’s just a starting point to get ‘tool makers’ thinking along these lines). Now, I’ve spent a lot of time over the years blindly manipulating symbols. And there are times where such blind manipulation can be valuable, especially if what you are interested in discovering is the limits of the particular software/code/hardware system that you are working with. Behind Victor’s interface, the ‘blind manipulation’ is still going on, it’s just hidden from the users of the tool. (This is of course normal for software; hardly any users of software ever interact with the code in any real way).
Writing this has sparked this incomplete thought: I wonder if there is some point between ‘blind manipulation of symbols’ and the ‘direct manipulation’ Victor presents here, one that could allow for a level of experimentation at both a direct-drawing level and a code level? Hypercard?
I found one of those projects that causes one of those ‘why didn’t I think of that?’ moments. It’s “The Aleph: Infinite Wonder / Infinite Pity”, a ‘modern’ take on the Borges story. The project takes random sentences from the Gutenberg archive and Twitter that begin with ‘I saw…’ and strings them together into a strangely coherent mass.
It is so simple that it’s a wonder that it didn’t already exist. What I love about it is that it uses a very basic metric to slice across a huge cross section of data, and presents it in a way that is compelling and beautiful. In some ways it is (more) effective than the Borges paragraph that it references, because of the way that it really does seem infinite: the awful UI paradigm of infinite scrolling has finally found an appropriate implementation. Combining literature and Twitter, it gives a strange, otherworldly sense of the immediate and the historical. It de-contextualises then re-contextualises to create something new, but maintains an implicit reference to something real in the world. The only thing that could make it better would be if every sentence linked back to its origin (though this may, in fact, break the otherworldly nature of it.)
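Part of what makes the idea feel so obvious in retrospect is how little machinery it needs. The core mechanism could be sketched in a few lines of Python; to be clear, this is my own naive reconstruction of the idea, not the project’s actual code, and the sentence-splitting here is far cruder than real prose would need:

```python
import random
import re

def i_saw_sentences(corpus):
    """Pull out every sentence in a block of text that begins with 'I saw'."""
    # Naive sentence split on terminal punctuation; real-world text
    # (abbreviations, quotes, tweets) needs something smarter.
    sentences = re.split(r'(?<=[.!?])\s+', corpus)
    return [s.strip() for s in sentences if s.strip().startswith("I saw")]

def aleph(corpora, count=5, seed=None):
    """String together randomly chosen 'I saw...' sentences from several sources."""
    pool = [s for text in corpora for s in i_saw_sentences(text)]
    rng = random.Random(seed)  # seeded only so the sketch is repeatable
    rng.shuffle(pool)
    return " ".join(pool[:count])

sample = ("I saw the sea for the first time. It was grey. "
          "I saw a tiger once, in a dream! Nothing else happened.")
print(aleph([sample], count=2, seed=1))
```

The ‘very basic metric’ really is just a string prefix; everything compelling about the result comes from the scale and strangeness of the source material, not the algorithm.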
It reminds me of a very old (please be kind) piece of collaborative student work that I was involved in: violence of text, in which, at one point, we took an essay about the ‘epigram’, and reduced it to an epigram by reducing the length of the text on each scroll through. Simple and kinda silly, but it has some interesting parallels; mostly in the way that the interface presentation is an essential part of the aesthetic argument of the piece. The interface is not a content container, rather, the interface is part of the rhetorical argument as to how the content should be understood. The interface is performative.
When I reflect on the digital work that excites me, I feel like I’ve actually been doing the same work over and over again for the last 10 years, without even realising it. I enjoy taking data (streams, databases, networks etc) and re-presenting them, using interface design, in ways that the data-designers never intended (assuming data structures were even designed at all). I want to show data out of context, I want to show data in new contexts. I want to take systems designed for temporal presentation (blogging software is the obvious example) and use them to present other, non-temporal structures. I want to take metadata and make it the presentation metaphor. I want to take comments and make them the descriptions. I want to take tags and make them the content. There is something here about improvisation, something about playfulness, something about rethinking norms in interface and presentation, while staying within familiar paradigms. It is struggling against a medium, but pushing lightly and with purpose.
When I talk about music composition, I often tell people that I like that my guitars have ‘character’, that I need to ‘fight’ them. This feels related to my design practice in ways that I’d never considered before. I like improvising by ‘fighting’ my guitar; and I like coding/designing by ‘fighting’ a database. I like pushing against boundaries to create something ‘beautiful’. With the guitar, the instrument and its inherent character is the medium; with software it is code and the code’s interface with other software and hardware. But the process and goals have similarities. My adeptness with each seems about equivalent. I have a certain defined repertoire and as a result I repeat myself a lot, but sometimes I surprise myself with something unexpected and wonderful. Sometimes these unexpected wonderful outcomes are a direct result of my inadeptness: in the fight against the medium the medium ‘wins’ in a sublime way. Could this be an argument against Bret Victor’s ‘inventing on principle’? Maybe there is a genuine benefit in working in a medium that is unwieldy and difficult to master. When I can’t immediately manifest my ideas, I end up with outcomes which are substantially different from what I would have got otherwise. It is a combination of my repertoire and my exploration of the nature of the medium that produce the final outcome. It is not a process of inspiration and production, but it is not totally uncontrolled either. The process is exploratory and improvisational.
This thought feels unfinished — I’m not really sure where I’m going with it — but I’m finding these connections between music composition/improvisation and interaction design a really interesting space that is certainly worth more exploration, especially regarding my PhD research, where I’ve become interested in the specific actions that designers can do to actively encourage improvisation within complex, cross-disciplinary design projects.
The Living Archive project — the project that has taken up all of my time for the last 2 years, the project that takes up so much time that I haven’t even ‘bothered’ (according to one unhappy customer) to update my one iPhone app to work on the slightly larger screen of the iPhone 5 — is now ‘live’.
‘Live’ is a funny word to use really. The Living Archive prototype, in some form or another, has actually been ‘live’ online for over a year. What ‘live’ means here is that the current version, the ‘new’ version, is accessible to the general public, to the world, unrestricted.
I have a few qualms about this ‘liveness’, mostly hold-overs from my former life as a professional web and software designer and developer. Looking at the version that is available to the public, I can’t help feeling there is so much wrong with it. It is buggy. It crashes. It only works on a selection of web browsers, a smaller selection of phones. It is not optimised for anything. It is unreliable, possibly confusing, underdesigned, unfinished.
The nature of this being a research ‘prototype’ as opposed to a commercial venture is that there are actually years of invisible thought underlying the design. That said, in this particular project, the manifestation of the research through the design is subtle. This is no cool data-visualisation project with exciting visual outcomes to share on design blogs. Nor is it a technologically complex project: none of the technology involved is new, or even remotely groundbreaking. Most of the research is hidden in the processes involved in producing the outcome, and the outcome itself is something of a side-effect of these processes.
The problem—where I perceive it—is that this ‘side-effect’ is the only thing the rest of the world sees. This tension between ‘research’ and ‘development’ presents itself in the project at every meeting, and is becoming more and more explicit as the project moves into a ‘public’ phase. Yes, this is a research project. Yes it is ‘only a prototype’. But people all over the world can use our project as a way of interacting with Circus Oz, a performing arts company with a reputation to maintain. There are so many things that we haven’t thought about, so many design decisions and issues that we decided to ignore because they were not the focus of the research. These things all glare at me, especially when I consider how it might seem to a new user, one who has no idea about the research nature of the project, interacting with the archive for the first time.
On the other hand, for a project that really only had two developers working on it (two developers who were spending most of their time on other, tangential research activity), it’s a pretty good effort. It mostly works, on a selection of browsers. The content is mostly accessible. It’s buggy, but nothing insurmountable. It is a very useful proof of concept of my (and the project team’s) research work. It is a useful tool for Circus Oz. Most of all, it is a useful tool for future research into digital performance archives.
Dan Hill argues in his wonderful ‘Dark Matter and Trojan Horses’ that “there is a danger in describing projects overall as prototypes, in that it suggests they are in some way “not real”, that they can be turned off, decommissioned”. I agree wholeheartedly with this statement. This is no prototype, it is the Circus Oz Living Archive, online, ‘live’, and real.
(Don’t worry—not another post about my blog)
I’ve been writing a lot using Google docs lately. As most of my writing is in aid of my PhD, I’m using it for several reasons. It’s common practice in academia to write collaboratively; RMIT (my University) has recently moved all our email and calendars across to Google; using Google docs means my PhD supervisors can keep tabs on what I’m up to, leave comments and make copy edits directly; it makes it easier to share my writing with friends and family for feedback…
One of the features of collaborating and sharing with Google docs is that in any particular document, you can see who is also looking at that document. You can see their name in the top-right corner, you can ‘chat’ (through text), you can even see their cursor and each character as they type. This is great for when you and one or more colleagues are actually working on a single document.
I’ve had a few strange experiences with this lately though. One is the experience when I’m in the middle of writing and I ‘see’ someone reading the document. Watching me write. It doesn’t matter who it is. Or even if they are actually watching (you see, with Google docs, you can see a name, but you don’t know if they are looking at the screen, if they have your document open in some hidden browser tab, or if they’ve just forgotten to log out). This feeling of being watched can be strangely constraining on what kinds of writing I’m willing to do. It’s like having someone looking over my shoulder.
The other experience is one of watching. Or more to the point, not wanting the writer to know that you are reading. I’ve opened colleagues’ documents several times in order to have a quick scan, and felt uncomfortable when I see their name up in the top right corner of the screen. How do they feel about me reading their half-finished work? It’s strangely like reading over someone’s shoulder.
I don’t know what to do about these feelings (other than to not use Google docs). But there must be some kind of happy medium possible between Google Docs’ real-time, see-everything collaborative editing, and only sharing ‘published’ updates.
…and not by me: 13 ways of looking at Medium
What I find particularly interesting about Medium (as discussed in the aforelinked article) is the fact that it organises its content into what it calls ‘collections’. Actually, not just that it organises content this way, but that the primary view on the content is collections. Not people. Not chronology. Not location. (Though I still have trouble separating what Medium calls ‘collections’ from ‘categories’ in this context).
I’ve been thinking a lot lately about the assumptions that we have around content organisation online, and Medium appears to be something of a shift. A shift to where exactly? Who knows… Anyway, read the article, it’s good.
A few new online services popped up recently — loaded with plenty of hyperbole — that seem to indicate a move towards more control over public stuff online. It’s something that I’ve been thinking about for a long time, and it’s great to see it showing up in the public sphere.
First up is Branch, “A new way to talk to each other”. At first glance it is something like a semi-private (invite only) hosted discussion board. Control over participation. I found it especially interesting that “go beyond 140 characters” is touted as a feature — it’s funny what arbitrary restrictions will do to your perception.
App.net provides a semblance of control over development trajectory. It is billed as “a real time social feed without the ads”. Will a Twitter clone paid for by developers be better for developers than Twitter has been recently?
Medium is “rethinking publishing and building a new platform from scratch”. It comes across as something of a blogging platform, but rather than being organised by time, posts are organised in ‘collections’. A hosted blogging service with control over organisation?
Looks like a trend to me…
Something I drew as part of my PhD research, then forgot about, then found again.
Another blog post about my blog
I’ve realised (and have been told) that the more I post on this blog regarding my PhD study, the more incoherent and confusing this blog becomes. I need to document my PhD research somewhere, but this blog doesn’t seem to be the place: blog posts are too closely tied to the moment, too personal, too structured around text; and this blog in particular was never set up to be an academic repository of any kind. I feel too constrained by the blogging structure to put up ‘unfinished’ ideas here, and an ‘academic’ style of writing is odd when placed in relation to some of the other contents on here.
To address this issue, I’ve set up a dedicated site specific to documenting my research: http://phd.absentdesign.com. There is not much there at the moment (it is still very much a work in progress), but from now on it will be the go-to place for anything related to my research work.
This blog will now return to being a personal exercise: not so serious, shorter thoughts and reactions, more of what’s going on with Absent Design and my other software venture, Paper Giant. More blogging.
I’ve seen quite a few examples of the genre of work in the video above (though this is a particularly good one: do watch it to the end). Oddly, I happened to watch it just before reading the following passage on “small multiples” in Envisioning Information:
At the heart of quantitative reasoning is a single question: Compared to what? Small multiple designs, multivariate and data bountiful, answer directly by visually enforcing comparisons of changes, of the differences among objects, of the scope of alternatives. For a wide range of problems in data presentation, small multiples are the best design solution. 1
Last time I read Tufte extensively was in the context of doing information design work for the ACID/ABC Pool project. Today it is in the context of thinking about how people make sense of information in the context of digital video archives. I’ve been thinking for a while that one of the powerful aspects of the digital video archive is that it can allow multiple visual comparisons in a way that physical archives can’t, due to the limitations of analogue technology. “Small multiples” is definitely something I want to explore in more depth in the next stage of my PhD research.
Just More Regular Posting instead! No guarantees about quality.
To get things rolling, some recent goings on…
I presented a few different takes on my PhD research recently, one to the Design Futures Lab at RMIT, another to a panel of researchers for my official PhD progress report. I got some great feedback, and had some interesting discussions about language, about archives, and about the difficulties of doing PhD research as an ‘interaction designer’ in a semi-commercial context. What this also means is that, once some paperwork is filled out, I’m officially half-way through my PhD. That was a quick year and a half.
An app-development company I started earlier this year with my friend and colleague Chris Marmo has reached round 2 of the RMIT Business Plan Competition. We have an interesting app idea that has developed directly from our PhD research, and we hope to have it out by the end of the year, all going well.
There is a reason (or actually a whole bunch of reasons) I haven’t posted anything here for a while: I’m trying to make sense of the density of my PhD work. I’m hoping I’ll have a better idea of how to write about it ‘in public’ soon.
Meanwhile, here is a little sketch which describes my current train of thought:
One of my most used apps on my Mac at the moment is a prototype of my own making: a plain text editor with a working title of subtext. You could say I designed subtext by accident. I’ve been working on another app (in collaboration with Chris Marmo), and I built subtext as a tech demo: I was learning how to create view-based NSTableViews in Cocoa, learning about the NSControl system and some of the ins and outs of text handling. The idea for the tech demo was to see if I could build a table view that could handle multiple cells containing passages of text, with a few animated elements, copy & paste support, drag to reorder, dragging between documents, and a few other things. Basic stuff.

On a whim, to test my knowledge of control event capturing, I added a new interaction: hitting ‘return’ at the end of a line would, rather than inserting a new line into the current cell, insert an entirely new cell of text. This might not seem so dramatic; it’s just like treating cells as paragraphs, right? Well, the answer to that is ‘sort of’. What I discovered is that having paragraphs as draggable ‘cells’ by default (which affords quick structural editing), and having paragraphs conceptually separated (by horizontal lines) — and this is not hyperbole — completely changes the way that I think about writing.

Subtext is like a funny cross between a plain text editor, a basic outliner (with no nesting), and an iPhone-style list. When using a regular text editor I write in ‘paragraphs’ or ‘sentences’. When using an outliner I feel the need to establish structure before writing. When using subtext I write in ‘thoughts’. What subtext enabled was something I didn’t even know I was missing: a different level of control over structural flow. Using subtext, I can create an outline as I go. I can write an idea as soon as I think of it, out of order, and move it later. I can delete a thought without worry. I can easily skip over a paragraph while reading.
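The cells-instead-of-newlines idea is simple enough to model outside of Cocoa. Here is a toy sketch in Python — the class and method names are my own invention for illustration, not subtext’s actual implementation — showing how ‘return’ grows the list of cells rather than the current cell, and how reordering becomes a list operation on whole thoughts:

```python
class SubtextDocument:
    """Toy model of 'paragraphs as cells': a document is an ordered list
    of cells, and the return key opens a new cell rather than extending
    the current one. (A sketch only; the real app is a Cocoa prototype.)"""

    def __init__(self):
        self.cells = [""]
        self.current = 0  # index of the cell holding the cursor

    def type_text(self, text):
        self.cells[self.current] += text

    def press_return(self):
        # Instead of appending '\n' to the current cell, insert an
        # empty cell directly after it: one 'thought' per cell.
        self.current += 1
        self.cells.insert(self.current, "")

    def move_cell(self, src, dst):
        # Structural editing: drag a whole thought somewhere else.
        self.cells.insert(dst, self.cells.pop(src))

doc = SubtextDocument()
doc.type_text("First thought")
doc.press_return()
doc.type_text("A later idea, written out of order")
doc.press_return()
doc.type_text("Second thought")
doc.move_cell(1, 2)  # drag the out-of-order idea below 'Second thought'
print(doc.cells)
```

The interesting design observation is how little is going on here: the only change from a plain text buffer is what ‘return’ means, yet that one change is what makes cells feel like movable thoughts.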
Regular text editors can do this too, but subtext seems designed for it. It’s a subtle difference, but an important one — one that has changed the way that I write. This points to two important things related to my PhD research: the first is the act and impulse to collect (much, much more on this later, I promise). The second, which I’ll discuss here, is the material practice of prototyping.

Something that I think is missing from the UX and IxD design world is enough focus on material practice as a way of discovering and knowing. What I am talking about is making as a way of discovery. Design as research 1. Here is my personal revelation: the process of making software can materially change the way that a design works, and — more importantly — changes what is possible to think of. When working with digital materials, prototyping is the best way to do this. I’ve talked about ‘dynamic gestalt’ 2 before: when working in a digital space you can’t understand something until you use it — you have to actually make something to know what it is.

Now: I know that I could never have ‘designed’ subtext had I sat down specifically to do it. I would never have considered that it might be useful to me. I would never have considered using the interactions that the app now includes. I certainly wouldn’t have thought that a basic structural text editor could become my most used app. The process of making made all this possible. This is why it is important for designers of technology to actually work with their materials. It’s not enough to have a great idea, and it’s not enough to ‘design’ it: you have to make it to know what it is. You have to make it to know what it could be.
There is a little sketch that I keep drawing in my research notebooks: Two boxes with linking arrows. Sometimes the arrow is dotted, sometimes it is solid.
The experience of understanding the archive, of being in the archive, comes from the ability to make connections between objects. In a traditional physical archive, these connections are made through proximity (in time, in space), or through cataloguing decisions (made by an archivist), or through history (the access log provides a kind of record: who saw what, when).
When dealing with digital records, connections can be implicit. If dealing with a video archive of performance (which I am), videos containing the same performers should be related: they are the same in some aspect, some metric. Likewise for videos on the same date, or videos from the same location, or videos shot sequentially. Links are implicitly formed using data about the record. Metadata.
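This kind of implicit linking is mechanical enough to sketch. Assuming each record is just a dictionary of metadata (the field names and sample data below are invented for illustration, not the Living Archive’s actual schema), two records are linked on a field whenever their values overlap:

```python
from itertools import combinations

def implicit_links(videos, fields=("performers", "date", "location")):
    """Infer links between archive records from shared metadata alone.
    Returns (id_a, id_b, field, shared_value) tuples."""
    links = []
    for a, b in combinations(videos, 2):
        for field in fields:
            va, vb = a.get(field), b.get(field)
            if va is None or vb is None:
                continue
            if isinstance(va, (list, set)):
                # Multi-valued fields (e.g. performers): link on any overlap.
                shared = set(va) & set(vb)
                if shared:
                    links.append((a["id"], b["id"], field, sorted(shared)))
            elif va == vb:
                # Scalar fields (e.g. date): link on exact match.
                links.append((a["id"], b["id"], field, va))
    return links

videos = [
    {"id": "v1", "performers": ["Mel", "Tim"], "date": "1998-03-02"},
    {"id": "v2", "performers": ["Tim"], "date": "1998-03-02"},
    {"id": "v3", "performers": ["Jo"], "location": "Melbourne"},
]
for link in implicit_links(videos):
    print(link)
```

The point of the sketch is that no human judgement is involved anywhere: every link falls straight out of data about the record, which is exactly what separates this kind of connection from the community-generated and explicit links discussed next.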
There is another kind of link used in a lot of video community “archive” sites — YouTube is the obvious example — that of the community generated link. Tags. Comments. Browsing behaviour. A digital system can infer that videos are related because people say the same things about them (similar tags, similar descriptions), watch them one after another regularly (proximity through time), etc.
There is another kind of link between objects that could, and should, be afforded in a digital archive: “These two objects go together because I say so”. In the case of the Circus Oz Living Archive: Here are two shows that I saw as a kid. Here are two shows that I think should be related because they contrast in an interesting way. Here are two acts that go together because I think they are good jokes. Here are two acts that, if put together in sequence, might make a new and better show.
These links can’t be made via data in the content (implicit), or crowd-sourced consensus (community), these are links that can only come from individual understanding. You might call them explicit links. This is the user — the reader — acting as an archivist.
My argument is that individuals are best placed to know what meaning is inherent in any particular object, or in any relationship between objects. Crowdsourcing is great up to a point: the information from your social graph can be scarily accurate sometimes. But only you really know why you are looking at something, why you make a connection between two things.
There has been plenty of work around the power of the collective in multi-user environments, social networked sites. What I am interested in exploring is the power of individual user agency and knowledge in these environments.
So this becomes the question: how do we design a digital archive environment that encourages the formation of explicit links between objects?
It’s been more than a year since I released my first iPhone app, Time Flies. One year, 3 months, 4 days in fact. (Yes, I use my own app. If I didn’t do that, what kind of developer would I be?) It’s also been that long since I released my only iPhone app. I had plans to make more apps, but since then I’ve, let’s see… started a software development company with a friend, started a PhD, got married… you know… life, etc.
But this post isn’t to lament my lack of time to work on iPhone software. This post is to announce an update to Time Flies. An update! At last! With the two most requested features: reminders and data export. (It’s true what they say: you don’t have to respond to feature requests — you’ll hear about the most important ones again and again and again). I had planned for iCloud integration too (and have been working on it), but iCloud + Core Data is currently too unreliable for me to be happy releasing a product into the wild. I might rant more about my problems with this another day.
So now comes the part where time doesn’t fly: Apple’s obligatory “waiting for review” period. Expect the Time Flies update to show up on the app store in a week or so. Meanwhile, thanks to everyone who has given me feedback over the last year, 3 months and 4 days — even if I don’t respond to your email, know that I read and appreciate every one — and thanks for waiting.