Tweet Cascade: A listserv for Twitter

July 6th, 2010

So yesterday I finished enough code to feel comfortable launching Tweet Cascade. This is the post where I talk about it a bit.

I noticed sometime last year that, due to the way that Twitter is set up, there’s this interesting gap in the “what sorts of discussion Twitter supports” zone. Now, this is pretty much true of any communications medium (telephones are notoriously bad for groups to talk on, for instance), but two things stood out to me for Twitter.

1) It’s a communications platform. The open API is specifically designed to encourage new and interesting uses of the Twitter infrastructure and ecosystem. Further, the platform is almost entirely software-based. That is, I don’t have to design new hardware (like a new telephone) to take advantage of the platform’s flexibility.

2) I saw a solution. This is a big one. Generally I note a problem with a system and just shrug, unable to see an easy fix. This time it was different. In fact, it was such an obvious move that I’m now left wondering if the fact that I haven’t turned up any similar attempts makes me a total genius, or represents some equally obvious flaw with the scheme that I’m blind to.

Like most communication tools, Twitter is largely a product of its origins. It began, as far as I can tell, as an attempt to solve a massive flaw in SMS messaging: the inability to manage group conversations. Maybe I’m wrong, but this is how things have always felt to me. So Twitter started off as a way for close-knit friends (that is: people you might SMS) to keep in touch.

It’s grown since then, of course. Changed and evolved. But that original set of assumptions has shaped that growth and evolution so that Twitter is still largely focused on keeping tight-knit groups in touch with each other.

Of course this isn’t everyone’s experience of the service, which is one of the beauties of Twitter, really.

Anyway, because of that focus, Twitter is really good at managing the sorts of conversations and interactions that friends have. Whatever your friends post, even if it’s something like “I sure do like cheese!”, chances are you care simply because you care about the person writing. It’s not that you have some general interest in people’s cheese opinions. It’s that you have specific interest in the opinions of your friends.

Now, a second sort of thing has snuck onto Twitter, and that’s highly focused accounts. You know, like the one CNN uses to talk about breaking news. If you follow these, chances are you’re interested in everything that comes from them because they are curated/edited/filtered up front. They are narrow in scope because they aren’t about people, they’re about purpose. This is a pretty cool thing.

The problem I see is the one in the middle. What about those people who aren’t really your friends to the point where you CARE that they like cheese? One option is to simply ignore them. But what if they also happen to have occasional insights into a topic you do care about? Like (for instance) knitting. Currently you’re stuck between choosing to follow their account and ignore most of the stuff on it, or miss out on their amazing knitting advice.

Tweet Cascade is an attempt to find a middle ground. And it’s built on a relatively old internet pattern: the email listserv. Much like a listserv, Tweet Cascade works by establishing a new address (in this case, a Twitter account) for the robot which handles the discussion list. Then, when you want to talk to the group, you just send your message to the robot’s address and it handles the rest.

Using the language of Twitter: if you mention a robot account (that is, use @robot in a tweet — we call these Cascade accounts over at Tweet Cascade), the account picks up on that. Then it checks to see if it is following you, which is to say it checks whether you are a member; this matters because anyone can mention a Twitter account, and we don’t want non-members gaining access. If the robot is following you, then it simply retweets your message. That way anyone who follows the robot, but doesn’t follow you, will see it. And if they reply to the robot, you (and everyone else in the group) will see that reply.
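For the technically curious, that whole check-and-retweet loop is small enough to sketch out. What follows is not Tweet Cascade’s actual source, just a minimal PHP approximation of the logic; the endpoint paths are the Twitter REST API as I understand it, and the credentials, error handling, and remembering-where-you-left-off bookkeeping are all hand-waved.

    <?php
    // Minimal sketch of the Cascade loop: poll mentions, retweet
    // anything written by a member. Not the production code.
    $apiBase = 'http://api.twitter.com/1';
    $auth    = 'cascade_account:secret';   // placeholder credentials

    function apiCall($method, $url, $auth) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_USERPWD, $auth);
        if ($method === 'POST') {
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, '');
        }
        $body = curl_exec($ch);
        curl_close($ch);
        return json_decode($body, true);
    }

    // 1. Pick up everything that @-mentions the Cascade account.
    //    (A real version would pass since_id so nothing is handled twice.)
    $mentions = apiCall('GET', "$apiBase/statuses/mentions.json", $auth);

    foreach ($mentions as $tweet) {
        $author = $tweet['user']['screen_name'];

        // 2. Membership check: the robot follows its members, so only
        //    act if the Cascade account is following the author.
        $isMember = apiCall('GET',
            "$apiBase/friendships/exists.json?user_a=cascade_account&user_b=$author",
            $auth);

        // 3. Retweet, so everyone following the robot sees the message.
        if ($isMember === true) {
            apiCall('POST', "$apiBase/statuses/retweet/{$tweet['id']}.json", $auth);
        }
    }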

It’s very listserv-y. Like in email, each address is treated as a person, except for the robot. And talking to the robot stands in for talking to everyone you want to reach. It’s also a discovery-assisting device in that it aggregates interesting discussion in a centered, rather than horizontal, social structure. That makes it easy to find, and easy to join.

At the same time, Tweet Cascade adds something new. Something that isn’t necessary with email, but is helpful with Twitter. See, in email, you only get messages that are specifically targeted at you. But Twitter, in some ways, acts more like an aggregator, pulling in lots of data from sources you choose, regardless of whether that source wants to target you with that information or not.

Because of this aggregator-like behavior, and the unpredictable nature of the messages the various sources you follow might choose to send, having systems to filter those sources becomes useful. Twitter implemented one of these a while back: you don’t see messages that begin with a mention of someone you don’t follow. So you don’t see conversations between your friend and people you don’t know. Or, that’s the theory, and it mostly works.

Tweet Cascade provides yet another filtering mechanism. It lets you choose to add specific types of discussions to your feed without having to worry about their sources. Also, using the built-in suppression of mentions of unfollowed accounts, it frees you from any guilt over having massive discussions using Tweet Cascade. Your friends who don’t follow the Cascade account in question won’t have to see what you’re saying.

One final note: in order to better integrate things with Twitter, Tweet Cascade is managed through DMs rather than through a web interface. Once an account is hooked into the Tweet Cascade robot, all further modifications are handled through your Twitter client. I happen to think this is a nicely elegant design decision since it means that you don’t have to learn any new tools beyond the specific command style/language that Tweet Cascade uses.
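If you’re wondering what “command style/language” means in practice, here’s a toy dispatcher in PHP. The command names are invented for this example (they are not necessarily the words Tweet Cascade actually accepts), and the Twitter calls are stubbed out so the sketch runs on its own.

    <?php
    // Stubs standing in for real Twitter API calls.
    function followUser($user)              { echo "now following $user\n"; }
    function unfollowUser($user)            { echo "unfollowed $user\n"; }
    function sendDirectMessage($user, $msg) { echo "DM to $user: $msg\n"; }

    // Dispatch one incoming DM. The vocabulary here is illustrative only.
    function handleDirectMessage($sender, $text) {
        $words   = preg_split('/\s+/', strtolower(trim($text)));
        $command = array_shift($words);

        switch ($command) {
            case 'subscribe':   // join: the robot follows you back
                followUser($sender);
                break;
            case 'unsubscribe': // leave: the robot drops you
                unfollowUser($sender);
                break;
            default:
                sendDirectMessage($sender, "Unrecognized command: $command");
        }
    }

    handleDirectMessage('somebody', 'Subscribe');  // prints "now following somebody"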

So that’s Tweet Cascade. It’s up and running if you want to try it out, and I’d love to hear your thoughts on it.


Shew. Thesis is done.

July 6th, 2010

This actually could have gone up eight weeks ago, but there was what I suppose is a traditional post-thesis crash. Which isn’t to say I didn’t get plenty of other things done (there’ll be a post on that in a minute), but it is to say that I’ve basically put off any thinking/work related to Banyan Speak since then.

It’s about time to get back to that. But first, to catch people up if they’ve been here on the blog and haven’t seen the work I’ve been doing.

I gave my final presentation on May 3. You can check that out right here (it’s about 20 minutes long):

ITP Thesis – 3 May 2010 from Anathomical on Vimeo.

If you’d like to see the current draft of the paper explaining the project, feel free to take a look at the most recent Google Document. The appendices, especially, need work and clean-up, but the core ideas are all in place.

Which means now it’s time to get back into things. I think the place to start is with the Dreamwidth people. They’ve got some very smart developers who have a strong understanding of community. They’ve also begun implementing some features that make some of the ideas behind Banyan Speak worth trying on a strictly local level, which would mean getting to test out the architecture and deal with some user issues without having to worry about cross-site support.

We’ll see if I can manage to keep things more updated around here.


Creative Commons as a hacking exercise in triplicate

March 10th, 2010

A lot of Lessig’s thinking in Code can be seen in the way in which the Creative Commons project developed and ultimately went about achieving its goals. Specifically, you can see Lessig’s thinking about hacking, and about the way code affects behavior, in the three-level approach of the Creative Commons project. The way the project defined its three levels to write for (machine, human, legal) is interestingly analogous to writing a given piece of code for three different programming languages or operating systems. In fact, I think the operating-system analogy is the apter of the two.

With its goal of changing the way that people think about, distribute, and reuse creative works, the Creative Commons had to target multiple levels of society. Interestingly, each level needed similar things, but accomplishing those things looked vastly different depending on what was being targeted.

The legal system, the code of law (an especially apt turn of phrase in this instance) required a strict and carefully defined set of documents which conformed to the “operating system” of the courts. This code had to be comprehensive, well-planned, and legally unassailable if the project as a whole was to succeed. Errors at this level could easily cause the entire project to collapse as they could lead to the social goals of the project failing due to the lack of legal support. If the project claimed to encourage sharing, but legally did nothing, then it could not possibly succeed.

On the opposite end of the spectrum there was the machine-readable code. Computer code in a literal sense (though it might be more accurate to call it a code schema rather than actual code). It isn’t entirely clear to me just how make-or-break this was to the project’s success, but its importance could easily be rather high. The machine-readable part of the project was primarily about reducing the effort of using Creative Commons-licensed work. By marking such work in ways that computers could filter, the project made it much easier for people looking for materials to find them. This sort of friction reduction often makes the difference between a big project achieving a critical mass of adoption and it ending up in a niche of people who like the idea and are willing to put up with the tremendous amount of work required to sustain participation.

Finally, there was the human code. Making the Creative Commons project understandable in plain language (with the legal language supporting that plain language understanding) was vital for a number of reasons. It allowed non-lawyers to see what was going on with the project, but more importantly it allowed the project to explain itself. This was vital because at its core the Creative Commons is an attempt to hack the way society thinks. The legal and machine-readable code all exists to support a shift in human thought and behavior patterns, and the code of human society, while supported by both, is neither legal nor machine-readable.

Ultimately the Creative Commons project succeeded to the degree that it did because it combined hacks to all three of these “operating systems”. Hacking any single one of them would have failed because, for instance, changing the legal code without making people care about those changes would have made the project basically inert.


A look at Lessig’s “Code”

February 17th, 2010

I’ve always been very interested in looking back at the things people wrote about the internet ten or fifteen years ago. Back when the shape of things was so different, and we didn’t really have much of an idea of what was going on, or especially where things were going, but we tried to get a handle on it all anyway. Code was published in 1999, over a decade ago, before the dot-com bubble burst. Given the time (and the timing) it is not in any way surprising to see that there are plenty of things Lessig got wrong. What I find most fascinating is the number of things he got right, or at least right enough.

As is inevitable with this sort of predictive stuff, when you’re right, you tend to be right about structural things rather than specific details. Lessig rightly observed that 1) the structure of the internet, the protocols upon which it is built, are viewpoint agnostic, 2) commercial interest drives development of new protocols, 3) commercial interest benefits from better identification technologies for all sorts of reasons.

He was wrong about details like the advent of a widespread unified identity protocol. Which, in some ways, strikes me as quite interesting. It seems to suggest that users are willing to provide identifying information for specific purposes, but that carrying a sort of “photo ID for the web” isn’t something they want to do. This, perhaps, has to do with the increased awareness of identity theft risks: people would rather not put all their data in one place due to the risk of compromise. Even if, ironically, that’s safer than the current “put it in many places” strategy.


My angle on Cyberlaw

January 26th, 2010

While it’s not really in line with the readings, I did want to get my thoughts written down before the class really gets rolling.

While I’ve bounced around the internet enough to have plenty of different angles and takes on the copyright issue (music, media, software, and so on), my most recent looks have been focused pretty heavily on fan-produced derivative works. Fanfiction, amateur music videos, that sort of thing. And that has provided me with a number of interesting takes on why copyright matters, why people care, and also where people feel comfortable ignoring it.

So while I’m certainly interested in the actual legal structures which surround copyright law both domestically and internationally, especially as those structures struggle to deal with the massive changes the internet has inflicted on the media landscape, my main interest is really in things like personal reactions and justifications. Why do people think it’s important to protect the sanctity of authorial control?

I’ll almost certainly end up talking about this sort of thing quite a bit, but one of the examples that strikes me as extremely interesting is that fan-based groups, which tend to play pretty fast and loose with the letter of copyright and produce staggering amounts of highly derivative work without even thinking about seeking permission from the original creator(s), tend to be very upset when other people in their communities make derivatives of their (already derivative) work without permission. That juxtaposition is fascinating, and hints at a lot of complexities in what people want out of copyright law, or at least what people want out of whatever rules/norms end up defining authorial control of generated content.

Yeah. You’ll probably hear me talking about this a lot.


Banyan Speak – A first-pass explanation

January 26th, 2010

My ITP thesis project, the current working title of which is Banyan Speak, is, at its most basic, an attempt to decouple public/semi-public internet-based discussion from specific URLs. Or, put another way, it’s an attempt to make discussion threads embeddable, or at least portable, objects on the web.

Which isn’t much of an explanation, so let’s see if I can expand a bit on that. At the bottom of this specific blog post, you’ll find that you have the option to leave a comment. Maybe by the time you read this someone will have done so already. In fact, maybe they will have left a comment, and someone will have responded to it with something incredibly insightful. And perhaps that’s kicked off an incredibly intelligent discussion only partially prompted by this initial post.

Now, if you wanted to send an email, or talk on your own blog, or make a post to a forum and you wanted to draw ideas from my blog post itself, that’d be easy. You can just highlight what you want, copy, and then paste. Then, maybe, you include a link back to my original post for people who want to do more in-depth reading. But if you want to excerpt part of the discussion at the bottom of my post? Not nearly so easy. Sure you could highlight, copy, and paste, but you’ll find that because of all the meta-data about who said what when, it doesn’t actually move very well. And, further, if the discussion is ongoing, then the people who see your excerpt might well miss out on awesome new developments. And that doesn’t even get into the complexity of what it would be like trying to copy and paste a discussion that used an organizational technique like threading.

Banyan Speak is an attempt to take that discussion at the bottom of a post, and make it easy to display that discussion elsewhere on the web in a way that keeps itself updated. That allows people to participate in the discussion from my blog, or from the email you sent about my blog post, without privileging one over the other or requiring users to go to one location on the web or another.

There are quite a few reasons I think this is an important project to undertake, and I’ll probably try to outline a number of them as the project moves forward, but I think that’s enough for now.

Watch With Me – Movies on the couch… on the internet

December 16th, 2009

Watch With Me is a project I’ve been kicking around (and prototyping off and on) for over a year. It is, at its heart, and like many great projects, an attempt to solve a personal problem. But it’s also a problem I think many people have, even if they don’t know it yet.

A bit of background: during the summer of 2008, knowing that I was soon to move to New York City for graduate school, I quit my job and took a ten week roadtrip. I drove all over the United States crashing on friends’ couches and gorging myself on social interaction. I watched a lot of DVDs that summer, with literally dozens of people. Movies, TV shows, anything that looked even remotely interesting. And it was fun. A lot of fun. It turns out that watching things with your friends is incredibly enjoyable in a whole different way from watching things alone. Of course, I already knew this because I’d spent years watching stuff with my close friend Nikki in Alabama, and things were always more fun that way.

But that trip inevitably ended, and I packed up my tiny apartment and moved into an even tinier apartment (because, hello, New York City). While I was still settling into life in the big city it dawned upon me that all that fun I’d had watching stuff with people, and especially with those specific people, was a thing of the past. Who was I going to get to watch under-appreciated kids’ shows with in New York? (Though, to be fair, in a city like New York the problem is more one of finding them.)

Still, the experience was compelling enough that I wanted to find a way to repeat it, and given the explosion in internet technologies it seemed like the sort of thing that technology could help with. The first thing I tried was the most slap-dash. I talked to some friends via instant messaging and we figured out what we wanted to watch that was available on YouTube. Then we set up a chat room and tried to hit play all at once. Rather predictably this resulted in coordination problems: no one hit play at exactly the same time, and things only got worse when we tried to pause so someone could answer the phone or run to grab a drink.

It struck me that timing and coordination are, in fact, things that computers do extremely well. So it seemed obvious that software should handle coordination so that people can focus on watching and talking.

Thus Watch With Me was born. The first prototype was built on the open-source Flash player Flowplayer, but since I’m not a Flash programmer the rest of the system was in HTML and JavaScript, and it was hard getting the two to play well together. So I put the project on the back burner.

And while it simmered away, getting a couple of semi-successful tests, a whole bunch of browsers released versions with support for the HTML5 video tag. Which is when it dawned on me that this is what I had been waiting for. With the ability to write the entire thing in a single environment, I started over, which led us to where things stand now.

The Tech Stuff

The technologies that run the current system are:

  • HTML5 video, specifically the Ogg-Theora implementations in Firefox and Chrome
  • AJAX, with a terribly inefficient 1-second polling interval to keep each client synchronized with the server
  • PHP, to handle the server-side interface with the database
  • MySQL, mediated through PHP to handle all the dynamic data storage and retrieval

AJAX polling is obviously not the most efficient way to manage this sort of tight synchronization, and there are actually a number of huge headaches that have to be managed by using it, but it’s effective and avoids Flash. (And it sets the stage for migration to web-sockets whenever they manage to take off.)

One of the other things that drove the current, inefficient design was a desire to avoid running a server-side timing process for users to synchronize to. Instead, the system picks a single user as the timing source; on every AJAX polling action that user sends its current video timestamp (along with whatever other information needs to be sent, such as chat messages or system messages) to the server, and every other user gets that timestamp back as a synchronization target each time they poll. It’s messy and latency becomes an issue, but it works.
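In sketch form, the server’s half of that bargain is tiny: store the timing source’s latest position, hand it out to everyone else. Something like the PHP below, where the table and parameter names are invented for illustration and all the chat/system-message plumbing is left out.

    <?php
    // poll.php -- stripped-down sketch of the once-a-second sync endpoint.
    $db = new mysqli('localhost', 'user', 'password', 'watchwithme');

    $room     = (int) $_GET['room'];
    $isSource = isset($_GET['source']) && $_GET['source'] === '1';

    if ($isSource) {
        // The timing source reports its current playback position (seconds).
        $time = (float) $_GET['time'];
        $stmt = $db->prepare('UPDATE rooms SET video_time = ? WHERE id = ?');
        $stmt->bind_param('di', $time, $room);
        $stmt->execute();
        echo json_encode(array('ok' => true));
    } else {
        // Everyone else receives that position as a seek target.
        $stmt = $db->prepare('SELECT video_time FROM rooms WHERE id = ?');
        $stmt->bind_param('i', $room);
        $stmt->execute();
        $stmt->bind_result($time);
        $stmt->fetch();
        echo json_encode(array('seek_to' => $time));
    }

The interesting decision is all client-side: each non-source client compares seek_to against its own playback position and only seeks when the drift crosses some threshold, since seeking on every single poll would make playback stutter constantly.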

Which, really, is what matters. It works.

Three Square Meals – Extended

October 21st, 2009

The response to my audio project was way, way more positive than I had expected it to be. I’m not sure why, either. The project was definitely fun to do, but it didn’t (and still doesn’t) strike me as particularly compelling or mind-blowing. But apparently people enjoyed it. Maybe it’s just the rambly way I talk about food, or that my brain works in some weird way that’s interesting to see revealed. I don’t know for sure.

But with that response I felt like it was worth expanding the project to see how it would work as a much longer sort of piece. So I endeavored to record more meals. I was going to do each and every one of them, but getting into the before-and-after recording habit (mostly the before) is difficult, so I only managed to get 11 over the past week or so that I’ve been focused on it. Still, that’s not bad, and some of them turned out pretty interesting.

I don’t have a lot of thoughts on the project. Or rather I do, but I don’t feel ready to expand on them here. Basically, there was a lot there about the way the stories we tell (and want to tell) about ourselves shape our decisions. This is by no means new, but it remains really interesting to me, and I would like to return to the topic at some point.

Anyway, without further ado, and in chronological order…

01 – Breakfast:

02 – Lunch:

03 – Dinner:

04 – Dinner:

05 – Breakfast:

06 – Dinner:

07 – Lunch:

08 – Dinner:

09 – Lunch:

10 – Dinner:

11 – Breakfast:

Live Web Mid-term, a choice

October 18th, 2009

With mid-terms coming up in Live Web I need to pick a project. The problem is that I have two I might work on. The first is the project I know will end up being my final in the course, the project I came into the course planning to improve: Watch With Me. The other option is to build a Flash-based version of the game Telephone.

Watch With Me, at this point, needs mostly grunt work retooling. As a concept I’ve already proved it works, so any work for class wouldn’t really be about improving the concept at this stage. Still, it is work that needs doing and the project is really cool and worth executing.

Telephone, however, would be a relatively new thing for me: a project started from scratch, and thus one where a lot of the design and conceptual work still needs to be done. It’s also a much smaller project, the sort of thing that can be done to my satisfaction (and not need any more work) by the time mid-terms are due. Basically, it would use the webcams and built-in mics on laptops to create a chain of video chat users. You’re only connected to the person in front of you and the person behind you in the chain, so you have to pass any messages from one to the other. It might also simply be fun to play with.

Which is why I’m leaning in the direction of the Telephone project. While Watch With Me is, in the long term, far more compelling, in the short term Telephone could be more fun and represents more of a conceptual stretch for me. Especially since I’m intending to use Watch With Me as my final project for the class.
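To make the chain idea concrete, here’s a toy sketch of the pairing logic in PHP (nothing here is built yet, so the names and structure are purely illustrative):

    <?php
    // Each participant in the chain is video-linked only to their
    // immediate neighbors; messages have to be relayed down the line.
    function neighbors(array $chain, $user) {
        $i     = array_search($user, $chain);
        $links = array();
        if ($i > 0) {
            $links['behind'] = $chain[$i - 1];      // who you hear from
        }
        if ($i < count($chain) - 1) {
            $links['in_front'] = $chain[$i + 1];    // who you pass along to
        }
        return $links;
    }

    $chain = array('ana', 'ben', 'cat', 'dev');
    print_r(neighbors($chain, 'ben'));  // ben connects only to ana and cat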


Why does “live” matter so much?

October 8th, 2009

For a long time we turned to live media because its “immediacy” (more on the scare quotes in a second) was unsurpassed. You got literally current news from live updates. Of course “live” generally had a built-in delay. Not a huge one, but an appreciable (seconds to minutes) one. Still, this was pretty darn close to immediate, and we came to associate the concept of live media with the idea that “media doesn’t go any faster”.

Except that it does. Because, traditionally, “live” media has been filtered through the same publishing apparatus as other heavily produced media, and that has built-in delays. Modern communications technology, however, has given us access to unfiltered live media production (Twitter, Facebook, and more media-rich applications such as live streaming from a cell phone camera). And it turns out that for raw information, text is faster, more compact, and generally more useful than the forms of media we’ve traditionally considered “live”.

With news-delivery seemingly eliminated as an interesting application for live rich media, we’re left with only one obvious significant use: simulating real-world space. This is the realm of live performance (in both the entertainment and the educational sense of the word). Lectures, plays, improv, music, etc. These are things that we understand work best in live contexts where there is potential for feedback and interaction, and feedback and interaction of the sort which impacts the performance directly requires real-time speed.

One thing worth noting here is that while a screen tends to be a great way to receive a live stream of audio and visual data, it makes a terribly restrictive system for creating that data. That is to say: most live streams are one-way. It is hard to perform while watching a screen because it restricts your movements and involves multi-modal interaction with mismatched turn-taking. (That is: most feedback systems have data incoming at the same time that a performer has data outgoing. This is in contrast to performances in live space, where the tendency is to coordinate turn-taking so that data is only going in one direction at a time.)

One of the reasons this problem arises is that we still haven’t solved the turn-taking problem for online discussion. In face-to-face interaction we’ve come to intuitively handle multi-modal communication in ways that allow us to pass turn-taking information on a separate channel from the one we pass actual data-content stuff. Usually turn-taking is a body-language (visual) thing while data-content is vocal (aural). Most online interaction, however, is handled purely through visual data. Or, in the case of live voice chat, there is no good way to pass visual turn-taking data.

Which, at this stage, leaves me rather uninterested in live streaming of rich media. The point of going live should, I think, be that it enables interaction, but it’s not at all clear that interaction is enabled by current communication tools. I think it is well within our ability to create new tools which support interaction at this level, but at the moment I’m far more interested in interaction by audiences around media than in the way audiences impact performance. Consider, for instance, that watching a live stream of a concert is very different from going to that concert in person, even if you have the same audio/visual setup. The difference is that you aren’t sharing interaction space with the audience. It’s audience interaction that really drives a lot of the power of live performance, and I find that extremely compelling. But the weird thing to note is that audience interaction happens even with pre-recorded media (cinema, for instance), so there’s nothing particularly compelling about the live component there.

I realize that all of this is in extremely sloppy form, but that’s about how my thoughts run on the subject at the moment.

Three Square Meals

September 30th, 2009

When I think of audio stories I tend to think of much more conversational, more stream-of-consciousness work than I do with written stories. Now I know that this isn’t really the case; most audio stories are extremely carefully constructed in post-production, after all. Yet there’s something compelling to me about the more free-flowing stuff. Or, perhaps more accurately, the stuff that’s preconstructed in a more informal manner. Not with a written script, but with careful mental rehearsal and construction beforehand. That interest is what drove this particular project.

“Three Square Meals” is an audio triptych of sorts. It is a before and after of three meals: lunch, dinner, and breakfast. Since each segment is under a minute, there’s an interesting sort of time compression. Since the segments are not formally scripted they’re a bit rambly, but I actually found this to be more of a feature than a bug. There’s an interesting sort of focus in which I, as the narrator, have to construct the most important parts of a complex series of events into a few simple observations. This odd sort of compression is highly revealing about what I considered important about what was going on.

Yet while the tone is mostly conversational, there’s a strong sense, clear I think in the recordings, that I’m not involved in a conversation. My tone and construction are those of someone relating a story in isolation, outside the context of a conversation. There’s no sense that I’m going to have to respond to any sort of feedback from the people listening to the stories. This is the case despite the fact that these may be presented in class and I may, in fact, need to respond to my listeners. I’m fascinated by the way that such audience concerns seem somewhat inherent in my use of the medium. When speaking, if there is not an immediate opportunity for response, I don’t think of it as an interactive sort of experience, despite the fact that someone could record an audio response of their own and send it to me.

I’ve presented these pieces in chronological order of recording, but I suspect there could be some interesting effects to ordering them differently. Anyway, without further ado…

Three Square Meals – Lunch:

Three Square Meals – Dinner:

Three Square Meals – Breakfast:


Pichat

September 24th, 2009

The idea for this project grew out of an offhand comment during class, one that Shawn suggested was interesting and one that, as I thought about it, struck me as interesting as well.

“Pichat” is an attempt to force people into creativity in two separate, and rather unrelated, ways. It is, at its core, a relatively standard AJAX-based chat interface, but instead of transmitting text back and forth, Pichat transmits images. The only text used is that necessary to identify users.

The interface is relatively simple. Using an AJAX call to Google’s search API, a user may enter any valid Google search string and get the first four image results that fall within a specific size class. These four results are displayed for the user to consider. If the user decides to utilize one of these images, all they need to do is click on it and it will be sent as their message to the chatroom. If none of the four images seems suitable then they must search for a different term.
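For reference, the lookup itself is a single request. Here’s the shape of it, written server-side in PHP for brevity even though Pichat makes the equivalent call from the browser; the endpoint and parameters are the Google AJAX Search API as I remember them, so treat the details as approximate.

    <?php
    // Fetch the four image candidates for a query. A 'small' result set
    // returns four items, and imgsz constrains the size class.
    function imageCandidates($query) {
        $url = 'http://ajax.googleapis.com/ajax/services/search/images'
             . '?v=1.0&rsz=small&imgsz=medium&q=' . urlencode($query);

        $json = json_decode(file_get_contents($url), true);

        $images = array();
        foreach ($json['responseData']['results'] as $result) {
            $images[] = $result['url'];   // clicking one sends it to the room
        }
        return $images;   // at most four; no paging past these
    }

    print_r(imageCandidates('angry cat glare'));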

There are two points of restriction here, and both of them are in some way compelling. The first is that communication is image-only. Trying to compress a thought or expression into a single image (even when given the entire internet to draw from) can be an extremely difficult and creative process, and I’d love to explore the sorts of conversations that arise in this environment. Additionally, you only get access to the top four search results from Google. If none of them do what you want, rather than being able to page through to more, you must refine your search. Crafting a search string specific enough to get you what you want provides an interesting challenge, as you cannot simply search for something “close enough” and then dig through the pile of results manually until you find what you want.

On a down note, the code for this thing is abysmal. It’s a flat-file, full-text-dumping PHP implementation on the back end, which results in massive back-end processing and bandwidth inefficiency, significant front-end inefficiency, and a number of interface issues that just make it ugly.

Still, it may be something you want to try out, and if you do you can play around with it at

A Single Rose

September 23rd, 2009

Maya had been told that for most of the year the massive rail lines designated to serve the Citadel Military Cemetery stood mostly empty. A vast testament to how many had died, and how few survived to visit them. Today, however, the trains were packed. It was the one-year anniversary of the Battle of Kar’Dathra’s Gate, the beginning of the end, and it seemed that, for today at least, no one was able to suppress their need to mourn.

Maya’s uniform had won her a seat. The civilians pressing out of the way of the severely cut beige greatcoat. The rank tabs at her collar had won her elbow room, as even the others who had managed a seat in the cramped space backed away from her in a mixture of respect, awe, and fear. Sorceress-officers were rare, and those with a campaign ribbon for Al’Istaan rarer still. The sorcery corps had suffered casualties horrifying enough to be noteworthy in a battle that had killed millions.

A small part of Maya resented the space. She was just another woman who had lost everything to the war, she deserved no special treatment. Most of her, however, was grateful, for the space afforded by her uniform and rank meant she could write a letter. A letter that no one else would ever read.

Beloved Marcus,

Today marks one year since Kar’Dathra’s Gate. One year of waking in the middle of the night and reaching out to find myself in our bed alone. One year of turning to share an amusing observation with you only to be brutally reminded that you are gone. One year since your squad joined the list of those “missing, presumed dead”. I wish I had the eloquence to say what must be said. There are words bottled up inside me that I can’t express. It is said that time heals all wounds, and while the past year has proved that to be a lie, perhaps the inexorable march of time will eventually permit me to pour out what wells in my soul.

They told us that Al’Istaan would change the course of our nation, that it would set us once more on the inexorable path to greatness. In this they were half-right. The course of the United Republics of the Red Star changed, turning from mere stagnation to rapid collapse. I am surprised that we are not yet at war with ourselves, though I am grateful to whatever giver of miracles keeps us from tearing ourselves apart. There are those that fear that our nation could not survive a civil war, but that is not what haunts my troubled sleep. For our nation is doomed already, even if those in power are slow to realize it. No, what I fear is that our humanity would not survive a civil war. We are ruled by fear now more than ever, and the rhetoric from the Navy and the Party is that of strength and solidarity. A civil war would mean Al’Istaan all over again, except that we would be doing it to ourselves. Our people would be fed into the inexorable machine of war, and I’m not sure anything would be left to come out the other side.

And, yet, as much as I fear that the loss of our humanity is inevitable, I would gladly see it happen if it meant I could see you again. Just once, to catch your eye, to see you smile, to breathe in your scent. I know it is petty of me, but I would see our nation burn, and our people with it, if it meant you came back from Al’Istaan with me. Our nation is lost and unsure of itself, but I almost envy them. For I am lost and terrifyingly sure of myself. The wounds your loss has left on my soul will never close. I will never wake up not expecting your warm arms around me, or turn to glance over my shoulder without expecting to find your smile.

I find myself filling my days with duty. It is empty of joy, but it holds something that is like purpose, and while it terrifies me to admit it: I am no longer sure I have one of my own anymore. The memories of you hurt more than I can say. Each joy we shared now rubs my soul raw with loss. I hope and pray that you can forgive me, for when I wake tomorrow I shall immerse myself in my empty duty, drown my memories in a sea of mundane tasks, and I shall do my utter best to forget you.

I am sorry. I love you.

For eternity,

A few tears fell onto the page as Maya lifted her pencil, causing her to blink in surprise. She had not thought that there were any left within her. Not after the past year in which she had wept until she couldn’t anymore. Apparently the well of grief never fully ran dry. She lifted the letter and blotted it gently against the shoulder of her coat before folding it with careful precision and writing “Marcus” across it.

She stared at the name, blinking slowly, and lifted the crisp paper to her lips. It hurt to know that this was as close to kissing him again as she would ever come. Her eyes slid closed and her lips brushed across the paper in a signature more personal than anything she could have written. And she stayed that way even after the chime announcing the train’s arrival sounded. She could hear the people shuffling off, but could not bring herself to join them.

Eventually, after the sound of footsteps had faded into nothing but memory, she opened her eyes to find a single rose in her lap. Maya looked around the empty space as if that would reveal the flower’s source, but she was alone. Perhaps some stranger in that crowd of people had seen something of themselves in her grief, or perhaps it had simply fallen from one of the many bouquets and wreaths being carried today; it didn’t really matter which. Maya lifted the flower, letting it twist slowly between her fingers, and nodded to herself. Then she stood up, forced her shoulders square, and walked out into the snow.

It was time to see her husband.

Driving Forces

September 17th, 2009

I knew going in that one of the things I wanted to focus my thinking on for this course was the growing levels of social interconnection as facilitated by communications technology. So I suppose it was inevitable that the predetermined driving force I find most compelling is that connectedness will continue to rise, and rise rapidly, in terms of both demographic penetration (more people will have access) and ubiquity (people will have more and more regular access). This seems to be rather well supported by the current US administration’s push for a national broadband plan from the FCC, as well as continued infrastructure development by the major telcos (fiber-to-the-home, newer and faster cellular data technologies, bigger pipelines for cable data transfers).

This actually leads, at least in my mind, to a fascinating critical uncertainty: will the drive toward urbanization increase in importance, or decrease? One of the traditional functions of dense urban environments has been to foster high levels of interconnection. Because long-distance communication tools, especially for groups, have historically been poor, and because face-to-face contact is still the highest-bandwidth method of communication in regular use, urbanization has been essentially inevitable to whatever degree our logistics systems could support it. While urban centers certainly do more than facilitate communication, it strikes me that this has always been one of the most important drivers of the movement. Thus it becomes a serious question whether or not increasingly good tools for long-distance communication will be enough to ‘take over part of the market’ for communication facilitation from urban areas.

Telling stories online (an analysis)

September 17th, 2009

So last night I posted a log of my storytelling experience in an IM chatroom. This morning I figured I might as well do some analysis.

One of the first things that stands out to me (and justifies my obsessive time-stamping) is the fact that the setup and storytelling took twenty-five minutes, and the story itself took just over twelve. Since the story probably took about three or four minutes to relate in class (maybe five or six if you include discussion time), this is a rather significant slow-down. Not that this is particularly surprising, since there’s always a slow-down when moving from a high-bandwidth mode of communication, like face-to-face discussion, to a low-bandwidth mode like instant messaging.

But the slow-down isn’t entirely tool-based. Or, perhaps more accurately, it’s not tied to the technical aspects of the tools. Because while I do type slower than I talk, I can type extremely quickly. Combined with the way that we tend to distill things when they shift to text (elaborating less in order to make things more compact and coherent), I probably could have whipped the story out in a minute or two. Just looking at the content of my story-telling shows how little there actually is there. The story may run a hundred and fifty words, probably less, and it still took significant time to compose and transmit.

So there’s clearly more at work here than text being slower than speech, and I think it has a lot to do with the social conventions of IM, which are quite deeply drilled into my head. IM is a give-and-take medium. Turn taking is indicated by message submission, so the conversation tends to pause slightly after every line in an implicit offer to everyone else to respond. Only if there is no response for a while does the thread of the story get picked back up, which provides for a sort of stilted feeling if you were to read it aloud in real-time, but seems to be a natural expression of the IM medium.

Further, IM is generally considered to be a conversational medium rather than a performer-audience one. People are expected to interject and comment, and when they do the discussion is briefly derailed as people respond to that response. We don’t really have, or at least I and the people I spend time with online don’t really have, a set of norms for non-conversational story-telling. That means that most of our stories tend to come out looking like conversations rather than a more formal sort of presenter-audience interaction.

I don’t know if there’s much more to say than that. It’s not something that bothers me, after all. In fact, I think I rather like it. It does, however, highlight two important things:

1) Various mediums lend themselves to various uses. Picking a medium that is unsuited to your intended use may be a bad idea, or it may just result in something interestingly unexpected. After all, while it doesn’t look much at all like the in-class presentation of my story, I rather enjoyed the online telling of it too.

2) Computer-mediated interactions are about more than just the technological tools being used. While there’s nothing inherent in the technology of instant messaging that prevents a story from being told in a much more traditional form (what one might call a “wall of text”), there are cultural norms about the use of instant messaging that would make that feel weird, almost like a violation.

And so, I feel that the exercise was well worth undertaking, and that it was fun, to boot.

Telling stories online (a log)

September 16th, 2009

The assignment was to tell our story in an online environment. Ironically I was assigned one of the two I was fully prepared to do already: an IM-based chat. Since I know many, many people who are on AIM, and many of them are frequent chatters, I sort of cheated and dropped into an existing (and regularly occurring) chatroom. It also, conveniently, is peopled by total nerds. Who have questionable senses of humor. As you’ll see. I sanitized the chat logs of their SNs and then checked to see if they wanted anything else cut (which they didn’t), and here are the results (this is just the record of the discussion; my analysis will be undertaken in a later post (now linked)):


Identity is what you make of it

September 16th, 2009

It hadn’t taken all that long, really, for the pronoun confusion to set in. Four short months were apparently sufficient to produce a moment of cognitive dissonance whenever someone used “he” instead of “she”. It’s two years later and it’s not so bad now, but he still occasionally looks to see who “he” is.

Identity (a six word story)

September 16th, 2009

Multiple identities make for awkward introductions.


February 5th, 2009

I hadn’t thought that my first post of the semester would be for Mobile Media, but it seems it is.  We’ve been working with the absolute cheapest (and in a lot of ways least flexible) system for building software packages that interact with mobile phones.  Basically we’re using a PHP script that reads email messages and processes them.  In the US at least you can send an MMS message from a phone to an email address.  This allows for SMS-like interactions of a sort.

Last week we just worked on getting the code up and working.  Receiving and responding and all that (with a cron script making sure the system checks for new messages regularly).  This week we worked on actual applications.

Inspired by a discussion with a roommate, I realized that there is a vast unfilled niche in mobile communications: calls and messages conveniently timed to get you out of uncomfortable social situations (like bad dates). It’s common enough to be clichéd by now: have someone call or text you thirty or forty minutes into a date, just in case it’s going poorly, and you’ve got an excuse to get out quickly on short notice.

But often your friends are flaky and fail to call. Or sometimes you can’t arrange for someone to call. Wouldn’t it be nice if you could get a computer to do it? Well now you can.

Poorly, admittedly, but you can do it. If you send an MMS message to the system’s address with the body of the message containing the number of minutes the system should wait before texting you back, then after that delay you’ll get an “emergency” text message.
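In sketch form the whole service is a one-table database and a cron job. Something like the PHP below, where the table, column, and helper names are all invented for illustration and the mail-reading side is stubbed out.

    <?php
    // rescue.php -- sketch of the "emergency text" scheduler, run from
    // cron every minute. Illustrative names throughout.
    $db = new mysqli('localhost', 'user', 'password', 'mobile');

    // Stub: the real script reads the inbox that phones send their
    // MMS-to-email messages to.
    function fetchNewMessages() {
        return array();  // e.g. array(array('from' => '...', 'body' => '30'))
    }

    // 1. Ingest new requests. Each body is just a number: how many
    //    minutes to wait before texting back.
    foreach (fetchNewMessages() as $msg) {
        $minutes = (int) trim($msg['body']);
        $fireAt  = date('Y-m-d H:i:s', time() + 60 * $minutes);
        $stmt = $db->prepare(
            'INSERT INTO callbacks (sender, fire_at, sent) VALUES (?, ?, 0)');
        $stmt->bind_param('ss', $msg['from'], $fireAt);
        $stmt->execute();
    }

    // 2. Fire anything whose time has come. Replying to the address the
    //    phone sent from lands back on the phone as a message.
    $due = $db->query(
        'SELECT id, sender FROM callbacks WHERE fire_at <= NOW() AND sent = 0');
    while ($row = $due->fetch_assoc()) {
        mail($row['sender'], 'EMERGENCY', 'Call me as soon as you get this!');
        $db->query('UPDATE callbacks SET sent = 1 WHERE id = ' . (int) $row['id']);
    }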

Isn’t that useful? Now you know what to do next time you’re having a crappy date!


Scroll Roller – Version 1.0

December 19th, 2008

I’ve been thinking a lot about the scroll roller over the past couple of days. Tims took it back to his place on the subway, and he tossed me an email saying that a lot of people were very interested in it just from the way it looked. I thought that was really neat. With the show just behind us, I have also been giving a lot of thought to what it would take to make the scroll roller really ready for public demonstration. I’ll talk more about that later, but I figured it might help to talk about its current state first.

I’m going to include a bunch of pictures with explanations. These were all taken on the ITP light table. I have to say that a light table makes everything look better. I feel as if the scroll roller is far less impressive looking than these photos make it appear, but maybe that’s just over-exposure to the thing. Anyway, without further ado:

The scroll roller in its current incarnation is a very simple device: a potentiometer for control, run through a microcontroller, which runs through an H-bridge to control a pair of DC motors connected to the axles the scroll is set on. Tilt the lever hooked up to the potentiometer in one direction and the scroll moves one way; tilt it the other way and the scroll moves the other way. The farther you tilt the lever the faster the scroll moves, so you can have both fine control and quick movement using a single input device.

This is what the scroll roller looks like before a scroll is hooked up to it


Scrolls are (currently) mounted on specially prepared lengths of PVC pipe with lugs attached that match up with the lugs on the axles connected to the motors. The axles are held in place by sliding latches, so all you have to do to remove one is slide the latch open and pop out the axle. Then you slide your scroll into place and lock the axle back down.

The scrolls simply slide onto the axles so removing and replacing them is easy


You get a device that looks, I happen to think, pretty good. Even in its current rough-prototype stage. Part of this is the way the light table makes things pretty, but it’s just a generally aesthetically pleasing object. I especially like the way that the roller seems to be a frame of sorts, accentuating rather than distracting from the scroll.

This is what the scroll roller looks like when everything's put together


While this prototype has a number of problems on the technical end, I think they’re mostly materials-based. In fact, the core design seems very sound, hurt primarily by the fact that our motors suck. They were salvaged from a printer, and they just weren’t intended to exert significant force. They draw 100mA apiece, which is enough to move a piece of paper, but not enough to pull a scroll very well. I mean, it moves, but really only at one speed: slow. This means that the variable-speed part of the design is pretty nonfunctional at the moment. Still, it does work, and I’m pretty happy with it as a proof of concept.

There’s a lot more work to do from here, though. The obvious first step is replacing the motor and gear assembly with one that can meet the physical demands of the device. I suspect that the next step will be the most technically difficult: running a barcode across the back of the scroll and rigging a scanning system on the device so that it can tell what section of the scroll is being displayed. (Actually, we’ll probably need two readers, one at each end of the device, so that we can monitor the amount of tension on the scroll as well as allow for knowledge of display width if we have variable-width devices.) Once that’s done a lot of new options open up: we can allow for “bookmarking” of sections of the scroll (local and possibly shared-across-the-net bookmarks), and we can let the microcontroller automatically scroll through the scroll (either through specific scenes or simply throughout the entire length). Basically we can allow the scroll to become a traditional “hands-off” art object in addition to an interactive one.

In addition to the additions in control hardware and software, there are some structural changes to make to the device. Three come to mind: first, the light table made things look so much better that I figure as long as we have to power the device we might as well include a backlight; second, we need to figure out how to handle wall-mounting the device as a means of display (this may mean lighter materials, but perhaps not); and third, we need to figure out how we want to manage the input side of a much more complex device (go wireless with a remote control? leave it on the device directly even though it will take up more room and potentially be more distracting?). Anyway, I’m still excited about the project and can’t wait to move forward.