Archive for the ‘itp’ Category

Animating comics – The Red Star

Tuesday, December 9th, 2008

For my final project in Comm Lab I ended up doing a piece of animation using a body of artwork that I’ve wanted to work with for quite a while: The Red Star. The Red Star is a graphic novel by Christian Gossett and its art has a great sense of scale to it.

I’m so taken with the art and story of the original that I didn’t really want to do something narratively transformative with it, which left me with attempting to animate what was already there. This project ended up being an extremely educational one on two fronts:

1) This was the first and only project for Comm Lab which I undertook solo. I found that working in groups teaches significantly different things than working alone does. This is probably because we were working in new groups with new people pretty much every week, so a large part of each project was developing functional group dynamics. This meant that I spent more time doing the creative work at an intuitive level while spending my conscious mental energy on group management. This was fun and useful, but it made for a very different experience from doing solo work. I found myself much more focused on composition, structure, and style while working alone. Part of this may also be attributable to the fact that I was working on something I cared about pretty deeply, but I’m not sure if that was a significant factor or not.

2) The second thing I learned, other than the interesting (to me) revelation about what I learned when, was that taking comic images and animating them is extremely difficult when one tries to maintain the “look and feel” of the original work. Comics are sequential in a way that guides the mind to fill in the gaps of movement, so making the art actually move is somewhat jarring. Most comic art illustrates only end positions, allowing the mind to fill in the simple transitions. Without art for those transitions, though, it is hard to animate things: end positions moving around don’t tend to work very well.

The sequences I was happiest with were, by and large, scenes with little movement. My favorites were the opening sequence, where a single animated element is overwhelmed by the static art, and the long shot of the cemetery, where the movement was accomplished with cross-fades that made it feel much more comic book-y.

The problem is that this sort of scene is often hard to manage with comic book sources, since the art is frequently dominated by the parts that should move: lots of close-ups on faces and such. I feel like The Red Star might be a near-ideal source of material for this kind of thing because it has so many epic-scoped images.

Without further ado:

Overall it was a fun project, and I suspect I may continue working on it. I feel like I learned a lot about animating comic art, and I’d like to keep at it for a bit.

Thomas

Animation project: Poor Andy

Friday, December 5th, 2008

As has been consistently the case in this class, it was working with Si on the animation project that was most interesting.  Of course the project itself was fun, especially since I hadn’t worked with animation before, so this was the most technically informative project I’ve worked on.

Si and I agreed to work together after class when the assignment was given, but didn’t really know what we wanted to do.  I had the idea of possibly using some hi-res scans of some comic books I have, but Si had a much better idea.  It started off vague: she had done some animation before using Andy Warhol.  Nothing major, but enough to have a few prepared assets.  With that as a starting point we realized it’d be fun to have Andy interact, in some way, with some of his works.

Our original storyboard involved him directly talking to his paintings, but we realized this would be extremely limiting because in the case of Jacky and Marilyn they were headshots only.  That limited the sorts of scenes we could do, so we tweaked things.

This involved more asset acquisition and preparation, but I feel that it was worth it.

While there were elements of the project that could definitely use improvement, overall I’m very pleased with how things turned out.

VIDEO EMBED GOES HERE WHEN IT IS READY

Thomas

Almost done with the scroll roller

Thursday, December 4th, 2008

As of Wednesday night, the basic construction of the scroll roller is done.  We haven’t gotten a scroll in place to test it, but hopefully on Saturday we can get started with that.  The emergency stop mechanism was easy to design, and needs both testing and software implementation.  The latter’s just a few minutes of work, and the former is true of the rest of the system: testing required.

On the construction side, things are looking good.  Tims came up with a great way to secure things (small sliding latches) and it looks like with one small exception everything is in good working order.  That exception is, unfortunately, one of our gears.  The gear on our left-side axle has a couple of teeth which have been damaged.  I suspect this was caused by me while removing it from the printer.  It’s minor damage, and almost appears cosmetic, but it’s sufficient to cause the motor to jam when that part of the gear engages.  Tims managed to do some minor repair work, but I’m not sure how well it’ll hold up under strain.  The right-side assembly works fine, and we’re looking for a replacement gear, but if we can’t find one I think we’ll be okay.

Tims is working on getting us a scroll printed, and I’m trying to figure out precisely how we want to handle mounting of scrolls to the entire assembly.  I suspect it will be something involving PVC pipe.

After the eight or so hours of work on Wednesday I think we’re less than another eight from completion, so I feel quite confident about our schedule.  I suppose we won’t know for sure until we start testing and provide an opportunity for things  to go catastrophically wrong.  I’m sure there’ll be an update for that.

Thomas

Scroll Roller, the continuing story

Tuesday, December 2nd, 2008

Progress has been slow but steady on the scroll roller I’m working on with Tims for my final (as opposed to the cube).  Mostly things come in spurts.  But as of this evening I feel that we’re in good striking distance of done.

Over the weekend I learned some important lessons:

1. TIP120s are not very well labeled when it comes to their BCE (base, collector, emitter) pinout.
2. The cheap little potentiometers we got with our kits are… um… not rated for 1A of current.
2a. Trying to do this will break your potentiometer in bad ways.
2b. It turns out that it is possible to cause a potentiometer to flash-combust.  Sadly I have no documentation of this fact and am unwilling, at the moment, to demonstrate it again.
3. 20mA, despite my overly optimistic hopes, is not enough to spin a DC motor of any significant size.
4. A common ground works best if you connect ground to ground rather than 12V to 5V.
5. The junk bins in the shop are filled with some amazingly nice pieces of wood around finals time.
6. There are some engineering problems for which hot glue is not a wholly adequate solution.  It may end up being an integral part of the solution, but it is not sufficient on its own.

Here’s the basic state of things:

I have a pair of DC motors hooked up to TIP120 transistors which are controlled via Pulse Width Modulation (PWM) from an Arduino microcontroller.  Or, more simply, I have two motors which I can spin faster or slower depending on computer commands.  I have a potentiometer, which we’re going to try to build a spring-tensioned lever onto, that is used to control these motors.  When the potentiometer is centered, nothing happens.  As you turn it one way, the motor on that side spins faster and faster.  As you turn it the other way, the other motor spins.  From where I sit the software works great.  The only thing that I think might end up tweaked is that I might use some sort of logarithmic scaling function for speed adjustment instead of a linear one.
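
For anyone curious, here’s a minimal sketch of that control scheme. The pin numbers, the dead-zone width, and the pot being read on analog pin 0 are my assumptions for illustration, not the actual project code:

// Sketch of the two-motor scroll control described above (assumed pins).
const int potPin = 0;          // potentiometer wiper on analog 0 (assumption)
const int leftMotorPin = 9;    // PWM pin driving the left TIP120 (assumption)
const int rightMotorPin = 10;  // PWM pin driving the right TIP120 (assumption)
const int deadZone = 40;       // ignore small wobble around center (assumption)

void setup()
{
  pinMode(leftMotorPin, OUTPUT);
  pinMode(rightMotorPin, OUTPUT);
}

void loop()
{
  int pot = analogRead(potPin);   // 0-1023, roughly 512 at center
  int offset = pot - 512;         // negative = one direction, positive = the other

  if (offset > deadZone)
  {
    // Scale the pot travel onto the 0-255 PWM range for the right motor.
    analogWrite(rightMotorPin, map(offset, deadZone, 511, 0, 255));
    analogWrite(leftMotorPin, 0);
  }
  else if (offset < -deadZone)
  {
    analogWrite(leftMotorPin, map(-offset, deadZone, 512, 0, 255));
    analogWrite(rightMotorPin, 0);
  }
  else
  {
    // Centered: neither motor spins.
    analogWrite(leftMotorPin, 0);
    analogWrite(rightMotorPin, 0);
  }
  // A logarithmic curve could replace map() here for finer low-speed control.
}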

I have two paper rolling bars from an inkjet printer which have these extremely convenient gears at one end.  Gears which happen to have teeth which mesh with the gears on my motors.

At this point nothing is really put together, but Tims and I sat down and talked and once again I think he’s provided the solutions to most of our design problems.  The only one left of any significance is specific mounting of the paper rolling bars, and I think we’ve got a general solution there.

On the to-do list:

1. Show current work in class on Wednesday, solicit feedback regarding attachment options, bring up possibility of integrating Processing applet for digital scrolling.
2. Wednesday after class, finish physical mock-up.  Drill mount holes for, and actually mount, the motors.  Decide on working display area size.  Resolve bar mounting issues and figure out how to handle bar-scroll interface.
3. This weekend commence testing.  Problems are sure to arise, but if we’re lucky they’ll be small and manageable.
4. Obtain demo scroll.  I’m okay with demoing the entire thing with toilet paper, but I fear that a lot of the impact will be lost.
5. Clean up wiring.  This is purely optional, but I’d love to see the entire thing on a simple six wire ribbon cable.
6. Start looking ahead for this project.  Ability to identify the part that is being examined?  Memory?  Integrated controls between screen and physical display?

Let’s get cracking.

Thomas

New plan for the final

Monday, November 17th, 2008

It strikes me as somewhat ironic in light of my firm and detailed plans for the final that I’m going to end up doing something completely different.

I suppose it’s not that big of a surprise since I made no secret of the fact that I’d rather work on something with someone than do something on my own, and when Tims explained his plans to try building an automatic scroll roller I was interested. It probably helped that we’d had dinner the night before and had a long discussion about art and meaning and collaboration, but the project is an interesting one in its own right.

I was unfortunately out of town for most of the week (left Wednesday after class, just got back at 2am), but we had a very productive post-class lunch meeting before I left. I still need to touch base with him, but we’ve laid out a sort of rough plan for development, and I’m pretty excited about that.

Hopefully we’ll have some data by Wednesday about paper stress in different situations that we can use to actually design the device. (Tims had the clever idea of using toilet paper for testing. It is already on a roll and it is extremely fragile, which means that it’s easy to tell if the paper is under significant stress.)

Thomas

Working in groups is a lot of work

Monday, November 17th, 2008

I talked about this a bit when I discussed the Audio Mashup project, but the craziest part of doing our short film has been the people.

I honestly can’t say where most of the ideas came from other than “the group”. Other people may have better memories, but all I know is that we were sitting around and the ideas bubbled up from somewhere. And they’ve all been awesome and exciting, even the ones we had to set aside or not do because they didn’t cohere with what we had, or were too ambitious for the time available. So it’s been super-awesome.

Of course it hasn’t been all fun and games. There’s been some tension, especially during filming. This is, I suspect, inevitable in any group doing anything, and I’m definitely not complaining. As much as the experience has been about learning to produce film, the entire project has been one long lesson in compromise and work distribution. There were so many times when I felt that a job could be accomplished with slightly less polish and still suit our purposes, and the others didn’t. And we were all cranky and tired and just wanted to be done. But we managed to work things out every time, and that has been at least as important as our actual ability to make a short, film-like thing.

One of the higher stress things for me, and maybe for the others too, has been the fact that I’ve been out of town for most of the post-production week. I’d been planning a trip to Boston for this week for months, but that hasn’t really made me feel any better about being unavailable. Maybe the others haven’t had a problem with it, but I know that I’ve been stressed about flaking out.

In summary, it’s been a great experience. Not only has the project been fun in its own right, but all the interaction and group negotiation has been immensely valuable.

Thomas

Learning by doing – over and over and over

Monday, November 3rd, 2008

When it came time to pick groups for a three-week film project, I immediately asked Sara if she’d be interested in working together since I’ve wanted to do a project with her for a while. She quickly tapped Nobu and Fillipo to join us, so we ended up with a group of four, which turned out to be rather fortuitous. We agreed to meet on the following Thursday to kick around ideas and do some storyboarding.

When Thursday rolled around we got to talking. As is often the case with good collaboration, it’s not easy to reconstruct the discussion in terms of who suggested what when. I know we got to talking about recontextualization. At first it was with an idea toward filming the same scene twice in different contexts in a way that would make the actions, while identical, very different in meaning. We played with this idea for a bit, discussing layout and order. Would we do things sequentially, or would we rather split the screen and run the two scenes in parallel?

The talk of parallel viewing got us thinking, and somehow we started talking about doing something a bit different. Instead of playing with context for narrative purposes, we’d try something more technically experimental. What we settled on was filming each of us going through the same simple scene, and then intercutting those takes to create a sort of collage. Then, at a unifying moment, we would slide the screen into quadrants and have all four of us doing the same thing at the same time in parallel.

We refined this idea a bit, but decided that it was, indeed, what we wanted to do. This resulted in some very interesting story-boarding as we tried to figure out the best way to represent quad-screen layouts.

The story boarding process went pretty well for us. We ran through the scene we wanted to record and named each shot. Then, with our list of named shots in front of us, we started doing the story boards. This helped quite a bit by providing context for where we were going as we set up any given shot.

With the story boards done, we realized that our project was pretty ambitious in that we wanted to film in four separate locations. If we were going to make that happen, then we definitely needed to get started early. Thus we agreed to meet on Monday, the day before we officially got our filming assignment, and get one of our locations taken care of. Which we did.

I feel that, in many ways, this particular project is going to be an exceptionally good learning experience. By filming four separate times, we each have the opportunity to do the various tasks involved. That gives us a wider range of experience than we might have had otherwise.

Additionally, four separate locations means four entirely different instances of filming. Considering how many mistakes we made at our first location, mistakes we want to correct, four different attempts should mean we have more chances to learn and iterate on our skills.

Overall I’m pretty excited to see what happens when we meet again on Thursday.

Thomas

Planning for the final

Monday, November 3rd, 2008

Having found the dancing cube project to be surprisingly compelling, I’ve decided to use it as the basis for my final. This will entail a number of feature sets rolled out in series.

1. Reconstruction of the servo assembly. This will involve the replacement of the burned out servo motor, some shifting of the linear gear assembly to increase the range of vertical motion, and potentially a redesign of the cube’s skeletal structure for greater stability.

2. Migrating the mic control code to the servo control board. This may involve a general rewrite of the mic code; we’ll see. The ultimate goal is to get the entire thing running on a single micro-controller and to debug the audio interpretation code to properly configure itself for ambient sound levels (see the sketch just after this list for the sort of auto-calibration I have in mind).

3. Introducing a better on switch. This actually ends up being rather complicated since I want to do this with a cuprox switch. I anticipate using a solenoid to flip a bigger physical switch. This is actually a pretty complex change since it involves playing with the cuprox switch and solenoid as well as completely redesigning the power system (so that it isn’t all regulated by the microcontroller) and working up some shutdown code so that when the cube turns off it reaches its idle state instead of simply dying.

4. Switchable face-plates. Time permitting, I want to set things up so that the cube has swappable face plates. Plates have different faces on them, and correspondingly different dance patterns stored on EPROM chips plugged into them. This will involve another re-write of the code to load patterns strictly from EPROM, as well as figuring out how to work with EPROM and integrating some sort of system to detect whether a faceplate is plugged in. If it isn’t, the system needs to fail to start, or go to idle and shut down (if the faceplate is pulled while the system is already under power).

5. Potential redesign of the cube base. Again, time permitting, I think I want to take advantage of the increased range of vertical motion to increase the size of the cube’s base, the part that does not move. The original design called for it to be about four times its current height, and while I don’t know if I’d want to go quite that big, I certainly feel like the base should be more pronounced. Additionally, a larger base should permit all of the control and power systems to be built into it in order to keep them hidden from view.
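
Here is the sketch mentioned in step 2: a rough idea of what “configure itself for ambient sound levels” could look like. The mic pin, sample count, and margin are all hypothetical placeholders; the real mic code isn’t reproduced here.

// Hypothetical ambient-level auto-calibration (step 2 above); pins and values assumed.
const int micPin = 0;       // assumed analog pin for the mic/amp output
int ambientLevel = 0;       // average room noise measured at startup
int loudThreshold = 0;      // level that counts as "loud" for dancing

void setup()
{
  // Sample the room for about a second and average the readings.
  long total = 0;
  for (int i = 0; i < 100; i++)
  {
    total += analogRead(micPin);
    delay(10);
  }
  ambientLevel = total / 100;
  loudThreshold = ambientLevel + 100;  // margin above ambient (assumed value)
}

void loop()
{
  if (analogRead(micPin) > loudThreshold)
  {
    // Sound is well above the room's baseline: trigger a dance move here.
  }
}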

If all goes well, I can accomplish one of these per week. Steps 4 and 5 are optional, allowing me some wiggle room if something goes terribly wrong. Additionally, steps 1 and 2 can be done with parts on hand, meaning I can get to work immediately and allow plenty of time for the parts I need to be ordered.

In theory this will allow me to have a pretty cool little dancing cube by the end of the semester.

Thomas

Lazy Sunday

Wednesday, October 29th, 2008

This past Sunday I did something that I haven’t had time for since I moved in in August. I sat down and watched DVD commentaries and featurettes. Being a huge media junkie, my DVD collection, even in its currently-abbreviated state, is significant. Generally I will have seen a film at least twice within a week of obtaining it, but then I tend to take a break. If they are short or limited, I will often also watch the special features during one of the viewings. However, and to my delight, many DVDs come with commentary audio tracks these days. Often more than one. Since my time is limited, I generally put off viewing these until a later date.

I tend to try to block out a day or weekend and watch a large number of films with commentary on at one time. It often turns out that the commentary is worth the price of the DVD all by itself. You learn so much about the film-making process generally, and the construction of the film viewed in particular, that the entire package becomes that much more enjoyable. Among the stack I worked my way through this weekend was the brilliant Once Upon a Time in Mexico. Rodriguez’s commentary shed some fascinating light on the process of filming this story, but what I found most fascinating was hearing an explanation of the impact that HD filming, a transition I’m too young to really remember, has had on the film-making process.

Being the age I am, and having gotten interested in film when I did, I never really had a strong understanding of the conservatism that physical film tended to impose. My mindset is that of someone used to informational abundance. I think in terms of storage so cheap it might as well be free. Computer hard drives are almost down to $0.10/GB as I write this, for instance.

While there are a number of places where I am fully aware of the implications of the shift from scarcity to abundance, I found it to be an extremely useful thing to be reminded that the fields that use information are more varied than I tend to realize. This got me thinking about just how many fields have been using information which I should try to get a better understanding of in order to fully appreciate the shift to virtually-free information storage. So I compiled this little non-exhaustive list.

  • Media distribution (music, television, film) – in the face of virtually-free storage, especially as capacities continue rising, the artificial limits on offering size disappear. For years the amount of stuff a “movie” or a “CD” contained was capped at the size limit, and generally approached that limit. I suspect there are some interesting changes in the way this is approached as data storage becomes so large that virtually no one has the content to fill it.
  • Personal libraries – In a number of ways, personal libraries are a sort of two-level luxury. You must be able to afford the media that populates them, but you must also have access to available physical space to store them in. This second requirement is beginning to disappear (allowing, for instance, me to keep an extensive library in a 10′x10′ bedroom).
  • Versioning – The current trend is that you purchase the final version of anything. Storage capacities are growing large enough that it’s possible to purchase all versions of a thing. It’ll be interesting to see if anyone does anything with that.

Of course the problem with generating these sorts of lists is that it requires me to know enough about a thing to recognize some of the impact that virtually free storage will have. You’ll notice that my list is populated by media concerns, and that makes sense because I’m a huge media junkie. Media is always on my mind. It is for the very same reasons that the revelation about film never occurred to me. Further, without someone pointing it out, the film revelation never would have occurred to me. I didn’t start working with film until after things went digital. By the time I got involved, the tyranny of the cost of film had fallen by the wayside. Filming had become free in terms of supplies, assuming you had the capital and the manpower. So now I need to start poking around and see who I should meet and get to know. I’m sure there are tons of people out there who have extensive experience in fields which are and will be revolutionized. I just need to meet them.

A project, perhaps, for another day.

Thomas

Audio Mashup – Closing Doors

Tuesday, October 28th, 2008

By far the most interesting part of the audio assignment was working with Brian.  I’d done work with audio in the past.  Nothing quite like this, but some similar stuff.  Most of my audio experience has been in cleaning and trimming recorded lectures for distribution, so while I was familiar with layering tracks, I hadn’t done much of it.

Still, from a technical standpoint this wasn’t all that new, which brings me back to working with Brian.  I think that Brian is the person I’ve partnered with in comm lab who thinks least like I do.  Far from being a bad thing, this ended up pushing my creative boundaries a bit in ways I hadn’t expected.

I’ve long recognized that my thinking is highly analytical.  I tend to be focused on breaking things down into chunks and figuring out how those chunks interact.  Brian is a much more intuitive thinker, I suspect.

Before we had agreed to work together I had gone ahead and grabbed a series of audio samples from my morning routine.  I didn’t know if we’d use them, but I thought it would be useful to have them on hand.  I had originally envisioned a sort of linearly-sequential audio montage of my morning as a sort of narrative piece.  But when Brian and I sat down, it quickly became clear that while he liked the audio samples, he had other ideas.

Brian wanted something more poetic, and I found it extremely educational to sort of sit back and let him take over creative direction.  The first thing he suggested was taking the audio out of its context and using it to construct something like an instrumental piece.  This actually seemed like an awesome idea, and not one I would have had on my own, so I was pretty enthusiastic.  However, my vision of our hypothetical instrumental piece and Brian’s vision didn’t really match up.

Again my analytical side kicked in and I was thinking about controlled rhythms and percussion loops, just using the sounds of my morning (a project I still think could be a lot of fun).  Brian was envisioning something less traditional.  Or, at the least, less within my traditions.  Again, curious to see where this led us, I waved for him to take the lead.

The piece we ended up with was definitely not the sort of thing I would have produced on my own.  It’s cacophonous and dominated by a verbal track, neither of which are things I would have been drawn to alone.  Yet it’s also a very interesting piece.  Perhaps, in part, because it’s not what I would have done.  There was a lot more “that sounds right” and “that feels right” in our execution as a team, and a lot less of the “that looks right” that I would have been guided by watching the waveforms on my own.

In a lot of ways this is what I came to ITP for: to collaborate with people who would stretch me.  And while I doubt that, even after this project, I’d do things Brian’s way on my own in the future, I enjoyed working with him quite a bit.  And without further ado, here’s the ever-so-exciting piece we did.

Closing Doors

Thomas

The digital Etch-a-sketch demo and guts

Wednesday, October 15th, 2008

I talked about the high-level stuff behind the digital etch-a-sketch earlier this week. Here’s a demo and then a breakdown to look at the guts of the thing.

First of all, here’s the very simple one-minute demo. It is… exciting.

Holding the camera with one hand meant that I couldn’t demonstrate the way that the program, like a real etch-a-sketch, renders diagonal lines as pixelated. That part was cool.

Anyway, let’s break this down. Step one was mounting my two control potentiometers to a fixed surface so that they could be turned without needing to be held down. I snagged some crappy stuff out of the scrap box and drilled some holes. Then, using some nuts and washers, I secured my potentiometers through the holes.

Mounted potentiometers

I had previously attached headers to these potentiometers, so it was pretty easy to simply plug them into the bread-board. Ditto for my accelerometer. The vast majority of this project was in software, so the circuit is super-simple.

Basic circuit

With that done, all that was left on the hardware side was to wire it to my Arduino. Since I wanted the etch-a-sketch controller to be relatively free-standing, I used some long wires for this. (Note that I’m only drawing data off one pin of the three-axis accelerometer.)

With wires!

I had to decide what data I wanted to process on the micro-controller, and what data I wanted to process in the applet. I decided to simply send the raw potentiometer data to the applet so that it was easier to resize the applet window and still keep maximum resolution on input data. I also decided to simplify the shaking/not-shaking decision enough to allow it to be calculated on the micro-controller and then simply forward a boolean value.

I also needed to calibrate the accelerometer to account for gravity. I wanted it to calculate from a value of 0 while at rest, and different people might hold it at different angles. Here’s the exciting Arduino code:

int baseZ;  // resting z-axis reading, captured at startup

void setup()
{
  baseZ = analogRead(2);  // calibrate: treat the accelerometer's at-rest value as zero
  Serial.begin(9600);
}

void loop()
{
  int leftPos = 1024 - analogRead(0);   // left knob, reading inverted
  int rightPos = 1024 - analogRead(1);  // right knob, reading inverted
  int zDiff = analogRead(2) - baseZ;    // how far the z-axis has moved from rest
  Serial.print(leftPos,DEC);
  Serial.print(',');
  Serial.print(rightPos,DEC);
  Serial.print(',');
  if(zDiff > 150) Serial.println(1,DEC);  // shaking: tell the applet to erase
  else Serial.println(0,DEC);             // not shaking
}

Note the if-else at the very end of loop(). This handled sending the correct value to the applet about shaking.

I had planned to have a variably-sized window in the applet with coordinate values scaled at start-up, but I got lazy. I decided that since I had a resolution of 1024×1024 on the potentiometers, I would have a drawing resolution of half that: 512×512. I defined my blank-state for the applet and wrote a function to generate this state. The function was called once during setup() (instead of duplicating the data in setup() itself) and then was called any time the shake boolean from the Arduino was true. Here’s that code:

import processing.serial.*;

Serial serialInputPort;
int xPos = 5000;  // 5000 is a sentinel meaning "no previous cursor position yet"
int yPos = 5000;
int[] inputValues;

void setup()
{
  serialInputPort = new Serial(this, Serial.list()[0], 9600);
  size(544,544);
  clearBackground();
}

void draw()
{
}

void clearBackground()
{
  // Red border with a white drawing area inside it.
  background(250,0,0);
  fill(250,250,250);
  rect(15,15,514,514);
  // background(250,250,250);
}

void serialEvent(Serial serialInputPort)
{
  String inputString = serialInputPort.readStringUntil('\n');
  if(inputString != null)
  {
    inputString = trim(inputString);
    inputValues = int(split(inputString,','));
    println(inputValues[0] + "," + inputValues[1] + "," + inputValues[2]);
    // Halve the 0-1023 pot readings to fit the 512x512 drawing area.
    int inputX = (inputValues[0] / 2) + 16;
    int inputY = ((1023 - inputValues[1]) / 2) + 16;
    if(inputValues[2] == 1) clearBackground();  // shake flag set: erase everything
    else
    {
      if(xPos == 5000 || yPos == 5000)
      {
        // First reading: just record the position, don't draw a line from the corner.
        xPos = inputX;
        yPos = inputY;
      }
      else
      {
        // Draw from the previous cursor position to the new one, then update.
        stroke(10,10,10);
        strokeWeight(2);
        line(xPos,yPos,inputX,inputY);
        xPos = inputX;
        yPos = inputY;
      }
    }
  }
}

And that’s that!

Thomas

McLuhan, hot and cold, and context

Tuesday, October 14th, 2008

It’s been a while since I did anything with McLuhan. He wasn’t really part of any of my previous academic traditions so I haven’t pored over him as I have with other people. (Though, perhaps ironically, I found myself defending his prescience the other day.)

Still, I find that McLuhan is one of those people who challenges me in different ways each time I approach him. In general I feel as if he’s, at the macro level, confused. For instance, I think he correctly identified the importance of “electric speed”, but ended up improperly understanding why it is important. That most certainly does not render his observations any less useful.

This read-through I found myself, unsurprisingly for those familiar with the thinking I’ve been doing about fiction over the past couple of years, struggling with McLuhan’s differentiation between “hot” and “cold” media.

For some time I have found McLuhan’s classification of one medium as hot and another as cold to be massively and problematically arbitrary. This led me to dismiss the entire classification system as utterly useless. On my most recent reading I decided that his classification was indeed arbitrary, but that the categories themselves are potentially very useful. As with many things in McLuhan’s work, it’s hard to know precisely what he means by “hot” and “cold”, and I tend to find myself going through five different interpretations in as many minutes. However, I find that most (if not all) of the ways I look at it circle around a single theme. The fact that I’ve been interpreting a lot of things as circling around this theme may mean that I’m reading into things, but I’m okay with that.

It all comes down to context for me. I take it that “hot” media are those which contain more context within the media itself. Movies are, to a strong degree, self-contained. You come to a film with no exterior context and the film provides everything you need to understand it. (This is not entirely true since clearly there are cultural and genre assumptions at work, but it is relatively true.) “Cold” media, on the other hand, is full of gaps. Gaps that the reader/viewer has to fill in in order to get the message encoded within the media. This is the realm of conversation and commentary. In order to understand such things you must come to them with far more context than is required for a “hotter” medium such as film.

If this reading of “hot” and “cold” is close to what McLuhan intended, then it suggests that his classification of various media as one or the other must be arbitrary, because there is nothing inherent in the technological form of film that necessitates that it provide its own context, and there is nothing inherent in the technological form of television necessitating that it leave gaps. In fact, it seems that television has shifted to be “hotter” than not. I suspect that there are technical aspects of various media that make them better suited to hotness or coldness, but nothing that constrains their use in the opposite mode.

McLuhan properly identifies hotness and coldness as a continuum rather than a dichotomy, but he fails to recognize that any piece of media’s position on that continuum is flexible.

Thomas

Hearken back to the days of your youth

Friday, October 10th, 2008

I attempted to get a head-start on this project last week by hooking up a pair of potentiometers and doing two inputs. This ended up breaking my perfectly fine single-input project. However, it was not for nothing. Looking at the control setup I had, even if it wasn’t working, made me think of an Etch-a-sketch. Since I was going to have to make a multiple-input project anyway, I figured that this would be the way to go.

The first thing to do, of course, was figure out what inputs I needed. A regular Etch-a-sketch has a knob each for X and Y position, that much is obvious. But it also erases itself when you shake it. The knobs were obviously going to be potentiometers, but I had some options with the shaking. I could abstract it out to a simple button, or I could spring for an accelerometer.

The three-axis accelerometer available at the bookstore isn’t precisely cheap, but knowing my interests I decided that I was pretty likely to want to build something else using the thing later, so I grabbed one.

With my two potentiometers and my three-axis accelerometer in hand, I went about planning my project. First of all, if I was going to do a controller like this, I wanted it to have an actual housing. I mocked it up with a piece of paper folded a few times to give it some sturdiness. I poked holes for the potentiometer knobs to go through and secured the pots with small nuts and washers. I tested the setup to make sure that the pots wouldn’t spin in place, and once satisfied I set off to find some more serious materials.

To the scrap bin in the shop with me! I found some very thin wood laminate. I wouldn’t use it as an actual construction material normally, but it was sturdy enough to hold a pair of pots in place. I drilled a pair of holes, had to widen them with my pliers, and then secured the pots with nuts and washers again. This would serve as the front/top of my controller. Since I had to do some wiring, I figured that my small breadboard would provide a great back for the controller.

I got the pots wired down, and then set the accelerometer directly into the breadboard. Since I only really cared about shaking, I just hooked up a single axis from the accelerometer to my micro-controller. I picked the Z-axis because I figured that most shaking would default to “up and down” motion. The pots, when pressed against the breadboard, left just enough space for my wires to have wiggle room on the breadboard. I cut five long pieces of wire (power, ground, pot1, pot2, and z-axis) so that I could have the controller free of the Arduino. I then plugged the wires in and wrapped them around to the back of the breadboard, where I taped them down to provide some stress reduction. Wires secured, I then taped my wood laminate top down to my breadboard and completed the controller.

With the hardware thus finalized, I went to work on the software. I started with the micro-controller software.

First, I had to ensure that my hardware was properly assembled. I set up a simple series of print commands to check for inputs. Everything worked. Next up I had to do two things: 1) set up some on-board code to determine when the controller was being “shaken”, and 2) output the data from my sensors over serial.

The shake sensing was actually pretty easy, but worth explaining. Since I had worked with value-ignorant sensors before, I knew the first thing to do with the accelerometer was to establish a baseline. When you power up a value-ignorant sensor like an accelerometer it begins providing data right away, but that data is rarely 0. In order to compensate for this I established a global variable which I set equal to the z-axis input during setup. Subtracting this value from future z-axis inputs would offset the input and result in 0 if conditions were unchanged.

Then I sat and watched my serial monitor while I shook the controller. Using this highly scientific method I determined that the z-axis differential was about 150 (or -150, depending on which direction the controller was moving) when I felt like I was shaking it hard enough. Since shaking is two-directional, I figured that just checking for the positive value would be good enough.

With my shake value established, I went to set up the serial output. I decided on a comma-separated, line-break-terminated schema: Pot1-comma-Pot2-comma-SHAKE. Each of these was output with a print command except for SHAKE. As a simple boolean it would be 0 when the controller was not being shaken, and 1 when it was. This was accomplished with a simple if conditional so that if the z-axis differential were greater than 150 the system would println 1, otherwise it would println 0.

This took care of the micro-controller coding, and some careful watching of the terminal during testing proved it functional. It was time to move on to the actual application in Processing.

The first thing to do was to make sure that the application could properly receive and parse serial data. I skipped straight to the serialEvent code. A standard readStringUntil(), if != null, trim(), and split() combo produced my input array. Since there wouldn’t be any drawing to be done if there was shaking going on, the first thing to do was to check my shake boolean. If it were 1, reset the background, otherwise draw stuff. I had intentionally had my shake boolean in the final position of my serial string. That way if only a partial message were received, the array would hold a null value in the final position (thanks to split) and not erase things since null != 1.

With the shake and erase functions set up, it was time to figure out how the drawing would happen. I knew that what I wanted to do was draw a line from the last cursor position to the current one. That meant I needed variables to track the previous position (as the serial input wasn’t providing them). Global variables were the obvious choice. Then a simple line command using the previous X and Y data and the current X and Y data on every update would produce the lines wanted, and then resetting the global variables to the current position made sure the proper line would be drawn next.

This ended up setting up a potential problem. The first time the serial event function ran the “original” X and Y values would be 0 since that’s what they initialize to. This would result in a long line drawn from the corner to whatever position the pots started out in when the application started.

There were two possible solutions to this problem. The “good” one and the “lazy” one. The good one would be to have the setup function call for a serial input and to set the initial X and Y variables to that. I didn’t really want to deal with writing a hand-shake protocol, so I went with lazy. The lazy solution was to set the initial X and Y values to numbers too high for the pots to ever report (I picked 5000, though any number smaller than 0 or larger than 1023 would have worked). Then, down in the part of serialEvent function where the drawing happened, I wrapped the relevant code in an if-else set up. If X or Y were 5000, then just set X and Y to the current reported values. In all other cases do some drawing. Since the pots could never make X and Y 5000 again, that part of the conditional would only happen once. (Though, if I had it to do over again, I’d probably go with -1 instead of 5000.)

And there you have it! A thing that’s sort of like an Etch-a-sketch. I hope to have some nice demo videos up soon.

Thomas

Stop-motion is hard work

Monday, October 6th, 2008

Zach and I ended up working together on our stop-motion project. We tossed some ideas back and forth, and at first we were thinking about doing something involving time-travel by using onion skinning in the actual video file.

After some more discussion we decided to do something different. Zach had done some stop-motion work for fun a while back, and he had used aluminum foil for his characters. This seemed like a pretty cool idea so we sat down and did that. I’m rather pleased with the results:

A couple of things that were interesting here:

1. Stop-motion is incredibly time-consuming. We shot just under 500 frames, and it took us just over three hours to do. It probably would have been slightly faster with a better pre-production setup, but most of that time was taken up by actual posing work so I doubt it would have been much faster.

2. Since neither of us had Macs, I used a piece of open-source software called StopMojo. It’s nothing fancy, just a simple Java-based image capture program that tracks frames and allows for simple onion-skinning. The frames are stored as JPG images. For post-production we simply copied that folder of JPGs and imported them into iStopMotion in order to duplicate frames as needed, and to delete or move things shot out of order.

3. We had assumed at first that we would be using 15 frames per second, but once we took a look at that, it felt too fast, too smooth. We ended up dialing things back to 12 frames per second which felt a lot closer to what we were going for.

4. I was shocked by the size of the final video file. The 500 frames we ended up using constituted about 4MB of space on the disk. The just-over-40-second .mov file we ended up with was over 600MB. With no audio track. I have to assume that it got saved as uncompressed video or something, because I’ve never seen 40 seconds take up that much space before.
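
Back-of-the-envelope, that size is about what uncompressed frames would take. Assuming, purely for illustration, 640×480 frames at 24 bits per pixel (the actual capture resolution isn’t recorded here):

\[ 640 \times 480 \times 3\,\text{bytes} \approx 0.92\,\text{MB per frame}, \qquad 500 \times 0.92\,\text{MB} \approx 460\,\text{MB} \]

At a slightly higher capture resolution that estimate lands right around the 600MB we saw, while the 4MB folder of JPGs shows just how much work compression was doing for us.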

Thomas

Comics and closure

Saturday, October 4th, 2008

This is a somewhat belated update, but I felt it needed to be documented.

For this past week our comm lab assignment was to read Scott McCloud’s excellent Understanding Comics and use that reading as the basis for a 4 to 10 panel comic strip.

This was a paired assignment, and I ended up with the excellent Catherine White as a partner. Having read McCloud before, and having reread the book before our first meeting, I had some things to suggest. Once again demonstrating that I am not an artist, my idea was to play with closure. Closure is the way that the mind fills in the parts of the narrative not shown. When the murderer raises his knife and stabs it down and the scene cuts, your mind provides closure, filling in at least a general idea of what happened. In comics closure is a big deal because it happens between almost every set of panels. The mind must fill in what happens between one image and the next.

Given the specific way I tend to be interested in narratives, playing with expectations like this was a no-brainer. My specific idea, which interested Catherine, was to start with a simple two-panel comic which suggested a simple, uninteresting transition. We went with an uneaten bagel in frame one and the same bagel with two bites taken out of it in frame two. Hopefully the audience would fill in the mundane eating of the bagel for us.

Then, once the audience has this story in mind, we reveal the true sequence of events. Clicking on the strip reveals a longer, 7-panel strip with the same first and last panels. However, the expanded comic reveals a much less mundane sequence of events that led to those two bites being taken.

There were some mistakes in execution, of course, but I was rather pleased with things overall.

Of course I’m not an artist, or to whatever degree I am, my medium is analytical writing, so I feel like I’ve conveyed my point at least as well in this explanation as I did in the comic, but it was still an interesting and satisfying project.

For those interested, the strip is here. Click on it to transition between the two versions.

Thomas

Mechanical reproduction, another thought-provoking reading

Sunday, September 21st, 2008

So for Comm Lab we’ve had a number of interesting readings. This week was Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction”. Actually, I’d read this one before, but it was quite a while ago. In fact it was waaaay back in one of my earliest philosophy classes: Dr. James Shelley’s “Meta-aesthetics”.

It’s interesting to come back to it so many years, and a sociology B.A., later. After all, the piece is incredibly Marxist, and the analytic school of philosophy isn’t all that Marxist. But all that sociology reading gave me a much better grasp of Marxist ideas and direction than I had before. I actually found this reading more thought-provoking as a result.

Benjamin actually has a lot of ideas packed into this tiny little essay, and he expresses them in a very Continental way. It’s very strongly about politics, and the ideas themselves are argued in historical rather than analytic terms. There are many references to French thinkers too, and that’s generally a good tip-off. Of course with it being such a Continentally-styled piece of writing it’s hard for me to judge whether Benjamin actually misses the key to all of his discussion, or if he just says it in a way that I don’t parse well. I suspect the former. (As with Ong, I’m being uncharitable and assuming a mistake in thinking rather than one in communication.)

While Benjamin presents his argument as being about mechanical reproduction generally, I suspect that he’s really concerned with a very specific kind of mechanical reproduction: film and, perhaps, recorded music. Sure, he pays lip service to the changes introduced by other forms of reproduction, and these shouldn’t be taken lightly for they are certainly significant, but most of his arguments don’t seem to be about those in general, but about film in particular.

He talks a bit about aura (a poorly chosen piece of specialty jargon), and that’s certainly a general concern for all mechanical reproduction, but it seems that the real thrust of his argument is about the nature of viewing: what he seems to want to call the shift to “political” viewing of art. (I’ll talk a bit more about aura later.)

The problem with all of this is that he seems to attribute both more and less than he should to film. He hails it as the first medium of its kind, and justifies this by pointing to the mass audience it is intended for and the way it sort of requires a passive, rather than engaged, audience.

I actually don’t feel like I have the time or energy to do a deep analysis of all this, so I’m going to break it down to a couple of quick points.

1. Film audiences operate at two scales: everyone watching a specific screen (and thus able to directly interact with one another) and everyone watching the film on any screen (no matter how separated).

2. The key to film’s differences is, as Benjamin brushed across, the way that it mediates time for the audience. This makes it like music more than like any form of static art.

3. The other key actually is mechanical reproduction. It is the fact that film must be a mediated experience, one that cannot interact with or adjust to its audience, that makes it so different.

4. Combining 2 and 3 actually does give us a new art form of sorts in that prior to film, mediated art was static. (There might be a very interesting exception here in recorded music.)

5. Benjamin’s thoughts about architecture are brilliant. The suggestion that it’s an art form properly appreciated “tactilely” (I’d say kinesthetically) is extremely good and opens up a number of new questions.

6. He does a terrible job of justifying his ideas about art becoming “political”. In fact I feel like he has too much unexamined Marxism in his thinking in general.

That’s it for now.

Thomas

Electronics are apparently primarily info-centric

Tuesday, September 16th, 2008

When we were discussing precisely how to spend our time observing people interacting with technology, Neo suggested, and I quickly agreed, that walking up Broadway from ITP to Times Square would likely be pretty interesting. So that’s just what we did. We had planned to hit the road by about 15:00 on Sunday and catch the R train back to the program around 15:50 or so. We both ran a bit late, so we didn’t actually start walking until just before 16:00.

The assignment was one that I had thought would be interesting when it was assigned. I like watching people, after all. But I was surprised by just how enlightening the whole thing was in some ways.

Earbuds, everyone has them so you don't notice that anyone does

1. Earbuds are so ubiquitous as to be invisible. Both Neo and I had digital cameras and we sort of aimed to get shots of everyone using electronic devices as we walked. We realized that this would be difficult, but initially I had thought it would be due to simple volume. I was surprised to find out differently. See, the real difficulty came from the fact that I, personally, have reached a point where I simply don’t notice earbuds. This isn’t because they’re unobtrusive. After all, so many of them are that high-contrast white. No, what has happened is that so many people wear the things that I’ve stopped seeing them. I find this revelation to be a fascinating sort of commentary on the way ubiquity becomes invisible as well as a sort of insight into my own psyche.

Notice that there are no watches here

2. Watches: generational gap. Early in our walk Neo and I talked about how people didn’t seem to be wearing wrist watches. This makes sense since people are so likely to be carrying an important device that incidentally tells time (like a cell phone) that there’s no need for a dedicated time-telling device. There were two major exceptions to this trend toward fewer watches: the elderly, and the professionals. The elderly, I suspect, have wrist watch use so deeply ingrained in their habits that continued use is almost inevitable. It’s simply another instance of a generational/technological gap. The professionals were something else altogether. It was quite fascinating to observe all their very nice watches. Watches that were really more fashion accessory than time-telling tool. This suspicion seemed to be pretty well confirmed by the observation of a businessman with a rather nice watch who pulled out his cell phone to check the time.

Visual information broadcast

3. Electronics as information sources. By far the dominant use of the electronic devices we observed was transferring information, generally to the user. Cell phones and MP3 players are obviously informational devices, but it turns out that many other devices are too. Traffic lights, both vehicle and pedestrian, are, in an important sense, purely informational. They don’t actually control the flow of traffic, however we may say it. After all, they’re simply colored lights. What they do is signal to everyone in visual range precisely what rules people are expected to follow in an area. Other overtly information-conveying devices include the digital readouts on buses declaring their routes and the various electronic advertisement signs.

Private aural space

Of special interest is the division of public and private information devices. The majority of private devices are primarily aural: MP3 players, cell phones, and the like. The majority of public devices, meanwhile, are visual: signs and signal lights. This makes sense, since the aural channel is more intimate and intrusive but degrades quickly with distance, while the visual channel is less intrusive and carries much farther. That said, I feel that there are some interesting things to play with in this sort of aural/visual, private/public divide.

Money is an oft-overlooked form of information technology

An interesting revelation that this line of thinking led to is that the ATM is also primarily a sort of information dispensing machine. Beyond the obvious functions involving relaying your bank balance to you, cash itself is really an information carrier rather than a good. A highly specialized information carrier, to be sure, but still just encoded knowledge.

An island of electronic silence

4. The subway is a surprising zone of electronic silence. Not that I hadn’t noticed this before, but I had never given it much thought: a lot of the quiet that one finds on the subway is due to the fact that there is no cell reception down there. In any similarly crowded public space with cell reception you are virtually guaranteed to be an unwilling participant in at least half a dozen conversations taking place on cell phones. There are numerous people on the subway with books, newspapers, and magazines. That much physically printed material is rare these days outside of a bookstore. After all, your average coffee shop, once a bastion of reading the printed word, contains more glowing-screened laptops than books these days.

Overall the observation exercise was an interesting one. Recognizing the dominance of electronic devices used for information has gotten me thinking quite a bit about what other possible uses we might be missing out on. Clearly we know that these things are good with data, but surely there’s more to them than that.

Social interaction online

Monday, September 15th, 2008

For comm lab this week we were given three articles to read. The articles in question were “The Trolls Among Us” and “Brave New World of Digital Intimacy” from the New York Times Magazine and “CIA, FBI push ‘Facebook for spies’” from CNN.com.

All three articles deal with the intersection of real social life and the internet. I don’t really have much of a reaction to the articles. They were interesting, to some degree, but as someone who spends far too much time reading and thinking about social-technological interaction there wasn’t really anything new there for me.

The trolling article was really more of a human interest piece on Jason Fortuny than a real attempt to understand trolling. In fact, both of the articles from the New York Times Magazine had the same conversational tone to them. They were fine for explaining things for people who had no background, but there was no real academic rigor or serious theoretical thought.

Also, one of the articles capitalized danah boyd’s name, which is one of those things that always bothers me.

Basically the articles were decent overviews, but for people with serious interest in the subjects they dealt with, there just wasn’t much there.

Thomas

More work with analog and digital

Sunday, September 14th, 2008

When last we saw our hero (me), we had a sort of sketchy resistometer. The code is set up poorly and the LEDs don’t work properly. Let’s see if we can fix that.

The first thing we need to go over is the interaction of parallel and serial electric circuits. In our previous design the LEDs were all in parallel, and that parallel array is connected to a single resistor. It looks something like this…

Diagram of parallel LEDs

Here’s the problem: each LED is powered by its own micro-controller pin, but all of them share that single resistor on the way to ground. As each LED comes online, the current flowing through the parallel branches adds up in the shared resistor, and (thanks to Ohm’s law) the voltage dropped across that resistor rises. That leaves less voltage, and therefore less current, for each individual LED. The single resistor becomes a bottleneck that dims every LED a little more each time another one lights.
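
To put rough numbers on it (the resistor value and LED specs here are assumed for illustration: a 220Ω shared resistor, 5V pins, and a roughly 2V forward drop per LED):

\[ I_{\text{one LED lit}} = \frac{5\,\text{V} - 2\,\text{V}}{220\,\Omega} \approx 13.6\,\text{mA} \]
\[ I_{\text{each of five LEDs lit}} \approx \frac{13.6\,\text{mA}}{5} \approx 2.7\,\text{mA} \]

Roughly the same total current flows through the shared resistor no matter how many LEDs are lit, so five lit LEDs each get only about a fifth of the current one lit LED would, which is exactly the dimming in the video. Giving each LED its own resistor keeps the branches independent.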

The solution is to give each LED its own resistor. Because I’m lazy and didn’t really want to cut a whole bunch of resistors, I scaled the design back from 8 LEDs to a mere 5. This meant rewriting the code with new intervals, but I had to do that anyway due to the greater-than/less-than confusion in the last version. So here’s my redesigned LED array:

LED array

Then I reconnected my input device, the potentiometer. Which excitingly looks like this:

The exciting new circuit

Then the code needed to be rewritten, which was pretty easy:

void loop()
{
  int inputValue = analogRead(5);

  if(inputValue < 200) digitalWrite(6,HIGH);
  else digitalWrite(6,LOW);
  if(inputValue < 400) digitalWrite(5,HIGH);
  else digitalWrite(5,LOW);
  if(inputValue < 600) digitalWrite(4,HIGH);
  else digitalWrite(4,LOW);
  if(inputValue < 800) digitalWrite(3,HIGH);
  else digitalWrite(3,LOW);
  if(inputValue < 1000) digitalWrite(2,HIGH);
  else digitalWrite(2,LOW);
}

And then we boot it up and we get this lovely little thing:

Exciting, right?

From here on out I'm going to be working on getting this thing converted from a resistometer to a decibel meter which reads the volume of ambient noise. The output, and even the code, will probably be mostly the same. The hard part, or at least the complicated part, is figuring out how to use a voltage-producing sensor instead of a variable resistor as an input. I've got some emails out about that. We'll see what happens.

Thomas

Analog input on digital circuits

Sunday, September 14th, 2008

Digital circuit design is a really useful skill for someone who intends to be building electronic devices. Understanding the logic of binary states is something you certainly need if you want to do anything with a micro-controller, and if you want to integrate complex software controls with your tiny devices you do want to use a micro-controller.

However, most of the world is analog, not digital. While there are certainly a lot of interfaces you can design that are purely digital, there are many sorts of interactions that are analog only. Thus, in order to do some of the really interesting stuff in electronics design you need to understand analog inputs and how to handle them.

The first thing to understand is that while you may be using analog devices, if they are connected to a micro-controller then their inputs are being converted to a digital format. So while the number of states an analog device can have is actually infinite, when building circuits it is limited by the precision of your micro-controller’s analog inputs. The Arduino uses a 10-bit analog input system, which allows for 1024 (0-1023) possible states. Roughly: the Arduino can measure analog inputs to within about 0.1% precision.
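
To make that concrete, here’s a tiny sketch that reads the 10-bit value and scales it back to a voltage. (The sensor being on analog pin 5 and a 5V reference are assumptions for the example.)

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  int raw = analogRead(5);             // 0-1023, the raw 10-bit reading
  float volts = raw * (5.0 / 1023.0);  // scale back to the pin's 0-5V range
  Serial.print(raw);
  Serial.print(" -> ");
  Serial.println(volts);
  delay(100);
}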

In order to play around with this I decided to set up a series of LEDs (that is, multiple outputs) to indicate variable resistance levels. More LEDs light up as resistance increases.

Step one is setting up the LEDs. This is pretty simple. You might notice that I’ve got a single resistor on the ground connector rather than one for each LED. The idea here is to take advantage of the series portion of the circuit to save on resistors. This should result in the same effect as a separate resistor for each LED. (Note: This is not actually true, as we’ll see later on in the post.)

LED array

Next we need to hook up our input: a variable resistor. In this case we’re using a potentiometer. As with all variable resistors, we need three connections: power, ground, and the input pin for the micro-controller. Here’s the potentiometer set up on the breadboard.

Add the potentiometer

Now that we have our circuits put together, we need to connect them to the micro-controller. Remember that since each LED is lit separately, it needs its own pin on the board.

Connect it all to the Arduino

Now all that’s left is to do our code. It’s pretty simple: the Arduino can sense 1024 possibilities; round that off to 1000 for ease of use. There are 8 LEDs, which gives each one an equal interval of 125 values. Simply divide them up and have a series of non-interfering IF statements.

void loop()
{
  int inputValue = analogRead(5);

  if(inputValue > 125) digitalWrite(9,HIGH);
  else digitalWrite(9,LOW);
  if(inputValue > 250) digitalWrite(8,HIGH);
  else digitalWrite(8,LOW);
  if(inputValue > 375) digitalWrite(7,HIGH);
  else digitalWrite(7,LOW);
  if(inputValue > 500) digitalWrite(6,HIGH);
  else digitalWrite(6,LOW);
  if(inputValue > 625) digitalWrite(5,HIGH);
  else digitalWrite(5,LOW);
  if(inputValue > 750) digitalWrite(4,HIGH);
  else digitalWrite(4,LOW);
  if(inputValue > 875) digitalWrite(3,HIGH);
  else digitalWrite(3,LOW);
  if(inputValue > 1000) digitalWrite(2,HIGH);
  else digitalWrite(2,LOW);
}

Observant people will note that my code has an unfortunate little error: while my circuit theoretically measures resistance, lighting more LEDs as resistance goes up, what this code actually does is light more LEDs as resistance drops. This is easily fixed by swapping all greater-than symbols for less-than symbols in the IF statements.

Let’s watch this baby in action.

The key thing to note here is that as more and more LEDs are lit, they all get dimmer. This little problem had me banging my head against the wall. At first I thought it must be a simple power-drain problem, with not enough current to light all the LEDs. Except this thing is running on a 500mA power supply, and there’s no way these things need 100mA a piece. The problem is actually one of the interaction between parallel and serial circuits, which I’ll talk about more in the next post.

Thomas