Author Archives: ayman

Huffington Post: I know you watch TV News and I know you like it

I don’t actually read the Huffington Post all too often. Usually, I follow it and other reporting agencies like it through social network streams like Facebook or Twitter. If you know me already, you know I have issues with the use of the term social there, but it’s time we paid it some due attention. Social streams let us know what our friends want to share (and converse with us…yes Naaman, I still say social media is about conversation). Additionally, we can follow trusted sources, celebrities, or agencies we want specific news from. I personally enjoy this filtration, which accounts for probably 40% of my news consumption; the rest comes from TV and general news (paper or web) reading/viewing. TV and videos themselves are highly social activities—whether we watch them together or just ask our friends if they saw it later. This is what led me to create Zync for Yahoo! Messenger and what leads others to create similar technologies.

This weekend, CNET Senior Editor Natali Del Conte, who I follow on Twitter, posted a link to the following “fair/unfair” story about broadcast news from the Huffington Post. This caught my attention, so I followed the link to the article.

The article starts in a rather pointed tone:

American television news is returning to its roots as an information wasteland. Pretty faces with largely empty heads read teleprompters and mug for the camera. A dollop of information surrounded by a thick sugar coating of Kewpie doll. The major difference between the evening news and Jeopardy is that Alex Trebek is probably better informed.

Which leads one to think this is an op-ed…remember when those were on the last page of the paper? I’ll let this point slide for now. The author continues:

Television is still the dominant source of news for most Americans.

Immediately you can tell the author (Brian Ross) is upset. According to some recent studies he points to, half of us get our news from the TV first. If we find something of interest, 29% of us will hit the Internet to learn more, but 48% of us will actually watch more TV for follow-up reports. My guess is that if you want to follow up a month later, you’d likely hit the web. In the chance you want late-breaking news online, you’ll hit Yahoo! News way before you check the HuffPo.

All this reads well and fine to some extent, but actually, he is upset that many great reporters cannot survive on TV. At the same time, he cites TV news as a degrading trend of pandering to the ignorant TV audience “rather than trying to lure back the hard-core news junkies”. There’s an interesting sleight of hand in this argument (I think, more formally, cum hoc ergo propter hoc).

He describes TV news agencies as “in a live feed where news is breaking, they buck-and-wing while research staffs scramble to Google up information to make them look a little less piteous….as Jon Stewart so aptly point out in his recent rip of CNN, that they don’t even bother doing any fact-checking”

Which he then illustrates using a Daily Show clip. Yes, a Daily Show clip. In his rather long argument about print (or web, rather) being collected, thought-out, and real, he embeds an 11-minute-and-33-second video from a comedy TV program to support his argument. Way to go. I’ll let you read the whole rant, which is worth the look. It seems his account just falls apart despite a nice collection of sources (Natali’s point is correct, it is a mash of the fair and the unfair…I’ll just point out that the threads are orthogonal at best).

This article did lead me to think about TV and its social nature. I wonder of the half of us who watch it for news…why? Do we actually want fodder for hard-core news junkies? Or do we want the mix and balance we get? More so, are we watching this news with other people? Do we ask our friends “did you hear what happened in Gaza?” as equally as “did you see Bon Jovi on the Today show?” The Internet or even print won’t magically become a primary source without a real social presence (and I don’t mean add a ‘tweet this’ button to your article either). But maybe there is a more effective way for the HuffPo to increase that 29% follow-up, if only there were a socially viable method.

Maybe people reading this blog don’t watch TV, or maybe you know what just happened in the Mideast and you saw Bon Jovi on Today. What do you do when you want to follow up on a TV news story?

Google Wave: one δ transition at a time

Like Naaman, I was excited to hear about Google Wave. I signed up for the Sandbox access to hack on it. I signed up for ‘Wave Preview’ to see a more stable version. Finally, once things were ironed out, I decided to start building widgets.

Having worked on synchronous web interactions for some time, I was happy to find the overall API to be pretty clean. The overall idea is simple: when you make a gadget that does something, have it submit that event as a change in state (they call it a delta). Quite simply, say you click a button and that button increments a shared counter. The OnClick handler for that button should call something like:

wave.getState().submitDelta({'count': value + 1});

Then you implement a function, let’s call it onStateChange(), which will check the value of count and set the counter accordingly. Each delta makes a playback step, which, in the API’s own words:

Work(s) harmoniously with the Wave playback mechanism

So, if somebody wants to play back the wave, they start at time 0 and drag the slider to time 20. The onStateChange handler will fire, and the counter will be set to whatever the value was at that point. Something like:

div.innerHTML = "The count is " + wave.getState().get('count');
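Putting those two pieces together, here is a minimal counter-gadget sketch. Only submitDelta, getState, and setStateCallback mirror the actual Wave Gadget API; the `wave` stub at the top is my own stand-in for the real container (which merges deltas and notifies gadgets) so the logic can be poked at outside of Wave.

```javascript
// Stub of the Wave container, for illustration only: it merges submitted
// deltas into the shared state and fires the registered state callbacks.
const wave = (function () {
  let state = {};
  const callbacks = [];
  return {
    getState: () => ({
      get: (key) => state[key],
      submitDelta: (delta) => {
        state = Object.assign({}, state, delta); // container merges the delta
        callbacks.forEach((cb) => cb());         // ...then notifies gadgets
      },
    }),
    setStateCallback: (cb) => callbacks.push(cb),
  };
})();

// The gadget side: render whatever the shared state says the count is.
function onStateChange() {
  console.log('The count is ' + (wave.getState().get('count') || 0));
}
wave.setStateCallback(onStateChange);

// The button's OnClick handler: submit the increment as a delta rather
// than updating the display directly; playback replays these deltas.
function onClick() {
  const value = wave.getState().get('count') || 0;
  wave.getState().submitDelta({ count: value + 1 });
}

onClick(); // logs "The count is 1"
onClick(); // logs "The count is 2"
```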

Pretty neat, right? Well, not exactly. This works for a simple example. However, if your gadget does something more complex (such as loading and unloading Flash objects), this will cause you some trouble if you aren’t careful. Let’s take this example:

  1. I start a wave and add my gadget
  2. The gadget loads some flash
  3. I interact with the flash object
  4. The gadget loads a new piece of flash (overwriting the previous)
  5. I interact with the new flash object

If I play back this wave and jump from step 1 to step 3, I have to perform step 2 and then step 3. Somewhat similarly, if I jump from step 1 to step 5, I have to perform step 4 and then step 5. This is because if we just jump to step 5, there is no Flash object loaded to interact with; the wave will be in an undefined state (and will make the JavaScript from step 5 quite unhappy as it references a null object).

The solution here is to make sure your wave.getState() object has all the information it needs to optimally reconstruct any arbitrary state. So, from our past example I’ll list the state as {key:value, ...} pairs:

  1. {} I start a wave and add my gadget
  2. {load: object1} The gadget loads some flash
  3. {load: object1, action: action1} I interact with the flash object
  4. {load: object2, action: null} The gadget loads a new piece of flash (overwriting the previous)
  5. {load: object2, action: action2} I interact with the new flash object

Each step now clearly contains everything it needs to rebuild the world, without running through all of history again. Also notice that step 4 clears out any action that is not applicable to the newly loaded object. This will add some considerable code to your onStateChange() function (especially since Flash loads asynchronously, you’ll have to wait for a series of callbacks to properly restore the state), but then you’ll get harmonious playback.
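Here is a sketch of that state-change handler for the example above. loadFlash() and applyAction() are hypothetical stand-ins: the real gadget would have to wait on Flash’s asynchronous ready callbacks before applying the action.

```javascript
// Hypothetical Flash plumbing, simplified to synchronous calls here; a
// trace log stands in for the actual on-screen effects.
let currentObject = null; // which flash object is on screen, if any
const log = [];           // trace of what we did, for illustration

function loadFlash(objectId) {
  currentObject = objectId;
  log.push('load:' + objectId);
}

function applyAction(actionId) {
  log.push('action:' + actionId + '@' + currentObject);
}

// Rebuild the world from the full {load, action} state, never from the
// previous step -- playback may jump here from any point in history.
function onStateChange(state) {
  const load = state.load || null;
  const action = state.action || null;
  if (load !== currentObject) {
    loadFlash(load);     // step 4's overwrite happens here
  }
  if (action !== null) {
    applyAction(action); // only ever applied against the current object
  }
}

// Jumping straight from step 1 to step 5 now works, because step 5's
// state carries both keys:
onStateChange({ load: 'object2', action: 'action2' });
```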

If you want to do something fancy like maintain a stack or a more Turing-complete series of tapes, you’ll have to talk to @hackerjack60 if you can.

Still watching the TV?

Really, does anyone actually care to watch the TV anymore? The latest influx of TV becoming social is bringing a variety of apps and funky visualizations. Case in point: the MTV VMAs. Why attend and watch when you can tweet? Presumably in the breaks, when we were fetching a drink or trying not to spill the sour cream off a crisp. Couple this with iJustine and a soda-pop-like viz of people, and well:



This lets people (including iJustine, the hostess) pick out people floating up and see their terms live. I would have liked to see this filtered using Eigenvector Centrality; one could easily find the salient people in the conversation.

But, if you are like Naaman, you are probably either talking about yourself or want to hear what people are broadcasting. Got an app for that? Well, Frog kinda does; it’s called tvChatter (coming soon):

You don’t have to configure your favorite tweet app with filters for # tags. Just find the show and follow the tweets, or tweet away! As people chatting about TV media becomes more and more real time, it actually shapes and changes what we know about people using Twitter (remember when it was a social microblogging platform, like, what, two years ago?).

So is this all becoming that all-too-intrusive computer interface for ‘learning about artwork’ that they give you on a handheld when you probably should just be looking at the painting? Or is everyone happy typing while watching? Naaman? You get a TV yet with actual channels?

The new social face of multimedia tagging.

I’ve never been too concerned with definitions—early in my graduate career I realized they were more often used for turf wars. Just as George Carlin fought to get a definition of what he could or couldn’t say, he showed us a description can be way more powerful. Lately, I’ve been describing quite a few things around people tweeting while watching TV or when at a concert. Currently, there are several great studies characterizing Twitter users. Less concerned with this, I was wondering, “if everyone watching the Super Bowl tweets what they think about what’s happening, what does that say about the sporting event itself?” (from a classic Multimedia perspective).

Using a sample of tweets captured from the first presidential debate, I began to investigate whether, conversationally, people behave the same way as they do when they watch TV. It turns out they do; my colleagues (Lyndon and Elizabeth) and I were able to topically segment the first presidential debate and identify the main people in the video, all by looking at a collection of tweets captured from the 90 minutes of the debate.

There are many gritty details (including the usage of Newton’s Method and Eigenvector Centrality) in the full paper, to be presented at ACM MM’s first workshop on Social Media. Aside from methodology, we are suggesting there is more to media annotation than explicit tags on Facebook or YouTube. In fact, if Naaman tweets “I miss Paula #idol” while watching American Idol, he is leaving a comment/annotation on the media…despite there being no proper URI where Idol exists (yet!).
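The paper has the real methodology, but the Eigenvector Centrality piece can be sketched with plain power iteration: repeatedly multiply a score vector by the graph’s adjacency matrix and renormalize until it settles on the dominant eigenvector. The graph below is a made-up toy, not our debate data.

```javascript
// Eigenvector Centrality via power iteration: after enough multiplications
// by the adjacency matrix, the renormalized score vector converges to the
// dominant eigenvector, so well-connected nodes score highest.
function eigenvectorCentrality(adj, iterations = 100) {
  const n = adj.length;
  let scores = new Array(n).fill(1 / n);
  for (let it = 0; it < iterations; it++) {
    const next = new Array(n).fill(0);
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        next[i] += adj[i][j] * scores[j]; // sum my neighbors' scores
      }
    }
    const norm = Math.sqrt(next.reduce((s, x) => s + x * x, 0)) || 1;
    scores = next.map((x) => x / norm);   // renormalize each round
  }
  return scores;
}

// Toy mention graph: nodes 0, 1, 2 form a triangle; node 3 only talks to 2.
const adj = [
  [0, 1, 1, 0],
  [1, 0, 1, 0],
  [1, 1, 0, 1],
  [0, 0, 1, 0],
];
const scores = eigenvectorCentrality(adj);
```

On this toy graph, node 2 (the best connected) ends up with the top score; that is the sort of ‘salient person’ filtering I had in mind for the conversation streams above.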

Recently, I was invited to speak at Stanford’s MediaX workshop on Metrics. At first, I was curious why I was there; I don’t think of metrics in my day-to-day life. I think about people and experience and stick figure drawings depicting the negotiation of meaning.

However, if we think about social behaviors and media (and how they relate to uncaptured events in the world), the methodological research becomes an exercise in metrics. What is happening? Is there a singular source event (or a plurality of events)? What do we measure? What does it mean to the source event?…I could go on. But you, gentle readers, can just read the paper, say hi at ACM MM in a few weeks, or wait till I post more details about the work.

Understanding the Creative Conversation

In October, I’m running a workshop (with Dan and Kurt) at Creativity and Cognition 2009. This workshop builds on the one I ran with Ryan two years ago at C&C07. In 2007, we had a great collection of artists, dancers, musicians, educators, iSchoolers, and CS folk. This year, I hope we can further strengthen our focus:

This workshop is aimed at describing the elusive creative process: addressing models of creative practice, from art to craft, from dance to education. In particular, we wish to discuss creative models that are conversational: ones that connect the creator and the consumer via the creative act or artifact. We invite researchers and practitioners from any domain to enter into the conversation about the design and process of the creative act.

Do check out the call for participation, and we hope to see you in Berkeley.

PS: I’ve located Naaman and have begun a creative conversation with him that I’m sure you’ll soon read about.

The Rediscovery of the Web

Oh, you’re on Twitter now? Really, this has gotten insane. Every TV show. Every news source. Every post. Over the last 6 months, while we were following everyone on Twitter from the NSF’s announcements to JPeterman’s prose, we’ve seen an explosion in something called the ‘real-time web’. With it, people are beginning to discuss good questions, like how systems such as Twitter help people organize protests, or what we can learn about H1N1 by following where people mention it in under 140 characters. If you detect any sarcasm here, there is a little. Two things to remember. First – I’ve heard this before, except ‘twitter’ was replaced with ‘sms’ and ‘friends’ were replaced with ‘address book contacts’. (Think back to the protests in France several years ago.) In fact, much of the work from CSCW that we’ve seen over the past 10 years shows everything from design constraints to social concerns. Second – this is about as ‘real-time’ as a buddy added on MySpace is actually a ‘friend’; more on real social interactions vs. adding buddies later.

As Naaman knows, I have been working on Zync, a real-time synchronous sharing system, for a few years now. Google Wave seems to be pushing on this quite a bit as well. Before that, YouTube made a ‘pretty pointless’ attempt. Wave and Zync share a similar beat: the act of sharing is a first-class design consideration. This is to say we start with the point ‘I’d like to share this with Naaman now’ (for example)…which really says ‘I want to spend some time with Naaman’. Otherwise, I’d just email the video or the map or whatever. I wonder if our nouns and verbs are evolving to match the pivot to real-time?

Spin me up, Spin me down.

As Naaman turns in his grades and contemplates pushing the limits of the web’s asynchronicity, I’ve been rather quiet. Mostly I was distracted. You see, I was doing two things: studying how people stream their performances online with in-browser webcasting tools and launching (now the third version of) an instant messaging video sharing tool. More on the latter later.

Almost a year ago, I was wondering why many performers were choosing to webcast themselves. Why not get a paying gig? Or invite some friends over for an open mic night? At that time, two colleagues and friends of mine (Nikhil Bobb and Matt Fukuda) were working on Y!Live (RIP). Live had several DJs who would regularly broadcast sets of house, hip hop, reggae – you name it. After some preliminary data studies and several MySpace emails, Elizabeth and I conducted a round of field interviews via phone calls and meetups, and got lost several times in South Oakland.

DJ Doolow

While the details of this study have many implications for communities online, performances, and webcasting, you can read all that in the 2009 Communities and Technologies paper yourself (or catch me live at the conference). Or read Elizabeth’s complementary account. I would like to talk about ecology for a moment. Turn to slide 43.

All of the DJs we talked to mentioned this club ecosystem:

  1. They get people on the dance floor. Once the floor is filled, they stick with a genre to keep people dancing.
  2. People get thirsty and head to the bar, so the DJs try new tracks and styles to get a second wave on the floor.
  3. Wave 1 goes to the bathroom, Wave 2 goes to the bar. They now search for Wave 3.
  4. Repeat with Wave 1.

But what do they do in a web browser?

In Y!Live, slide 45, you find the overall view count (embeds + chat room). Every DJ we spoke to pointed right there and said ‘that’s my dance floor’. Once that count is high, (next slide, please) they turn to the chat to see the volume of conversation and maybe the topic. And finally, (next slide) they look at the viewers – checking for that head nod or hand tap. The DJs were quite realistic, knowing people are sitting on the sofa or at a desk.

DJs, who make their life around wiring and routing sound, had no problem using webcasting systems. I believe this is because it fits into their craft. Similarly, the scanning pattern and ecology of ‘gauging the club’ or ‘how am I doing?’ translated online from the DJ booth to the web browser. So, as we build tools for creative people, many people will tell you ‘don’t make tools that require individuals to deal with more things’. I’ll say something to the contrary: feel free to add cognitive load, as long as it fits into the practice of their expertise.

Talking Tomorrow and the Day After

In case you didn’t talk to me enough at CHI or didn’t get your fill during my brief cameo on the Make blog – I’m talking in Tech tomorrow during the CogSci brown bag at Northwestern (gcal). It has been three years since I walked away with my PhD from NU after a great commencement speech by Barack Obama. The day after, I’m at UIUC…also talking. I’m hoping Naaman will attend both talks. What are the talks about, you ask? Enjoy this title and abstract:

(we need to talk): conversations, media, and social relationships

Most Internet media sharing sites like Flickr and YouTube support asynchronous content sharing through commenting and forwarding capabilities. However, as social scientists, we have known for a long time that social relationships are fostered and deepened when people truly share experiences together–that is, when they experience and reflect on things at the same time. In this talk I will present two projects that address the real-time sharing of Internet media experiences, and how relationships are fostered, maintained, and deepened through these technologies.

The first project addresses the question: Given that DJs typically connect with the engagement of their audiences by monitoring the dance floor, how do DJs broadcasting online invite and stay connected with the engagement of remote audience members? Specifically I will present the results of a fieldwork case study of House and Hip Hop DJs who connect with other DJs, and with audience members by broadcasting their music sets online while chatting in real time using synchronous chat rooms. Our study revealed the ways in which DJs maintain awareness and gauge audience engagement, and how their audiences affect the performance of sets while broadcasting online.

In the second half of the talk, I will present the architecture and early results from Zync, a tool embedded in an Instant Messaging environment that allows synchronous sharing of video content coupled with real-time chat capabilities. Through analysis of people’s sharing practices, I will address the question: what keeps your friend from walking away from a chat window when you share a video from YouTube? I will also discuss some early results from the more than 10,000 people a day using Zync, and briefly discuss our plans for future data analysis as we address the nature of conversational media sharing.