Tuesday, July 22, 2008

Technology and Values


What motivates you to do your best work? For me, I want to be working on something I believe is helpful to the modern world we live in. Just listen to NPR for twenty minutes and you'll hear a great variety of social problems that can be addressed by clever technological designs - for instance, how can we encourage people to improve their gas mileage by changing their driving behavior? How can we motivate children to eat better and move their bodies more? How can we get people to start growing their own food, shopping for local produce, or lessening their carbon footprint? How can we encourage a feeling of community? Those are the kinds of problems (though not necessarily the exact ones) that would motivate me, and they are all questions that can be addressed by technological designs.


In a nutshell, I love using my creative and analytical skills to solve *very real* problems. And you need to talk to people and go out into the real world to find out what those problems are, and how to address them.

Here's a question for you. In The Human Factor, Kim Vicente says "technology is value neutral," and that it is how the technology gets used (i.e. the cultural factors) that determines whether it is used for good or for ill. How do you feel about this? If this were true, it could mean that those who strive to understand - and even attempt to design - appropriation, adoption, and interaction with technology have perhaps even more important work than those who build the technology itself. What do you think?

Monday, June 16, 2008

Revisions 2


2. Suggested Changes to Notebook Icons
a. Audio playback icon

We find the record, pause, and stop buttons so intuitive because of their consistency with other computer and real-world players. Therefore, we're stumped as to why there isn't a play button for paper-based control of audio records. While there are handy icons for scrolling within a record, it is not immediately clear what this icon is for.

Putting a “play” triangle somewhere on this image would greatly assist users in figuring out how to operate the pen using paper. Above we show our suggested change.

LiveScribe User Feedback

1. Trouble Using Menu System

a. Hard to Find 'Play Session’

Many of us experienced the same problem trying to play back audio or initiate 'Paper Replay' using NavPlus. According to the Smartpen User Manual, in order to play back a session, the following set of actions must be executed.

- Double tap center of NavPlus to reach the main menu.
- Tap the down arrow until Paper Replay is displayed.
- Tap the right arrow to select the Paper Replay application menu. The first item is Record Session.
- Tap on the down arrow until you reach Play Session.
- Tap the right arrow and use the down arrow to scroll through sessions in the pen's memory.

Three of us made the mistake (several times) of getting to the Paper Replay menu and then accidentally tapping the right arrow again while on the top item, which initiates recording. This created several "dummy files" (i.e. accidental recordings) that then had to be erased once the pen output was downloaded. Why is Record Session the first item in the Paper Replay menu? Wouldn't Record Session be better placed in another sub-menu? Paper Replay is conceptually about playback, not recording, so it is counter-intuitive to place Record Session in this sub-menu, especially as the first choice. It is also hard to know that there are any other options below Record Session in the Paper Replay sub-menu, because of the visual layout of the menus (only one option at a time can be shown on the pen's screen).

Our recommendation is to move Record Session from the Paper Replay sub-menu and instead make Record Session an option in the Main Menu before or after Paper Replay.

e.g.

Current Options in Paper Replay
>Record Session
>Play Session
>Delete Session

Suggested Revisions to Paper Replay
>Play Session
>Delete Session
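
To make the restructuring concrete, here is a minimal sketch of the two hierarchies as plain Python dictionaries (purely illustrative - the actual menu system lives in the pen's firmware, which we cannot see):

```python
# Illustrative only: not the LiveScribe firmware's real data structures.
current_menu = {
    "Main Menu": {
        "Paper Replay": ["Record Session", "Play Session", "Delete Session"],
    }
}

suggested_menu = {
    "Main Menu": {
        "Record Session": [],  # promoted to the Main Menu, next to Paper Replay
        "Paper Replay": ["Play Session", "Delete Session"],
    }
}
```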

b. Recording Mistakes Easy to Make

In addition, it is too easy to begin recording when one doesn't mean to. It would be useful to have a confirmation step between selecting Record Session and the actual command, much like the one that currently exists for Delete Session. For example, an intermediate menu item that reads:

Record Session? >

Selecting the right arrow then confirms that recording should begin. This could prevent recording "dummy sessions" by giving users another layer of insulation from error.
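
A tiny sketch of the proposed two-step flow (hypothetical names and behavior; the real firmware is LiveScribe's to define):

```python
class RecordMenu:
    """Sketch of the suggested confirm-before-record flow (hypothetical)."""

    def __init__(self):
        self.armed = False  # becomes True after the first right-arrow tap

    def tap_right_arrow(self):
        if not self.armed:
            self.armed = True
            return "Display: Record Session? >"  # intermediate prompt
        self.armed = False
        return "Recording started"  # only the second tap actually records
```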

Friday, May 30, 2008

Latest Ethogram


An image of the latest ethogram.

What the field researchers want...

From Whitney, a great summary of the work we've done so far on the LiveScribe pen project.

"I think we've come up with some really great design ideas, and pinpointed opportunistic places for users to have control of the data/the ability to code. Examples that stand out to me are: The buttons are a fantastic solution to time stamping a behavior, and I really like being able to make more "record" buttons for voice notes. Having access to data points, as well as being able to code per region allows for flexible use of the space (er, not to mention the making of the buttons)."

Whitney's take on more LiveScribe field research ideas from the Johnson research group and others:

"As far as other projects/interest: I know Chris is itching to find a way to integrate the pens on the bonobo project, and the bioacoustician I've been working with at SeaWorld thinks they might be the holy grail of field research - she took down the name and model of the pen, and I've promised to keep her posted on the pen's integration.

Friday, May 23, 2008

.:The Arduino LilyPad and Wearable Electronics:.

A concise review of the microcontroller for wearable electronics and light-up clothing. Gives a balanced view of the benefits and design issues of using this system to build your own costumes with embedded microcontrollers.


Wednesday, May 21, 2008

Wonder how we change our behavior?

One really interesting question is how we adapt our behavior knowing what the LiveScribe pens are capable of. Whitney said,

"I've gotten used to the pen, and I'm not taking as many notes. Yeah, and it was funny 'cause I was like, oh recording, you know, that's like cheap, right? Recording's cheap and you can just do it and upload it to the computer, but the paper is finite ((laughs))"

Another Productive Brainstorming Session

Today was another productive Wednesday meeting for the Beluga group. Although Prof. Johnson could not be in attendance, I feel like we made a lot of progress on the ethogram design. I recorded our meeting, and found the Paper Replay function to be very useful for writing up summary notes.

Whitney is very excited about the possibility of using pictures, icons, and symbols to separate the ethograms. She said, "I think it's so cool. I think what we first saw with the pens was like, 'We can do things in pictures now! This can make things more...human...rather than us having to do things in a computer way.'"

She brought a prototype with blocks of space meant to be cut out below a picture. The whole thing can be printed on plain paper and laid over Anoto paper, so the Anoto paper can be marked through the cutouts and the strokes timestamped. I suggested using vellum that can be purchased for ordinary laser printers.

I feel pretty enthusiastic about the surface, diving, orientation and proximity categories we sketched out. We're still a bit unsure about the event sheet and how best to build an ethogram.

Some of the remaining challenges - should states vs. events be on separate sheets, or separated by some other feature (health behaviors like breathing and nursing, versus synchrony data)? Are we staying 'true' to D-COG - e.g. should we care only about the dyad and not individual behaviors?

"I don't think the baby can really spyhop. That is something that the adult can do but not the baby, it's not something the calf is physically able to do."

Friday, May 16, 2008

Brainstorming with Jim

During our meeting yesterday, Jim and I talked about the power of the LiveScribe pen for collecting ideas, and helping re-establish context after time goes by. We're interested in looking more deeply at how the pen can support some of the creative/brainstorming processes that are an essential part of doing science. I mentioned how the class of activity I've witnessed at the Beluga group's last set of meetings is common to many, if not all, scientists who do observational research. You have to figure out what you are going to look for, what you can see, how you can record data - and feed these constraints into the design of a data sheet.

Wednesday, May 14, 2008

Meeting Notes, 5/14


Today's meeting felt quite fruitful. We started out with an update from me - I outlined for the group what kind of data we should be able to access in the future, once the SDK becomes available. I also told the group how the pen timestamps strokes. Chris asked how fine-grained the timestamp is.

Whitney then reported that she was still very unsatisfied with the current state of both ethograms. She also thinks they are not fully taking advantage of what the pens can offer and record. She said she had gone back and reflected deeply on the kinds of questions the project is trying to answer.

Chris led the group through a very careful outline of all the data that needs to get recorded. We started out with the All Occurrences sheet. What behaviors need to be captured?

- Breathing - rate is important for SeaWorld to know the animals are healthy
- Nursing - again, indicates the health of the infant
- Floating - too much could indicate poor health
- Bubbles - indicate breathing, and can pinpoint who vocalized if unknown

Chris pointed out that both events and states are recorded here. Separating them can be useful.

Possible States:
- Floating
- Static underwater
- Swimming
- Spyhopping

Possible Events:
- Breathing
- Nursing
- Surfacing (B only, M only, B-->M, M-->B, Synch.)
- Diving (B only, M only, B-->M, M-->B, Synch.)

Chris also noted that it is important to know what state an animal is in when it performs an event. The timestamped pen data can help delineate this. There may also be a work-around - marking a slash ("/") - to point out state changes during a time block.
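
A minimal sketch of how that delineation could work downstream, once the strokes are off the pen (the data shapes and names here are our assumptions, not the pen's actual output):

```python
from bisect import bisect_right

# (timestamp_seconds, state) pairs, one per state-change stroke, sorted
# by time; a mid-block "/" mark would simply add another entry.
state_changes = [(0.0, "floating"), (12.5, "swimming"), (40.2, "static underwater")]

def state_at(t):
    """Return the state in effect at time t."""
    times = [ts for ts, _ in state_changes]
    i = bisect_right(times, t) - 1
    return state_changes[max(i, 0)][1]

# Timestamped events can then be tagged with the state they occurred in.
for t, event in [(15.1, "breathing"), (41.0, "nursing")]:
    print(f"{event} at {t}s occurred while {state_at(t)}")
```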

_________________

Interaction/Relative Dynamics Sheet

We had a discussion about the importance of recording proximity information. Whitney had outlined a proximity scale, and we discussed/refined definitions.

0 - touching/in contact
1 - slipstream (a baby's width or less apart)
3 - proximal (between a baby's and an adult's width apart)
5 - other

We also talked about the importance of relative orientations. Chris sketched out several possibilities (shown on attached sheet). We brainstormed about putting pictures of postural configurations along the top of the sheet. In the current version of the sheet, we can assign values to the orientations. In the future, we can assign Anoto address space to these icons, and touching the pen in the box will trigger a recording event.
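
As a sketch of the "icons as buttons" idea (all coordinates and icon names here are invented for illustration), touching the pen inside an icon's box would simply look up which region was hit:

```python
# Illustration only: names and page coordinates are made up.
ICON_REGIONS = {
    # name: (x_min, y_min, x_max, y_max) in page coordinates
    "parallel":      (10, 0, 30, 10),
    "antiparallel":  (35, 0, 55, 10),
    "perpendicular": (60, 0, 80, 10),
}

def icon_at(x, y):
    """Return the orientation icon containing a pen-down point, if any."""
    for name, (x0, y0, x1, y1) in ICON_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(icon_at(42, 5))  # -> "antiparallel"
```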

The quality of the contact between mother and baby is important too. We discussed which kinds of touches are important - all, or just a subset? Is fluke-to-fluke contact important to know, or is a touch in general enough? We decided that the only important touches were touches at the mammary area of the mother, and rostral touches. Rostral touches are touches where the echolocation organ is facing the target - these might indicate that some kind of spatial mapping development is going on. Therefore, the touch categories are:

On mother:
- Rostral
- Mammary
- Body

On baby:
- Rostral
- Body

Play behavior is also important to record, as it may indicate imitation activities.

The researchers also pointed out an additional feature of using the LiveScribe pens to pilot this study - they can record meta-level observations of data collection while simultaneously taking data, i.e. "We really should add another column for xyz..."

____________________________

Remaining questions:

How fine-grained is the timestamp?
What is the human error between two scorers?
Can we mock up something for next week, using transparency sheets, and??

Tuesday, May 13, 2008

LiveScribe Data Update


A big question we've been asking is what kind of access we'll have to pen data, and what kind of data the pen is recording. Jim recently got (some) access to information about LiveScribe's SDK, so we have more of an idea of what we're dealing with.

First of all, what the data looks like. In other Anoto pens, the pen records and timestamps all address information during a stroke, kind of like tiny video frames of what the camera is seeing during a stroke. Data is recorded about 72 times per second. In the LiveScribe pen, data is timestamped only at the onset and offset of a stroke.
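
Roughly, the two data shapes look like this (field names are our guesses for illustration - we haven't seen the actual file format):

```python
# Other Anoto pens: a timestamped sample for every camera frame.
full_trace_stroke = [
    {"t": 0.000, "x": 101.2, "y": 55.0},
    {"t": 0.014, "x": 101.9, "y": 55.3},
    # ... one sample per frame for the whole stroke
]

# LiveScribe: only the endpoints of a stroke carry timestamps.
livescribe_stroke = {
    "t_pen_down": 0.000,
    "t_pen_up":   0.850,
    "points": [(101.2, 55.0), (101.9, 55.3)],  # path without per-point times
}
```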

What this means is that we wouldn't be able to get real-time data if we were recording whale paths through the tank as one continuous line. We would need to lift the pen often to get stroke information timestamped. This should feed into the design we choose. The good news is that Whitney's design using specialized symbols induces several strokes by design, so we should get good data from this kind of ethogram.

Another question we've had is about encoding special regions of the paper to become icons or buttons to register certain behavior events. Jim said that this is what the SDK should allow us to customize. The bad news is that we don't yet have access to the full SDK. But prototyping or mocking a few things up to try out in the field should give us a good idea of what we'd want eventually.
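
In the meantime, the eventual workflow is easy to mock (everything below is our own sketch, not the SDK): touching a behavior "button" appends a timestamped row to a file that opens directly as a spreadsheet.

```python
import csv
import time

def log_behavior(writer, behavior):
    """Append one (clock time, behavior) data point."""
    writer.writerow([time.strftime("%H:%M:%S"), behavior])

# Each touch of a behavior "button" becomes a timestamped spreadsheet row.
with open("ethogram_events.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["time", "behavior"])
    log_behavior(w, "breathing")  # as if the pen touched the Breathing icon
```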

Thursday, May 8, 2008

3 Laws of D-COG Analyses

From Chris Johnson's 4/30/08 lab meeting:

1. Interaction as a unit of analysis
2. Consider multiple time scales
3. Attend to configural change

LiveScribe Beluga Project


Yesterday's meeting was quite productive. We spoke at length about ethogram designs that were scientifically sound and also technologically feasible. Two that really stood out were tracing/coding behaviors on a top-view version of the tank, and creating a document with pictograms of behaviors. Touching the pen to a pictogram would enter a data point into a spreadsheet at that time code.

Yesterday Jim said he had gotten some documentation from LiveScribe on the SDK. Hopefully in the next few days we will be able to have a better idea about our ability to support those designs.

In other news, apparently the beluga whale Ruby has "dropped", meaning her baby might be due sooner than expected!

Thursday, May 1, 2008

TOOLS FOR ETHNOGRAPHERS

As with any science, doing ethnography involves creating cascades of representations. Below are general cascade levels many folks have expressed a need for.

First-pass tool

Many of us do a general "first-pass" through the data and create an annotated Table of Contents describing events within a video. It would be nice to have a program that would a) allow for rapid creation of "chapter headings" that could be integrated with the video record, and b) feed easily into other levels of analysis.

What should the form of the Table of Contents look like?
Timeline? Spreadsheet?
Perhaps integrate the LiveScribe pen?
Multitouch table or stylus?
Directly on the video (e.g. dots on the slider)?
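
Whatever surface form we choose, the underlying first-pass representation could be as simple as this sketch (the shape and the entries are our own invention):

```python
# Sketch: a flat list of (timecode, heading) entries as the first-pass
# Table of Contents. Entries here are placeholders.
toc = [
    ("00:02:10", "Group settles on task"),
    ("00:07:45", "First disagreement about the data sheet"),
    ("00:21:30", "Whiteboard sketching begins"),
]

def print_toc(entries):
    """Print the annotated Table of Contents."""
    for timecode, heading in entries:
        print(f"{timecode}  {heading}")

print_toc(toc)
```

Later levels (coding, event tables) could then build on these entries rather than starting from scratch.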

Coding tool
As one develops increased familiarity with the content, categories of activity emerge. A video can then be coded for categories.

What should the form of the coded data be?
Spreadsheet?
Timelines (multiple)?
Integrated with the LiveScribe pen?
Superimposes coded categories on the video?

Event table tool
Timeline of events in a smaller video segment (Chapter)
What should it look like? Many of us currently use Excel. What is the advantage/disadvantage?
Should we incorporate transcripts of other kinds (line drawings, cartoons, etc.)?

Monday, April 21, 2008

Reestablishing Context of Activity

One thought: this might be part of LiveScribe's power - it easily preserves and presents a visual and audio record of activity.

Two Experiments to look at this question

1. Film 102C groups as they work on projects. Provide them with various kinds of representations or not - perhaps one group with an audio record only, one with audio and digital images, one with video and audio. Which groups are faster to get back on task? Or which representations help a group get back on task more quickly? (This is a possible design only if there will be multiple weeks with similar activities taking place, e.g. writing.)

2. Film vs. photos + podcast of a 102C lecture.
Which students are better at recalling details of the lecture, or which kinds of representations help the same students better recall details of the lecture - e.g. what we went over last week?

Jim: Very natural to be describing it at a somewhat abstract level while watching my screen capture. Hal Pashler - does he know the literature about re-evoking context or aiding memory?

Edit snippets from the videos, and subjects have to say "me" / "not me".

Make up a web task - can you tell "me" from "not me"?
Don't finish up
Next week show a speeded up version of what they did
Reload context - it isn't in the stuff itself but in the inferences
Making recommendations about printers
Quantify interruption cost
Group 1 - does entire task
Group 2 - gets interrupted
Group 3 -

Talk to me about what you were doing here - someone else's video vs. your own


Ed's Cascade of Representations

Rough table of contents - time codes, what's going on

"Event Table" - Spreadsheet with more events broken down - columns for Event/Timecode/Speaker/Speech/Gesture/Framegrab

How are decisions made about a publication?
How are final transcripts created?

Superimposing transcript over a map - works for a route

Tuesday, April 15, 2008

Cartoon Creator Application

A cartoon creator application would be a nice way to explore the tasks of summarizing video content: navigating and annotating/transcribing videos, selecting specific "interesting" portions, composing a set of frames, and fine-tuning a sequence and framing for a comic strip. This project will allow us to explore a variety of interaction techniques with a level of directness that is not possible in conventional interfaces.

----- Styli

Any stylus will work. Ordinary burnishers from art stores offer a nice range of rubbing areas depending on how they are articulated, from very fine and delicate rubs to wide and bold rubs. Many varieties of custom "brushes", with or without embedded LEDs, are very simple to construct.

Another idea would be to have the line thickness controlled with the non-dominant hand by moving along a slider (as in Photoshop). This brings up the interesting question of the feeling of directness - is it better to have the line thickness controlled directly (WYSIWYG) or via a slider (which involves a level of abstraction).

A particular stylus that I have been thinking about uses three IR LEDs for positional tracking when not in contact with the surface. By positional, I mean all six degrees of freedom {x, y, z, pitch, yaw, roll}.
I'd be interested to understand more. How would a stylus that does not need to be in contact with the surface be of use for making cartoons? Any possible functions?

----- Framing tool

A frame, a rectangle or other arbitrary shape, is scaled and rotated as desired by dragging it at two points. But rather than placing it over an image, an image is dragged into it. While the image is scaled, rotated, and moved with the frame, the portions extending beyond the frame are ghosted. When not manipulated, the portions beyond the frame become invisible.

It's the same old "using two fingers to move, scale, and rotate", except now two objects (frame and image) are interacting with each other. It should be very easy to implement.
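
The core math really is tiny - here is a minimal sketch of our own (using complex numbers; a real implementation would live in the app's touch-event handlers):

```python
# Minimal sketch of a two-point similarity transform. Points are complex
# numbers: multiplying by s = scale * e^{i*theta} rotates and scales at once.

def two_point_transform(p1, p2, q1, q2):
    """Return a function carrying p1 -> q1 and p2 -> q2 via uniform
    scale + rotation + translation (the 'two-finger' gesture)."""
    s = (q2 - q1) / (p2 - p1)  # complex ratio encodes scale and angle
    return lambda z: q1 + s * (z - p1)

# Grab points move from (0,0),(1,0) to (1,1),(1,3): scale 2, rotate 90 deg.
f = two_point_transform(0 + 0j, 1 + 0j, 1 + 1j, 1 + 3j)
print(f(0.5 + 0j))  # -> (1+2j), the transformed midpoint of the grabs
```

The two grab points fully determine the translation, uniform scale, and rotation, which is part of why the interaction feels so direct.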

A sequence of cartoon frames remains interactive, so an ethnographer can further tune them as a gestalt of the cartoon emerges.

A sequence of cartoon frames also becomes a means of navigating the source video. For example, pressing on two frames can play the portion of video spanning them. Alternating pressure between two cartoon frames can fast-forward or rewind, back and forth, across the span of video between the two frames.

YES. This is one of the most exciting and important aspects of the application. It would be wonderful to use for both analysis and for "exporting" some kind of file with embedded video links. (Yes, this can be built to some degree in Adobe software like Acrobat Pro, but it is darn tedious. It would be so nice if it were automatic, not requiring the user to do it over again.)
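
The bookkeeping for that navigation is minimal - a sketch under our own assumptions (each frame simply remembers the source timecode it was grabbed from):

```python
# Sketch (data shapes assumed by us): two pressed frames define a span
# of the source video to play back.
frames = [
    {"id": "A", "t": 12.0},  # seconds into the source video
    {"id": "B", "t": 47.5},
    {"id": "C", "t": 90.2},
]

def span_between(frame_a, frame_b):
    """Return the (start, end) of video spanned by two pressed frames."""
    t1, t2 = frame_a["t"], frame_b["t"]
    return (min(t1, t2), max(t1, t2))

print(span_between(frames[2], frames[0]))  # -> (12.0, 90.2)
```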

-----Bootstrap Applications

Could we use the Photoshop API for the line drawing maker? There are so many features in Photoshop that I would love to explore using multitouch. I am enthusiastic about the level of directness and the large canvas size that multitouch enables. Can Photoshop handle multitouch? How would it deal with such a thing? Can we modify it to do so? I can imagine that having the non-dominant hand control settings while the dominant hand draws would be quite useful. Also, think about how easy tasks like erasing will be with multitouch!

I'm also quite taken with iDive as a digital video storage application. It seems like it would be quite useful as a manager for the video files and easy selection of representative frames.

I think the QuickTime API can "talk to" other applications so it's possible