Fabrication of a Bike Navigation Module Prototype

This is a prototype enclosure for the Haptic Bike Navigation System I am presenting with my team at the ITP Winter Show. The angular design fits well inside a bike frame and suggests forward motion and directionality.

IMG_2539.JPG

Some features: an inset and interior support for a power switch, plus slots for straps to affix the enclosure to the bike. I'm especially pleased with the press-fit insets that hold ceramic disc magnets for the removable lid.

The circuit design is still in progress, so more openings for a battery charger and a Bluetooth module are in order.

Things I would do differently or take further: create contoured interior layers to constrain the components and circuit, as well as external contours to accommodate the main bike frame bar this enclosure will hang from.

This design is an improvement on a few previous prototypes and is based on/related to the LED circuit enclosure. 


Haptic Bike Navigation Project Update #2

Timeline for prototype completion: 

http://bit.ly/1sflFQi

Bill of Materials: 

http://bit.ly/10kJsXg

Our team (Sam Sadtler, Marc Abi-Samra, Catherine Rehwinkel) is currently designing the pseudocode and user interface aspects of our haptic bike navigation system.  We are considering a program that includes two basic navigation/routing options for the user. The first provides the cyclist with the most economical route once a destination is entered, and then uses a simple code of vibration feedback to signal the rider to turn.  The second, which we recently conceived of and are referring to as 'True Destination' (time allowing, paired in this iteration with a 'True North' functionality), gives the rider a range of degrees based on each turn decision the rider makes.  Once a destination is entered (a geolocation point, an address, or simply a cardinal direction), the rider can start the journey in any direction and receive a range-expressive signal that guides them to turn in the best direction.  This method differs from the first in that it doesn't prioritize route efficiency or any specific series of street turns; instead it prioritizes cyclist safety. The cyclist may turn whenever he or she feels safety is optimal and still arrive at the destination along a deliberate route.
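To make the 'True Destination' signal concrete, here is a rough, untested Arduino-style sketch of the idea: compute the bearing from the rider's position to the destination, compare it with the rider's current heading, and map how far off-course the rider is to vibration strength. The motor pin, the stubbed GPS/compass values, and the function names are placeholders rather than our actual circuit or code.

// Rough sketch of the "True Destination" signal: vibration strength grows with
// how far the rider's heading deviates from the bearing to the destination.
// Pin, coordinates, and the compass stub are placeholders.
#include <math.h>

const int MOTOR_PIN = 6;   // PWM pin driving a vibration motor (assumed)

// Placeholder inputs; in the real module these would come from GPS and a compass.
float riderLat = 40.7291, riderLon = -73.9937;
float destLat  = 40.7527, destLon  = -73.9772;
float readCompassHeading() { return 90.0; }   // stub: rider currently heading due east

// Great-circle bearing from point 1 to point 2, in degrees 0-360.
float bearingTo(float lat1, float lon1, float lat2, float lon2) {
  float phi1 = radians(lat1), phi2 = radians(lat2);
  float dLon = radians(lon2 - lon1);
  float y = sin(dLon) * cos(phi2);
  float x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon);
  return fmod(degrees(atan2(y, x)) + 360.0, 360.0);
}

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  float target = bearingTo(riderLat, riderLon, destLat, destLon);
  float error  = fabs(target - readCompassHeading());
  if (error > 180.0) error = 360.0 - error;   // fold into 0-180 degrees off-course

  // Weak buzz when roughly on course, strong buzz when pointed away, so the rider
  // can pick any safe turn that lowers the signal.
  analogWrite(MOTOR_PIN, map((int)error, 0, 180, 0, 255));
  delay(200);
}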

Below is a map which was generated using JavaScript and the Google Maps API. We are in the process of making a JavaScript web application which, in this scenario, will pull a user's GPS location and then produce a tone when it is time for them to turn. Ideally the tone will produce enough voltage to turn on a set of LEDs. If the headphone port does not provide enough power, we will send the tones to an Arduino Mini, which will have its own power supply and be able to control the electronic components.
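If the headphone signal does turn out to be too weak, the Arduino Mini fallback could be as simple as the rough sketch below: watch the audio line on an analog pin and light the turn LED whenever a tone is present. The pins, the threshold, and the assumption that the audio line is biased into the analog input's range are all placeholders at this stage.

// Rough sketch of the Arduino Mini fallback: treat any large swing on the audio
// input as "tone present" and switch the turn-signal LED on. Pins, threshold,
// and audio biasing are assumptions.
const int AUDIO_PIN = A0;   // headphone signal, assumed biased into the 0-5V range
const int LED_PIN   = 9;    // turn-indicator LED
const int THRESHOLD = 40;   // deviation from the idle reading that counts as a tone

int idle;   // baseline reading with no tone playing

void setup() {
  pinMode(LED_PIN, OUTPUT);
  idle = analogRead(AUDIO_PIN);   // sample the quiet level once at startup
}

void loop() {
  // Look for any sample that swings far enough from idle during a short window.
  bool tonePresent = false;
  unsigned long start = millis();
  while (millis() - start < 50) {
    int sample = analogRead(AUDIO_PIN);
    if (abs(sample - idle) > THRESHOLD) {
      tonePresent = true;
      break;
    }
  }
  digitalWrite(LED_PIN, tonePresent ? HIGH : LOW);
}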

And here is a sketch of our "True Destination" safety routing option. 


We continue to refine our user survey strategy.  We have been talking to CitiBike users at stations to gather feedback and insight about design priorities.  One major insight has been that CitiBike users seem to be either riders with a routine or else tourists and non-serious cyclists.  A pattern which has arisen is that casual cyclists do not seem to consider that there could be an alternative to the dangerously distracting audio-visual feedback of a smartphone navigation system.  We are adding bike shops to our user research because we have decided to prioritize safety and believe that serious recreational or professional cyclists may be our initial target users to design for.

Post by Catherine Rehwinkel & Sam Sadtler

Physical Computing Final Project Brainstorm

A few ideas: 

1) A capacitive net (or wall) which blankets a person and, via LEDs, communicates changes in body energy.  I'm interested in the idea of qi and other bioelectromagnetic fields produced by organic tissue.  It could be a way to trigger lighting in an occupied portion of a room or hallway, like a reading lamp or a candle, maybe incorporating some kind of EEG sensor or a theremin-like ambient sound element. Pros: seems like magic. Cons: possibly too ambitious and large-scale.

 

2) Stereotypical Cube: a skin-tone- and gender-guessing cube.  Muffled/diffused light and sound come in, hit a microphone and a simple camera, and are compared against societal averages.  Output accuracy is questionable due to extraneous variables; more research is needed.  The concept centers on playing with stereotypical generalizations and making users expose themselves to the vulnerability of being judged by a cube, a piece of "tech."

 

3) An experiential blackout booth (glass?) fitted with several hidden sensors of different types which give users information about what was collected about them while they were inside: how they moved, what they said, where their smart device was located on their body. This still-vague idea is aimed at raising awareness about surveillance.

4) Working on a GPS & true north haptic navigation system project with Sam Sadtler from another section. Prototyping and adding haptic feedback sensors and interface for bike rider.

5) Mind Map: combined with my ICM final, this is a dynamic quadrant-continuum map designed to connect ITP people with similar interests, projects, skills, and experience.  It was originally conceived as an ICM and/or Networked Media final, as well as a utilitarian gift to our class.  I am wondering about mapping a physical space to share input.

6) Sound Bed: I'd like to explore making bedding with embedded transducers so that people can be surrounded by bass as they sleep.  Could combine with brainstorm idea 1, the capacitive net or wall.

7) Half-life Hourglass. The idea is to explore the individual, human experience of the passage of time.  A timepiece tells the user, via exponentially incremented luminance and tone-shift outputs, when half of the remaining set time has passed, then again at half of that time, and so on, until the final second is so minutely divided that the pulse/change in tone and luminance becomes imperceptible and is perceived instead as a continuous state.  At that point the user is allowed to exit by manual interference with the piece (a rough sketch of the halving schedule follows below).  Interaction comes from a) the user's inability to stop the countdown, b) the user's ability to mix and match combinations of increasing or decreasing luminance with ascending or descending frequency, or c) a layered audio playback: recorded ambient noise begins at the first half-time (e.g. the 30-minute marker) and is then layered under the next half-time recording, so that in the end there is white noise or room harmonics.  Recommended to me by Arlene Ducao, Alvin Lucier's iterative sound work I AM SITTING IN A ROOM is a strong reference point for this last interactive element.  A big difference is that the user can decide what to layer into the recording and move the timepiece from place to place if they wish, changing the resulting harmonics.



More background research is needed on all. 
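For idea 7, though, the core timing logic is already clear: it is just repeated halving. A rough, untested sketch follows; the LED pin and demo countdown length are placeholders, and a real version would use non-blocking timing.

// Rough sketch of the half-life schedule from idea 7: pulse an output each time
// half of the remaining time elapses, until the intervals become imperceptible.
// Pin and countdown length are placeholders.
const int LED_PIN = 9;
unsigned long remaining = 60UL * 1000UL;   // e.g. a one-minute demo countdown, in ms

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  if (remaining < 2) {
    // Intervals are now too short to perceive: hold a continuous state until reset.
    digitalWrite(LED_PIN, HIGH);
    return;
  }

  unsigned long wait = remaining / 2;   // half of whatever time is left
  delay(wait);                          // (a real version would avoid blocking like this)
  remaining -= wait;

  // Brief pulse marking each halving point; luminance/tone shifts would go here.
  digitalWrite(LED_PIN, HIGH);
  delay(50);
  digitalWrite(LED_PIN, LOW);
}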

INTERACTIVE OCTOPUS :: Midterm Project Final Documentation

We set out to create an animated mini-animal avatar in Processing.  The concept was to encourage the user to experience a degree of transference to another type of being.  Initially we were considering a quadruped - like a polar bear.  But contemplating our hands we switched to an octopus because the look and feel of our fingers undulating in air was ripe for this kind of transformation. 

After doing a little background research, our project seems similar in its fundamental concept to Karolina Sobecka's work with animal facial expressions.

While initially considering the lerp() function in Processing, we stumbled upon Keith Peters's elegant segmented-arm example.  We spent a lot of time working to understand the sketch's trigonometry and structure so that we could manipulate it to suit our animation.

We created a wearable controller using a kitchen glove, gaffer's tape, two flex sensors, and an Adafruit Flora microcontroller.

We tested a few iterations of our Interactive Octopus and its physical interaction.

Here is a rough sketch of our Flora circuit schematic. Originally we tuned our sensor mapping for 5V power, but since we were forced to use 3.3V we had to adjust our values, especially since we were dealing with sin/cos/atan2 values in our oscillating sketch objects.

Here is our Interactive Octopus in action. 

Here is our Arduino code. 
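In essence, the Flora side reads the two flex sensors and writes them out as one comma-separated line per update at 9600 baud, which is what serialEvent() in the Processing sketch expects. A minimal stand-in version looks like this (the analog pin choices are assumptions, not our exact wiring):

// Minimal stand-in for the glove's Flora sketch: read the two flex sensors and
// send them as a single comma-separated line per update at 9600 baud, which is
// what serialEvent() in the Processing sketch expects. Pin choices are assumptions.
const int FLEX_1 = A0;
const int FLEX_2 = A1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int bend1 = analogRead(FLEX_1);
  int bend2 = analogRead(FLEX_2);

  // "value1,value2\n" -- matches split(myString, ',') on the Processing side
  Serial.print(bend1);
  Serial.print(',');
  Serial.println(bend2);

  delay(20);   // roughly 50 updates per second
}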

Here is our Processing code as modified from Keith Peters. 

import processing.serial.*;
Serial myPort;

//DECLARE
Arm[] arrayArm=new Arm[8];
Arm myArm;

int numSegments = 50;

float[] x = new float[numSegments];
float[] y = new float[numSegments];
float[] angle = new float[numSegments];

float segLength = 12;
float targetX, targetY;

float xpos;
float ypos;
float xa;
float ya;

float armpos;
float armA;

 

void setup() {

  // List the available serial ports and open the one the Flora is on
  // (it showed up as /dev/tty.usbmodem1411 on our machine).
  println(Serial.list());
  String portName = Serial.list()[0];
  myPort = new Serial(this, portName, 9600);
  myPort.bufferUntil('\n');

  //noCursor(); 
  size(1440, 900);

  //INITIALIZE
  myArm = new Arm();

  for (int t=0; t<arrayArm.length; t++) {
    arrayArm[t]=new Arm();
  }
}

void draw() {
  background(0, 180, 195);

  fill(255, 120, 90);


  //ellipse(width*3/5, height*2/5, 220 + 10*sin(millis()/100), 220 + 10*cos(millis()/100));

 

  // Draw the pulsing body and eight arms, rotated around a common center.
  for (int t=0; t<arrayArm.length; t++) {
    pushMatrix();
    translate(width*4/6, height*2/5);
    // Body ellipse drawn at the translated origin (0, 0), pulsing over time.
    ellipse(0, 0, 100 + 10*sin(millis()/100), 100 + 10*cos(millis()/100));

    rotate(PI*t/13);


    arrayArm[t].display();
    for (int i=0; i<x.length; i++) {
      arrayArm[t].segment(x[i], y[i], angle[i], (i+1)*2);
    }
    popMatrix();
  }


  myArm.reachSegment(0, xpos, ypos);


  for (int i=1; i<numSegments; i++) {
    myArm.reachSegment(i, targetX, targetY);
  }
  for (int i=x.length-1; i>=1; i--) {
    myArm.positionSegment(i, i-1);
  }
}


void serialEvent(Serial myPort) {
  String myString = myPort.readStringUntil('\n');
  if (myString != null) {
    myString = trim(myString);
    int sensors[] = int(split(myString, ','));

    if (sensors.length > 1) {
      // Map the two flex-sensor readings to the arm's reach target.
      xpos = map(sensors[0], 200, 988, PI/9, PI);
      ypos = map(sensors[1], 190, 467, 293, 971);
      //armA = map(sensors[0], 115, 142, 0, height);
    }

    // Debug: print the raw sensor values.
    for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
      print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
    }
    println();
  }
}

class Arm {

  // (Earlier experiment with per-arm sensor values, kept for reference)
  //int sensorVal;
  //int getSensorVal(){ return this.sensorVal; }
  //void setSensorVal(int sensorVal){ this.sensorVal = sensorVal; }
  //
  //for (int i = 0; i < mySensors.length; i++){
  //  myArmArray[i].setSensorVal(mySensors[i]);
  //}

  //CONSTRUCTOR
  Arm() {
  }

  //FUNCTIONS

  void pulseBody() {
    fill(255, 120, 90);
    //need to add accelerometer sensor values to body pulsing
  }

  void display() {
    strokeWeight(20.0);
    stroke(255, 120, 90);

    // Set the base (x, y) to zero; translate()/rotate() in draw() position the arm.
    x[x.length-1] = 0;   // Set base x-coordinate
    y[y.length-1] = 0;   // Set base y-coordinate

    // x[x.length-1] = width/3;   // Set base x-coordinate
    // y[x.length-1] = height/2;  // Set base y-coordinate
  }

  void positionSegment(int a, int b) {
    x[b] = x[a] + cos(angle[a]) * segLength + xpos/2.0;
    y[b] = y[a] + sin(angle[a]) * segLength - xpos/4.0;
  }

  void reachSegment(int i, float xin, float yin) {
    float dx = xin - x[i];
    float dy = yin - y[i];
    angle[i] = atan2(dy, dx);
    targetX = xin - cos(angle[i]) * segLength;
    targetY = yin - sin(angle[i]) * segLength;
  }

  void segment(float x, float y, float a, float sw) {
    strokeWeight(sw);
    pushMatrix();
    translate(x, y);
    rotate(a);
    line(0, 0, segLength, 0);
    popMatrix();
  }
}

 

Creative Tone & Servo Lab/ Analog Inputs, Digital Outputs

For my creative expansion on the servo and tone labs, I chose to combine the two.  I used an analog photoresistor input to map the tone output for the piezo element standing in for a speaker.

Next I wired the servo to digital pin 3 and mapped the servo angle to a potentiometer's analog input on analog pin 5.

Then I added a fan-blade-like, light-blocking "flag" to the servo's horn and positioned it over the photoresistor, so the servo modulates the photoresistor/piezo circuit physically, without needing to go back through the Arduino.

servotonelab
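Putting the pieces together, a rough version of the sketch looks like the following. Only the servo on digital pin 3 and the potentiometer on analog pin 5 match my actual wiring; the photoresistor and piezo pins and the mapping ranges are assumptions.

// Rough sketch of the combined lab: the photoresistor sets the piezo's pitch,
// and the potentiometer on analog pin 5 sets the servo angle on digital pin 3.
// Photoresistor/piezo pins and the mapping ranges are assumptions.
#include <Servo.h>

const int PHOTO_PIN = A0;   // photoresistor voltage divider (assumed pin)
const int PIEZO_PIN = 8;    // piezo element standing in for a speaker (assumed pin)
const int POT_PIN   = A5;   // potentiometer, as wired in the lab
const int SERVO_PIN = 3;    // servo signal, as wired in the lab

Servo servo;

void setup() {
  pinMode(PIEZO_PIN, OUTPUT);
  servo.attach(SERVO_PIN);
}

void loop() {
  // Map ambient light to pitch. The servo's "flag" sweeping over the photoresistor
  // changes the light physically, so the pitch wobbles without any code linking the two.
  int light = analogRead(PHOTO_PIN);
  tone(PIEZO_PIN, map(light, 0, 1023, 120, 1500), 20);   // assumed 120-1500 Hz range

  // Map the potentiometer to the servo angle.
  servo.write(map(analogRead(POT_PIN), 0, 1023, 0, 179));

  delay(20);
}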


Labs: Switches & Setting Up a First Breadboard Circuit

Today we created a breadboard circuit closed via a pushbutton switch, with power fed over USB.  I used a photoresistor to create a "normally on" switch which disconnected the circuit when the light was completely blocked.  I also decided to experiment with a 9V battery and a voltage regulator as a power supply instead of the USB power we used in class.


Public Interactivity Observation

I visited the W4 street station's interactive touchscreen MTA kiosk. 

Overall I think it's a useful tool: riders can check their route and directions, get detailed walking directions and a layout of the destination neighborhood, and catch arrival updates for the various train lines.

I observed about 7 people using the kiosk - not as many people as I would have at first thought.

When I tried it out myself I realized a couple of things:

Not many people were using it, probably because they already know where they are going, especially since peak tourist season is ebbing.  Regular commuters don't need that level of detail.

The most useful and desired function, in my mind after riding the MTA for six years, is the train arrival timetable.  It really relieves tension, and everyone I know loves it when the red LED tickers are installed and working in one of their usual stations. However, this info kiosk only intermittently displayed that information, and the user was unable to call it up voluntarily, which defeats a large part of the overall objective.  Annoyingly, more screen time was devoted to advertising.  A solution would be to show the arrival timetable on one side of the kiosk at all times, paired with a brief full-screen or constant banner ad; that way everyone on the platform could have access to the timetable while someone was using the other side for directions.

Other concerns were of a sanitary and privacy/safety nature.  The kiosk is a giant touchscreen and so requires constant tapping.  No one really relishes touching any surface down in the subway stations or inside the cars, especially not something that strangers are also constantly touching with their fingers and who-knows-what-else.  Reducing the amount of tapping required and improving the efficiency of the flow is one possible solution. Maybe it's zany and impractical, but an antimicrobial UV light, mist, or wiper could also help.

The privacy concern occurred to me after I left the platform on my own train home at night.  I had seen a woman use the kiosk while a couple of people waited behind her or simply watched. What if a vulnerable tourist or other passenger tapped through the different levels of detail for his or her exact route home while a predator waited and looked on? The predator would then know exactly where their prey was planning to go, possibly all the way to the street they were planning on ending up on, and then on to their hotel or home. I noticed that users have to 'tap out' of their directions to erase this evidence.

IMG_5638.jpg

With regard to this safety issue, an improvement could be one of the angle-of-view-narrowing microlouver films, or even a function wherein the existing recessed camera at the top of the kiosk senses when a user has moved away from the screen and automatically refreshes to the 'home' screen.


What Is Interactivity?

As Chris Crawford instructs, interactivity is an ongoing conversation between at least two actors.  In a semi-metaphorical breakdown he uses speaking, thinking, and responding as three basic steps exchanged in a dynamic, continuing way.  In more concrete terms, these stand for Actor A's output, which becomes input to Actor B, who processes it and generates output; at the interface between the actors this becomes Actor A's input, which is processed to produce a new output, and the cycle repeats.  In the current sense this involves digital and analog sensors and processors, which can interface either with their like or with an organic system, such as a vertebrate's CNS.

Playtime with my cat, calling a friend on the phone, or playing a computer at chess are all interactivity, with wildly varying degrees of physical engagement.  As Crawford remarks, a tree branch falling and your reaction to it are not interactive, unless you lived in the Wizard of Oz universe where you could have a retaliatory apple-throwing fight.  It's a simple and good point; however, any interactive experience with an actual tree would be on a timescale that Crawford might find unsuitable, going by his reference to the first commercial computers, which took hours to give a perceptible response.  Plants have recently broached the scientific conversation as active, aware organisms capable of interacting with each other in decent time (e.g. potent interplant and interspecies pheromones, or an auditory/vibratory response system to predators like aphids); it's more about what other kinds of inputs we can detect, and in turn use to build corresponding sensor interfaces.  To speak to Crawford's other point, it's true that a traffic-schematic children's rug or a fridge-door light sensor is not interactive, since it's a monosyllabic exchange which ends, never changes, and is only ever initiated by a human actor.

The physical aspect of interactivity is vital when considering the mind and body as a cohesive entity, something Western culture, crippled and maladapted to reality as it is, often ignores.  The degrees of interactivity Crawford and Victor each outline essentially relate to the input sensor resolution and to the depth, breadth, and dynamism of the processing, adapted to and exemplified by the natural evolution of the animal body (humans) as a basis for the logic and processes of interactivity. Interactive technology is perhaps best seen as a glove that fits the mind and body as a unified hand.  A human tongue has some millions of taste buds, a complex and dynamic set of sensors.  Hands, nose, eyes, and other nerve termini are sensors and output receptors.  To create the ideal interactive physical experience you might create a system, an artificial "actor," with a high enough resolution of sensors to keep up with the conversation with the entire organic/animal/human actor, a complete organism as it were.

Looking at a continuum of possible physical interactivity, Bret Victor seems to say that the plethora of screens we pour ourselves into is basically the antithesis of physical interactivity, and that physical interactivity is the point: our bodies are an extension of our brains, and there is much more left to imagine and actuate. Victor's 'Rant' on the current fad of screen swiping as the main mode of interactivity is spot on, as well as disturbing (e.g. "finger-blindness"), and especially relevant as the heavily brand-incorporated Apple Watch presentation video was unveiled yesterday. On the one hand, it may pull the consumer populace farther from physical interactivity with their world and into a myopia of wrist-bound tiny screens, creating widespread stilted, narrow skeletal and social positioning. On the other hand, as it is motion-oriented and equipped with various built-in sensors, it seems like a rich tool for developers, makers, and everyday consumers to build upon in expanding the interactive sphere.

My understanding from life, from my narrow core sample of human experience, knowledge, and thought before me, and most recently as supplemented by Crawford's and Victor's writings, tells me that physical interactivity has the potential to bring body and mind together, and to bring this body-mind entity into closer contact with the reality of the planet and the human and animal systems around us: a compensation for recent millennia of 'progress' which has brought us so far only to start from the beginning again.  Physical interactivity could be viewed as a rejection of, workaround for, or heavy supplement to the mostly verbal interface which acts as a barrier between us and the truth of concrete reality, consciousness, mindfulness, balance, and pure experience (attribution: science fiction, the ITP atmosphere, the zeitgeist, yoga, my brain, Crawford, Victor, et al.).   Good physical interactivity, given that our body is a collective of physical termini, can help us learn (play is learning), communicate, build community, explore, express, empathize, and problem-solve, because it feeds our brain-body with rich, dynamic information and in turn receives from the brain-body as much as it can handle, process, and act on.  Let us be in touch with the planar surface of perceptonium which we share with all self-aware matter.


XP DUB: Experience Duplicator

A spatio-temporal experiential replicator: User A moves through life in "broadcast" or "pick-up" mode, allowing the wearable brain-computer interface to assemble a geo-tagged set of discrete signals composed of input from various brain areas.

Emotions, physical sensations, and sentiment are transmitted to User B's wearable when she happens upon the scene of an experience User A chose to broadcast at a previous time in the same location.

Example scenarios:  

A waterfall - User A has broadcast feelings of pleasure and the sensation of water droplets on his skin.  When User B visits the same waterfall a month later she can access the index of experiential data from User A's broadcast.  

Or perhaps User A experiences a rancid candy at an established chocolatier while touring Paris. When User B visits the same shop the next day, she can choose to receive the experiential 'review' before going in or making a purchase.

The idea for the XP Dub developed as the result of a discussion touching on time travel.  This is a work-around of sorts as the directness of the brain-computer interface plays with individual experience across space and time. 

photo 2.JPG



Collaborators: Renata Kuba and Hugo Luce.