Sunday, May 27, 2007

Presentation Screen



Here are a few photos of the pseudo wall for tomorrow; I hope all is going well with everyone else. I have spoken to Sandra and she is working on the poster.

Wednesday, May 23, 2007

Decisions

We decided at our last group meeting to stick with our original breakdown of the workload, leaving each person responsible for covering their own area.

The jobs for this final week are:
Sandra is doing the poster (and we are sending her relevant information to include, over and above what she has).
David is coding and adding the final animations.
Kathryn is finishing the finer points of the animation.
Petra is building the framework to display our presentation.

Update

This is an update on our progress, or lack of it, over the past weeks. There are limitations with our chosen technology, and we have been trying to overcome them. Our initial trials allowed us to activate our Director projector using left mouse clicks, which were interpreted by our interactive animation (scripted into the program). The technologies appeared to be interacting with each other.

David told us that the hotkeys in the underpinning technology were not being activated by Flash. As I did not understand this comment, I spent time trying different scripts in Director, since I do not know Flash. Hotkeys were added to the original trials for interactivity; these worked with manual key presses but would not be interpreted by Director as hotkeys, the same result David got with Flash!! (At least now I understood David's comments.)

So at the end of the day we have limited functionality between the two programs:
- run program (allows us to activate our program)
- left mouse clicks (per motion detection within a zone)

Functionality that could be superimposed, but does not actually work between the programs:
- hotkey functionality (keyboard input or sensor pads)
- sound output (from the motion detection program)

Ideas that were discussed:
- using 2 monitors (this could be integrated, but the idea was discarded)

Thursday, April 26, 2007

Light Sensors

Light Sensors

A light sensor measures the amount of light that it sees. It reports the amount of light to the RCX as a number between 0 (total darkness) and 100 (very bright). From: http://www-education.rec.ri.cmu.edu/multimedia/lightsensor.shtml
Light Sensor - Collecting Light Data and Sending it to a PC
A light sensor tutorial : http://www.iguanalabs.com/light.htm
Controlling the Hardware: Sensors - http://lejos.sourceforge.net/tutorial/essential/hardware/sensors.html

Light Beams - http://www.bircheramerica.com/automatic-door/beam-sensors.htm
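The 0-100 reading described above maps naturally onto a simple beam-break detector: sample the sensor and treat a sharp drop below a threshold as an interruption, just as the light-beam door sensors linked above do. A minimal sketch in Python (the readings and the threshold of 40 are illustrative assumptions, not values from any of these sensors):

```python
def beam_broken(reading, threshold=40):
    """Return True when a 0-100 light-sensor reading falls below threshold."""
    if not 0 <= reading <= 100:
        raise ValueError("reading must be on the 0-100 scale")
    return reading < threshold

# Simulated readings: bright room, then a hand blocks the beam, then bright again.
readings = [85, 83, 84, 12, 10, 80]
events = [beam_broken(r) for r in readings]
```

On real hardware the readings list would be replaced by polling the sensor in a loop.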




Just something I found ... more to come soon.

Tuesday, April 24, 2007

Things Working

Quick note: the webcam software is working, detecting two hotspots which activate separate interactions while running an interactive animation. In theory this should allow for multiple interactions.

Tuesday, April 17, 2007

processing.org

Lango pointed me in the direction of this site and it is really interesting. It allows you to produce small applications; I have only tried the examples at this point in time. It is really easy and worth a look. (PS: I downloaded it to a Mac.)

I am going to spend a couple of hours trying it out and making short notes. What I am trying to achieve is:
- get the camera recognised by the program
- capture one frame from the camera
- have the program identify the pixels of that frame
- capture a second frame
- have the application differentiate between the 2 frames
- set a ratio of difference between the 2 frames
- if the ratio is above the threshold, open an image

This should let me see visually whether it is working.
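The steps above can be tried without a camera by treating each frame as a flat list of grayscale pixel values and replacing "open an image" with a simple trigger. This is an illustrative Python sketch of the frame-differencing idea, not the eventual Processing code; the tolerance and ratio values are assumptions to be tuned:

```python
def motion_detected(frame_a, frame_b, pixel_tolerance=10, ratio_threshold=0.2):
    """Count pixels that differ by more than pixel_tolerance between two
    grayscale frames; report motion if the changed fraction exceeds the ratio."""
    changed = sum(1 for a, b in zip(frame_a, frame_b) if abs(a - b) > pixel_tolerance)
    return changed / len(frame_a) > ratio_threshold

still = [100] * 100                 # frame 1: uniform grey
moved = [100] * 70 + [200] * 30     # frame 2: 30% of pixels changed
```

With these frames, `motion_detected(still, moved)` trips the trigger (30% changed is above the 20% threshold) while two identical frames do not.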

Quick notes on Processing as I read through:
- open source
- tested mostly on Mac and Windows platforms
- Processing code is converted to Java when "run" is enabled.


Exporting:
Can be exported as a Java applet to run on the web; can also be exported as an application to run on Windows, Linux and Mac.

- If it is exported as an applet, it can be run on the web. While I don't want to use this on the web, the fact that it exports with an index page makes me think I can modify the code further on, as I am comfortable with HTML. (An alternative to using Processing is Flash, which I don't know either; David wants to use Flash and as he is our coder, all is good.) I just want to know how to do it, and since I am allocated to research and documentation, this lets us all see alternative routes we could take.

- Any changes to the index are lost when a sketch is exported, SO
- copy the applet HTML file and libraries exported from Processing to the root of the sketch folder.

- Applets are easier than applications, hence guess what I will trial.
- Applets have security restrictions built in: they can only connect to the computer running the program unless the applet is "signed". To find out more on this I will need to go to Sun's documentation (will do this later).
- The Escape key is passed through to the applet, so it is necessary to stop people pressing it.


A Cartesian coordinate system is used, originating from the upper left corner, i.e. x and y values are measured from the top left. Processing also allows for 3D drawing: the z position 0 is at the centre of the screen, and negative z values move an object backwards, away from the viewer.
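To keep the two conventions straight, here is a tiny Python sketch (illustrative only, not Processing code) converting a top-left-origin 2D coordinate into a centre-origin one:

```python
def to_centered(x, y, width, height):
    """Convert a top-left-origin coordinate (y grows downward, as in
    Processing) into a centre-origin coordinate (y grows upward)."""
    return x - width / 2, height / 2 - y

# On a 400x300 canvas, the top-left corner (0, 0) becomes (-200, 150)
# and the canvas centre (200, 150) becomes (0, 0).
```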

3 levels of programming modes:
Basic (beginners), Continuous (looping), and Java (full programs).

http://processing.org/exhibition/index.html
for images

MOVING OVER TO

http://webcamxtra.sourceforge.net/

COMPUTER VISION FOR ARTISTS
Shadow Monster
was the first example that I opened; it uses hands to create monsters on a screen. Really cool, but it takes a while to download!!
by Phil Worthington at the Royal College of Art
http://www.worthersoriginal.com/viki/#page=shadowmonsters


The Legible City by Jeffrey Shaw (text by Dirk Groeneveld)
Manhattan version (1989), Amsterdam version (1990), Karlsruhe version (1991)
This allows a visitor to ride a bike through a representation of a city.
This site allows you to watch high or low resolution videos.

http://www.jeffrey-shaw.net/html_main/show_work.php3?record_id=83


This is a really informative site, if only I had downloaded the webcam extra for Director!!!
http://sweb.cityu.edu.hk/sm3117/index.htm


OK, this is where my fun finished!! I downloaded JMyron for video capture etc. and I cannot find a way of copying it into the Processing folders.
Hmm... hoping that someone out there is reading this and can help.
(PS: this is a few hours' work.)

Sunday, April 15, 2007

Video Camera Tracking

Visual Intelligence: How We Create What We See
http://www.socsci.uci.edu/cogsci/personnel/hoffman/vi.html
Donald D. Hoffman
W. W. Norton & Company, Inc.
released October 1998

This book covers how we perceive visually.

___________________________________
http://www.tigoe.net/pcomp/videoTrack.shtml

There are two methods you'll commonly find in video tracking software: the zone approach and the blob approach. Software such as softVNS or Eric Singer's Cyclops or cv.jit (a plugin for Jitter that affords video tracking) take the zone approach. They map the video image into zones, and give you information about the amount of change in each zone from frame to frame. This is useful if your camera is in a fixed location, and you want fixed zones that trigger activity. Eric has a good example on his site in which he uses Cyclops to play virtual drums. The zone approach makes it difficult to track objects across an image, however. TrackThemColors and Myron are examples of the blob approach, in that they return information about unique blobs within the image, making it easier to track an object moving across an image.
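The zone approach described above can be sketched in a few lines: divide the frame into fixed regions and report how much changed in each, frame to frame. This Python toy (illustrative only; not how softVNS or Cyclops is implemented internally) splits a flat grayscale frame into vertical zones:

```python
def zone_changes(prev, curr, width, zones, tolerance=10):
    """Split a flat grayscale frame of the given width into equal vertical
    zones and count the changed pixels in each zone, frame to frame."""
    counts = [0] * zones
    for i, (a, b) in enumerate(zip(prev, curr)):
        if abs(a - b) > tolerance:
            column = i % width              # x position of this pixel
            counts[column * zones // width] += 1
    return counts

# An 8-pixel-wide strip split into 2 zones: only the right half changed.
activity = zone_changes([0] * 8, [0, 0, 0, 0, 50, 50, 50, 50], 8, 2)
```

Each zone's count could then drive its own trigger, which is exactly why the approach suits fixed hotspots but not tracking one object across the image.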

At the most basic level, a computer can tell you a pixel's position, and its color (if you are using a color camera). From those facts, other information can be determined:

One simple way of getting consistent tracking is to reduce the amount of information the computer has to track. For example, if the camera is equipped with an infrared filter, it will see only infrared light. This is very useful, since incandescent sources (lightbulbs with filaments) give off infrared, whereas fluorescent sources don't. Furthermore, the human body doesn't give off infrared light either. This is also useful for tracking in front of a projection, since the image from most LCD projectors contains no infrared light.

When considering where to position the camera, consider what information you want to track. For example, if you want to track a viewer's motion in two dimensions across a floor, then positioning a camera in front of the viewer may not be the best choice. Consider ways of positioning the camera overhead, or underneath the viewer.

Often it is useful to put the tracking camera behind the projection surface, and use a translucent screen, and track what changes on the surface of the screen. This way, the viewer can "draw" with light or darkness on the screen.
___________________________________________________
http://itp.nyu.edu/~dbo3/cgi-bin/wiki.cgi?ProcVid

code samples for motion tracking using java
Not sure if this is exactly what we are looking for, but it is a start and gives an idea of it.

Research "How stuff works"

Ok, some research, not sure if it is of use to us, but I found it interesting and thought I would give you all the opportunity to see what I have been doing.

Different ways to create motion sensors:

LIGHT SENSORS
A beam of light crosses a space and is detected by a photosensor, which rings a bell if the beam is interrupted.

RADAR
http://www.howstuffworks.com/radar.htm
Echo and Doppler Shift



An echo is created by sound bouncing off a surface and returning in the direction it came from. The time the echo takes to return depends on the distance of the surface creating it.




Doppler shift is the difference in the pitch of a sound as its source approaches you and after it has passed you, e.g. a car or even a plane.
You can combine echo and Doppler shift, with the echo bouncing off an approaching object, to create motion sensors. The echo of a sound determines how far away something is, and the Doppler shift of the echo determines how fast it is moving.
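Both relationships are simple enough to put in a couple of formulas. A Python sketch (the speed of sound in air is about 343 m/s at 20 °C; the 440 Hz example tone is just an illustration):

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees C

def echo_distance(round_trip_seconds):
    """Distance to the reflecting surface: the sound travels out and back,
    so halve the round trip."""
    return SPEED_OF_SOUND * round_trip_seconds / 2

def doppler_frequency(source_hz, approach_speed):
    """Frequency heard from a source approaching at approach_speed m/s
    (negative speed = receding): higher pitch on approach, lower after."""
    return source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - approach_speed)
```

So an echo that takes 2 seconds to return puts the object 343 metres away, and a tone rises in pitch while its source approaches and drops once it recedes.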



Ultrasound
Ultrasound is used instead of audible sound so as not to disrupt everyday uses of sound (it is also often used in medical procedures). Sonar is "sound radar", but sound does not travel very far and everyone can hear it; submarines use sonar.
Radar uses radio waves instead of sound, for things like police radar and satellite mapping. Radio waves travel a long way, are invisible to humans, and are easy to detect even when faint. Radio waves also transmit data invisibly, which is the basis of wireless technologies.
http://computer.howstuffworks.com/question238.htm


About radio-controlled toys:
Basic principles:
The transmitter sends radio waves to the receiver (it sends a signal over a frequency, using a power source such as a 9-volt battery for power and transmission). Basic consumer items such as RC toys and garage door openers use 27 MHz or 49 MHz; more sophisticated RC model planes use 72 or 75 MHz. The idea is that two can be operated at the same time without interference between the two transmitters.
The receiver relies on an antenna and circuit board; it receives signals from the transmitter and activates motors.
Motors can cause wheels to turn, lights to flash, etc.
Power source: e.g. batteries.
http://www.howstuffworks.com/rc-toy.htm
Remote-control toys have a wire connecting the controller and the toy; radio control is always wireless.



Note on Radio Frequencies:
A radio wave is an electromagnetic wave propagated by an antenna. Radio waves have different frequencies, and by tuning a receiver to a specific frequency you can pick up specific signals.
http://electronics

Tuesday, April 10, 2007

Visit to Ipswich Art Gallery

This post can also be found on my own blog, but I thought I would also put it on our collaborative space.
http://comp3000petra.blogspot.com/

Tuesday April 10, 2007

Today David and I met up at the Ipswich Art Gallery to look at the interactive Exhibition called “EXPERIMENTA VANISHING POINT”.

Spotter
– Hiraki Sawa, Japan, 2002 (represented by Ota Fine Arts)

This exhibit shows people observing planes, which are commonplace in today's society, but here they are presented as wild creatures trapped in a domestic space.
The technology behind this appears to be video overlays, with the size of the planes manipulated to fit within domestic spaces.

1 Parking
111 Crossing
- June Bum Park, Korea, 2002

This exhibit has two projected images on the wall. The display shows our everyday world of cars being manipulated by a person's hands in a car park. The hands carefully place the cars and the people walking in the space.
The technology again appears to be video of a car park taken from a high position, so that we are looking down on it, with the hands cleverly filmed as if they are positioning the cars.
What also held my interest is the way this display used built boxes for the projectors that also allow a computer tower to be hidden inside. The power cord was taped to the floor and along the skirting; different coloured tape was used and was very unobtrusive. The box was fully enclosed, with an air vent at the back for the projector's heat to escape.

The Shy Picture
- David Maclend and Narinda Reeders, Australia, 2005

The display is set with a screen within a frame set on a false wall set out from the real wall, allowing for the computer to be hidden. Mounted above is a video camera, which acts as a motion sensor. When someone comes within range the people in the picture run away and hide. They pop their heads out to see if you are still there.
The technologies here appeared to be LCD screen, computer, video camera, and customised software, and a video that loops.

Journey to the Moon
- William Kentridge, South Africa, 2003

William Kentridge's hand-drawn charcoal artwork is combined with video by the experimental filmmaker Georges Méliès to provide an eerie film to experience.
The technology integrates drawings, which have been animated, with video and post-production techniques to produce the work, which was projected onto a wall.

Waterfall
- Duk-euin Ji-Hoon Byun, Korea, 2003

A waterfall of light particles is activated by standing in front of the projection; your shadow makes the particles fall around it, and moving your arms or body changes the particles.
The technology relies on tracking the shadow and the movement within the projected image.

Front Porch
- William Wegman, USA, 1999

This exhibit is of a dog's head on a man's body, sitting on the front porch reading a newspaper. The dog appears to be more interested in looking around than actually reading the paper. A well-produced idea.


Another exhibit displayed several different concepts on the one screen, following each other in succession. They were:

“Some want it all”, which was of a sparkler running along a black wall that appeared to disappear into a person’s ear, come out of the other, and continue on.

“15 Excavator”, which is video of an excavator being manipulated and positioned by hands, similar to a previous exhibit.

“Dog Duet” is also an exhibit by William Wegman (1974): two very well trained dogs watch something moving out of the range of the camera, intensely following the movement.

“Elevator No. 4” is a video that portrays an elevator that opens like a zip, and the people entering or in the elevator are distorted and pixelated. An interesting exhibit.

“Line Up” is a purely textual exhibit in which the size and speed of the text are consistent with the emotion being written about.

Tools Life, Minim ++, Japan, 2001
This exhibit is of a table on which various household items stand upright; if you touch one, various projected images appear on the table.
The technologies involved with this project appear to be touch sensors, projection and customised code for the interaction. This exhibit also allowed for multiple participants to use the space at one time.

Paraphysical Man, Shaun Gladwell, Australia
This exhibit was of a break-dancer against a wall. His reflection was above him, but it appears that he is upright.
The technologies involved with this are video that has been manipulated and projected upside down to give the illusion of him floating in the air.

Train No 8
This one I am really not sure how they achieved the effect. I stood and watched it for quite a while to try to work out how many layers of video had been used; the timing of the different layers was warped, each moving at a different speed. I realised that the foreground was probably shot from a train, but that is about as far as I could understand what I was watching. Obviously a post-production video editing program has been used, but I will need to ask others how it was done.

House 11, the Great Australian Basin, Pennsylvania, USA, 2003
This exhibit was fascinating, as both David and I were looking to see where the video looped and were unable to pick it. We watched for quite a while before the video finished and restarted, so there was no loop to find after all. A really interesting concept.

If I have misspelt anyone’s name or any details are not exact, I apologise sincerely; please leave a comment and I will correct my mistake. I am having problems reading my notes.

This was a great exhibit and it was really interesting to see how much of our Studio work is aligned with the work being exhibited at the Experimenta Vanishing Point Exhibition.