Thursday, August 13, 2009

Just because everyone deserves a clean screen

Click here to clean your screen

Tuesday, August 11, 2009

Holograms and Tangibility

The University of Tokyo has brought a scene from the movie Minority Report even closer to reality. Here's a link to the innovative technology, which Professor Schull sent my way. Enjoy!

Insignificant significants

Tiny little details that might help you:

If your machine runs Windows, you will need the WinVDIG software to get Processing working with the Capture class. It is essentially a QuickTime plugin that gets your USB camera functioning properly; on a Mac, QuickTime does all the work for you. I've had trouble with WinVDIG versions higher than 1.0.1, but you are most welcome to experiment.

Another glitch I encountered was finding the right cameras. Processing kept crashing on me, and I had almost given up hope. Try to stay away from using two of the exact same camera, meaning don't use two identical models for the XY and YZ planes. I was using two Creative LiveCam Pros, but Processing couldn't distinguish one from the other on the capture stack, so I resorted to pairing a Creative LiveCam Pro with a Creative PCCam. I haven't had any luck mixing brands either; a Creative and a Logitech crashed Processing for me on several occasions. The current pairing works best for me, but it may be up to you to experiment and find out what works best.
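If you want to see exactly which devices Processing can find, something along these lines may help. It's only a rough sketch assuming the standard processing.video library; the exact Capture constructor arguments (and whether you also need to call start() on each camera) vary between versions of the library, and the device names below are just placeholders for whatever shows up in your own list.

import processing.video.*;

Capture camXY, camYZ;   // one camera per plane

void setup() {
  size(640, 240);
  // Print every capture device Processing can see, so you can pick two
  // entries that show up with distinct names on the capture stack.
  println(Capture.list());

  // Placeholder device names; replace them with two entries from the list above.
  camXY = new Capture(this, 320, 240, "Creative LiveCam Pro", 30);
  camYZ = new Capture(this, 320, 240, "Creative PCCam", 30);
}

void draw() {
  if (camXY.available()) camXY.read();
  if (camYZ.available()) camYZ.read();
  image(camXY, 0, 0);    // quick sanity check that the right device ended up on each plane
  image(camYZ, 320, 0);
}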

New Hit Test

I have added a new hit test that will help users tell the difference between size and shape, although it takes only two shapes into consideration (a cube and a sphere).

Here's how the new function works:

The new function gets passed the following parameters:
  • Type of object
  • Object's world coordinates
  • Object's size (depth/width/height for cubes and radius for spheres)
  • My camera position

The function then takes the camera position and determines if it is within the given object in the world.

First it looks at the type of object. If it is a cube, it takes the cube's world coordinates (x, y, z), which we shall call vector V1, and adds the width, height and depth to them; the result is another vector of the form (x, y, z), which we shall call V2. As long as my camera position, which is also a vector of the form (x, y, z) and which we shall call V0, falls between V1 and V2, we are hitting the cube.
If V0 <= V2 and V0 >= V1 (checked component by component), we are within that cube, so return true and send the serial trigger.
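In code, that comparison is just three range checks, one per axis. Here's a rough sketch using Processing's PVector class; the function and variable names (insideCube, camPos, objPos and so on) are placeholders of mine, and it follows the logic above in treating the cube's world coordinates as one corner of the cube.

// Returns true when the camera position lies inside an axis-aligned cube
// whose corner sits at objPos and whose sides are w, h and d long.
boolean insideCube(PVector camPos, PVector objPos, float w, float h, float d) {
  // V1 is the cube's corner (objPos); V2 is the opposite corner.
  PVector v2 = new PVector(objPos.x + w, objPos.y + h, objPos.z + d);

  return camPos.x >= objPos.x && camPos.x <= v2.x   // V1 <= V0 <= V2, one axis at a time
      && camPos.y >= objPos.y && camPos.y <= v2.y
      && camPos.z >= objPos.z && camPos.z <= v2.z;
}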



If the object type is a sphere, we still set V0 to the camera's coordinates. V1 would be set to the sphere's world coordinates, which is the center of the sphere. Then we use the formula

V2 = (4/3) * pi * (radius^3)


Now, taking into account that these are vectors, meaning they have both a magnitude and a direction, we test to see if our position (V0) falls within +/- V2, so we have a test somewhat like this:

If V0 <= (1)*V2 and V0 >= (-1)*V2, we are within the sphere's volume, so return true and send the serial trigger.
This part of the function is still undergoing testing on my part; I shall keep you updated on how it works, but I hope this provides some insight.

The above-mentioned logic was flawed, and I do apologize; a revised hit test function for detecting a sphere has been implemented.

To do this, you have to use Processing's PVector class. Create two vectors, namely V1 and V2; set V1 to the sphere's center, which is given by the sphere's coordinates, and set V2 to your camera position. PVector.dist() is a method of the vector class that computes the Euclidean distance between two points. This is exactly what you need, so the syntax would be similar to:

float myDist = V2.dist(V1);   // Euclidean distance between the camera and the sphere's center
if (myDist <= myRadius) {     // in this case, the radius of your sphere
  // you are within the sphere: trigger the serial port here
}
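Putting the cube check and the corrected sphere test together, the whole function can look something like the sketch below. It mirrors the parameter list from earlier in this post; the TYPE_CUBE / TYPE_SPHERE constants and the idea of packing the size into a PVector (width/height/depth for a cube, just the x component as the radius for a sphere) are assumptions of mine, not necessarily how the final code is laid out.

final int TYPE_CUBE = 0;    // placeholder object-type constants
final int TYPE_SPHERE = 1;

// type    - what kind of primitive we are testing against
// objPos  - the object's world coordinates (a corner for a cube, the center for a sphere)
// objSize - width/height/depth for a cube; for a sphere only objSize.x is used, as the radius
// camPos  - my camera position in world coordinates
boolean checkHit(int type, PVector objPos, PVector objSize, PVector camPos) {
  if (type == TYPE_CUBE) {
    return camPos.x >= objPos.x && camPos.x <= objPos.x + objSize.x
        && camPos.y >= objPos.y && camPos.y <= objPos.y + objSize.y
        && camPos.z >= objPos.z && camPos.z <= objPos.z + objSize.z;
  } else {
    // inside the sphere if we are closer to its center than the radius
    return camPos.dist(objPos) <= objSize.x;
  }
}

When this returns true, the main loop sends the trigger over the serial port to the Arduino board, exactly as in the cube case.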
Hope this information comes in handy.

Processing / Arduino Pseudo-code

Listed below is the gist of how my code functions. I hope it comes in handy.


The processing aspect:

main
{
    setup variables for
        capture devices XY, YZ    // two camera objects to track the position
        brightest X, Y, Z         // values to track which pixel is the brightest
        myX, myY, myZ             // user's position
        serial port               // for communicating with your Arduino board

    draw primitives                    // place objects in the world
    get world coordinates of objects   // store your objects' world coordinates as opposed to
                                       // coordinates in their own transform matrices
                                       // these will be used in the hit test function
}

loop
{
    start capture devices                 // turn on both cameras and load images into the buffers
    load capture buffer into an array     // must happen twice, once for each capture device
    loop                                  // from the start to the end of the array
    {
        set first pixel as brightest by default
        move to next
        compare each pixel to the previous value and check if it is brighter
        if yes
        {
            set new brightest value       // myX, myY, myZ, depending on the plane
        }
        else
        {
            move to next
        }
    }

    draw primitives                       // objects you want displayed in your world
                                          // you have to do this in the loop to keep refreshing their
                                          // coordinates with respect to your movement

    set camera position to myX, myY, myZ  // Processing's camera() function takes the camera's own
                                          // position (eyeX, eyeY, eyeZ), the point it looks at
                                          // (centerX, centerY, centerZ) and an up vector
                                          // I just add a few pixels to myX, myY, myZ and use that
                                          // as the look-at point, making the camera face forward

    checkHit(myPos, myObjsPos)
    {
        check to see if I am hitting an object
        if yes
        {
            send trigger via serial port to the Arduino board
        }
    }
}
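To make the inner loop above concrete, here is roughly how the brightest pixel of a single capture frame can be found in Processing. It's only a sketch under the assumption of one default camera (the same version caveats about the video library apply); in the real setup the same search runs once per camera, and the resulting coordinates feed myX, myY and myZ.

import processing.video.*;

Capture cam;
int brightX, brightY;   // location of the brightest pixel in the current frame

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);   // default capture device
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();                          // expose the frame as an array of pixels

  float brightest = -1;
  for (int i = 0; i < cam.pixels.length; i++) {
    float b = brightness(cam.pixels[i]);     // 0..255; the IR LED should score highest
    if (b > brightest) {
      brightest = b;
      brightX = i % cam.width;               // convert the array index back into x and y
      brightY = i / cam.width;
    }
  }

  image(cam, 0, 0);
  fill(255, 0, 0);
  noStroke();
  ellipse(brightX, brightY, 8, 8);           // mark the tracked point
}

And this is the kind of camera() call the last part of the loop is describing. In Processing the first three arguments are the eye (the camera's own position) and the next three are the point it looks at; the 50-pixel offset is just an arbitrary value standing in for the "few pixels" mentioned above.

// inside draw(), once myX, myY and myZ have been updated from the cameras:
camera(myX, myY, myZ,        // eye: where the virtual camera sits
       myX, myY, myZ - 50,   // center: a point a little way ahead, so the camera faces forward
       0, 1, 0);             // up vector (Processing's default)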


The arduino aspect:

main
{
    setup variables for
        serial port
        pins for pager motors
}

loop
{
    listen for trigger via serial port
    if yes
    {
        set pin to high    // this causes your glove to vibrate when
                           // the hit test function is triggered
        delay(1000)        // wait for a second
        set pin to low     // turn off the motors again or else they'll keep vibrating
                           // don't worry, if your glove hits something again the motors will come back on
    }
}
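For completeness, the Arduino side of that pseudo-code can be as small as the sketch below. Treat it as a rough outline only: the pin number, the baud rate and the '1' trigger byte are assumptions on my part, not necessarily what the glove actually uses.

// Assumed wiring: one pager motor (or its driver transistor) on pin 9.
const int motorPin = 9;

void setup() {
  pinMode(motorPin, OUTPUT);
  Serial.begin(9600);              // must match the baud rate used on the Processing side
}

void loop() {
  if (Serial.available() > 0) {    // something arrived from Processing
    if (Serial.read() == '1') {    // '1' is the byte I'm assuming as the hit trigger
      digitalWrite(motorPin, HIGH);   // vibrate the glove
      delay(1000);                    // for one second
      digitalWrite(motorPin, LOW);    // then stop, or the motors keep running
    }
  }
}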

Sunday, August 9, 2009

Schematics and Gallery

I shall be adding pictures soon. In the meantime, you may read up on my previous posts to see what I am doing and how.

A More In-Depth Methodology

If you've read the previous post and were wondering what I was talking about, here is a more detailed description.

To start off, my capstone project involves a data glove. For all the non-tech-savvy readers, a data glove is simply a glove-like input device used with virtual reality environments. The whole point of making this data glove is to eliminate the mouse and keyboard as the predominant input devices in the future of computing.

Things you may need to pursue such a project:
  • A glove
  • An interface for communicating between peripherals
  • A method of tracking the user's orientation
  • A method of stimulation (actuators, heating elements, etc.)


For the purpose of my capstone, I am using the Arduino Diecimila microcontroller to communicate between the glove and my computer. I am using pager motors as a stimulating element and infrared light emitting diodes along with two web cameras as a means of tracking the user's position. A simple program is written in Processing.

How it works:
Two web cameras are placed at ninety degrees to each other, both facing the user. If you were the user, picture one camera facing you head-on while the other faces you from your right.

The glove is affixed with tiny motors and infrared (IR) lights. Since IR is not visible to the naked eye, you wouldn't be able to see the LEDs light up; the cameras, however, can see the light and use it as a reference point for tracking your hand movement.

As of now, Processing draws primitives (boxes/spheres/planes) and moves a virtual camera to correspond with your hand. If you collide with an obstacle in this virtual world, a hitTest function is triggered; it talks to the glove via the microcontroller, and the glove starts vibrating.

Test results show that users are able to distinguish the sizes of objects by tactile feedback alone. I am in the process of implementing functionality that will let users distinguish shapes as well.

Wednesday, August 5, 2009

Capstone and Progress

In brief, this blog shall illustrate my progress through my capstone project, in partial fulfillment of my Master of Science degree in Information Technology.

The Vision
As a child playing Contra on my Sega, I wondered what it would be like to be totally immersed in the game. Would it appear to be like an episode straight out of the television show Soldier of Fortune? Pondering these thoughts, I went on living life, always asking myself if there was something more to technology than just pretty colors and sound.

What would it be like if Walt Disney's production of Fantasia had never been released? Would we still be experiencing monophonic sound productions? Fantasia has, in my opinion, given rise to the immersion of an audience in multiple channels of sound output, which we now refer to as surround sound. (If you have a pair of headphones handy, take a look at this 3D holophonic sound immersion.)

By the time I was tackling Contra, stereophonic sound had already been developed, but sound alone was not enough to impress me; I needed more. I had almost forgotten about the whole idea of user immersion in virtual reality until I came across the Logitech G25 steering wheel. It was a dream come true, or so I thought. The wheel featured force feedback unmatched by any of its competition. This was the turning point in my life where I knew this was a field I had to get involved in.

Upon starting school at RIT, I decided to pursue this and came up with the idea of building a device to provide tactile feedback, hopefully immersing an audience entirely in a virtual/augmented reality without the need for any specific input devices (a mouse, keyboard, etc.).

Similar Projects
Various related projects are listed under the "inspirations" tab to the right.

Methodology

The data glove has both a software and a hardware aspect to it. The hardware comprises a glove affixed with tiny pager motors and infrared light emitting diodes (LEDs). They are connected to an Arduino microcontroller, which is hooked up to a computer.

The data glove's software is written in Processing, an open-source programming language and environment based on Java.

Apart from the glove, two web cams are involved as part of the hardware. They sit at right angles to each other, facing the user. The cameras track the glove by means of the infrared LEDs; the software uses the LEDs as a pointer and relates them to a position in a 3D virtual world. Two cameras are required to capture the user's position in all three dimensions.

The virtual world consists of primitives (cubes/spheres), and the camera moves through it in relation to the user's hand movement. If he or she were to encounter an obstacle, the user's hand would feel a vibration. Users are able to distinguish the size of objects by touch alone and, as a work in progress, will be able to get a sense of shape as well.