Wednesday, September 30, 2009

Usability Testing (The Second Iteration) – Results

In order to get a more accurate end result, the same twenty-two testers who went through the first iteration of testing were tested again; the results, however, were not to my liking.

The first test called for the users to repeat the same test they had taken the very first time; I did not change a thing. They viewed the screen while interacting with the glove. 18 out of the 22 said they could distinguish the sizes of different cubes based on touch, which gives an 82% success rate. This number did not change from the first iteration of the test. Like the size test, the shape test did not show a change from the first iteration either: 16 individuals said they could tell a cube and a sphere apart, resulting in a 73% success rate.

I ran the same test again, this time without letting the participants view the screen. The test was one on one, and I talked to the participants throughout. Whenever a tester came in contact with a virtual object, he or she would describe what they were feeling. This is where the test yielded surprisingly bad results: 20 of the 22 testers said they could NOT distinguish the tactile feedback at all. The 2 who could distinguish shapes said they felt cubes but could not tell a difference when they felt spheres. I did not mention which object they were interacting with, but simply asked if they felt anything when they came in contact with something.
The next round called for switching the sensations for a cube and a sphere, i.e. simulating a cube while interacting with a sphere and vice versa. As I had predicted, the results for this test were very similar to those of the first test. The same 16 individuals felt they were able to tell a cube and a sphere apart, yet not one of the testers could tell me that they felt a cube while touching a sphere in the virtual world or vice versa. This resulted in a 0% success rate.
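
In practice, the swap amounted to exchanging which haptic pattern each shape triggers. The sketch below illustrates the idea in Java; the names (Shape, Sensation, SensationMap) are hypothetical and not taken from my actual program.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of a shape-to-sensation lookup that can be
// swapped for the crossed-feedback test. Not my actual code.
enum Shape { CUBE, SPHERE }
enum Sensation { CUBE_PATTERN, SPHERE_PATTERN }

class SensationMap {
    private final Map<Shape, Sensation> map =
        new EnumMap<Shape, Sensation>(Shape.class);

    SensationMap() {
        map.put(Shape.CUBE, Sensation.CUBE_PATTERN);
        map.put(Shape.SPHERE, Sensation.SPHERE_PATTERN);
    }

    // Exchange the two entries: cubes now feel like spheres and vice versa.
    void swap() {
        Sensation cube = map.get(Shape.CUBE);
        map.put(Shape.CUBE, map.get(Shape.SPHERE));
        map.put(Shape.SPHERE, cube);
    }

    Sensation forShape(Shape s) {
        return map.get(s);
    }
}
```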

To evaluate whether the visual aspect of the program was influencing its users, I set up another blind test. Users would interact with a virtual world where there were cubes and spheres that triggered haptic feedback. Apart from the cubes and spheres, however, there were empty spaces in the virtual world that triggered haptic feedback as well. Like the first blind test, only 2 testers were able to distinguish shape. They mentioned that they felt the sensation of a cube when they interacted with a virtual cube or an empty patch, and yet again could not tell the difference when they interacted with a sphere. To avoid bias on my part, the empty patches that triggered the haptic simulation were placed randomly, so that I was unaware of exactly where they were.
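
Random placement of the invisible patches might look something like the sketch below; again, the names (EmptyPatch, randomPatches) are illustrative and not from my actual source.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: invisible spherical trigger regions scattered at
// random, so neither the tester nor I know where they are.
class EmptyPatch {
    final float x, y, z, radius;

    EmptyPatch(float x, float y, float z, float radius) {
        this.x = x;
        this.y = y;
        this.z = z;
        this.radius = radius;
    }

    // True if a fingertip position falls inside this patch.
    boolean contains(float px, float py, float pz) {
        float dx = px - x, dy = py - y, dz = pz - z;
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }

    static List<EmptyPatch> randomPatches(int count, float worldSize,
                                          float radius) {
        Random rng = new Random(); // unseeded: placement unknown to me too
        List<EmptyPatch> patches = new ArrayList<EmptyPatch>();
        for (int i = 0; i < count; i++) {
            patches.add(new EmptyPatch(rng.nextFloat() * worldSize,
                                       rng.nextFloat() * worldSize,
                                       rng.nextFloat() * worldSize,
                                       radius));
        }
        return patches;
    }
}
```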

I conducted the same test again, this time letting users view the screen. They were not aware of any empty spaces that triggered haptic simulations. Yet again, the same 16 individuals said they could distinguish cubes from spheres, and 2 of the 16 said they felt cubes within the empty spaces as well.

I tried to revise my “hitTest” algorithm to give the tactile feedback a better feel, but could not come up with a better solution than what I already had. After pondering how to improve my program to simulate a more accurate haptic feel, I came to the conclusion that without being able to restrict the motion of an end user, it was not possible. I also came to the conclusion that maybe the visual aspect of the program should go together with the tactile feedback, even though it may create an illusion, as long as the experience was pleasurable*.
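
For context, a hit test is conceptually just a containment check of the fingertip position against each object's volume. The sketch below is a simplified illustration in Java (a distance check for spheres, a bounds check for axis-aligned cubes), not my actual implementation.

```java
// Simplified illustration of a hit test, not my actual implementation:
// the fingertip is treated as a point and each object as an ideal volume.
class HitTestSketch {

    // Point-in-sphere: inside if the squared distance to the center
    // is at most the squared radius.
    static boolean hitSphere(float px, float py, float pz,
                             float cx, float cy, float cz, float r) {
        float dx = px - cx, dy = py - cy, dz = pz - cz;
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    // Point-in-cube (axis-aligned): inside if each coordinate is within
    // half the edge length of the cube's center.
    static boolean hitCube(float px, float py, float pz,
                           float cx, float cy, float cz, float halfEdge) {
        return Math.abs(px - cx) <= halfEdge
            && Math.abs(py - cy) <= halfEdge
            && Math.abs(pz - cz) <= halfEdge;
    }
}
```

Because a check like this only fires at the moment of contact, it cannot physically stop the hand, which is why restricting the user's motion seemed necessary for a truly convincing feel.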

*NOTE: 15 of the 22 testers expressed enthusiasm towards the glove, and 17 said they would most likely use it again.
