Wednesday, September 30, 2009

Usability Testing (The Second Iteration) – Results

In order to get a more accurate end result, the same twenty-two testers who went through the first iteration of testing were tested again; the results, however, were not to my liking.

The first test called for the users to repeat the very first test they had taken; I did not change a thing. They viewed the screen while interacting with the glove. 18 out of the 22 said they could distinguish the sizes of different cubes based on touch, which gives an 82% success rate. This number did not change from the first iteration of testing. Like the size test, the shape test did not change from the first iteration either: 16 individuals said they could tell a cube and a sphere apart, resulting in a 73% success rate.

I ran the same test again, this time without letting the participants view the screen. The test was one on one, and I talked to the testers throughout. Whenever a tester came in contact with a virtual object, he or she would describe what they were feeling. This is where the test yielded surprisingly bad results: 20 of the 22 testers said they could NOT distinguish the tactile feedback at all. The 2 who could distinguish shapes said they felt cubes but could not tell a difference when they touched spheres. I did not mention which object they were interacting with; I only asked if they felt anything when they came in contact with something.
The next round called for switching the sensations for a cube and a sphere, i.e. simulating a cube while interacting with a sphere and vice versa. As I had predicted, the results for this test were very similar to those of the first test. The same 16 individuals felt they were able to tell a cube and a sphere apart; however, not one of the testers told me that they felt a cube while touching a sphere in the virtual world, or vice versa. This resulted in a 0% success rate.

To evaluate whether the visual aspect of the program was influencing its users, I set up another blind test. Users would interact with a virtual world containing cubes and spheres that triggered haptic feedback. Apart from the cubes and spheres, however, there were empty spaces in the virtual world that triggered haptic feedback as well. As in the first blind test, only 2 testers were able to distinguish shape. They mentioned that they felt the sensation of a cube when they interacted with a virtual cube or an empty patch, and yet again could not tell the difference when they interacted with a sphere. To avoid bias on my part, the empty patches that triggered the haptic simulation were placed randomly; I was unaware of exactly where these patches were.
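
My actual program is not shown in this post, but a minimal sketch of how such randomly placed, invisible trigger patches might be generated (so that even the experimenter cannot know their positions) could look like the following; the world size, patch size, and function names are all illustrative assumptions, not my real code:

```python
import random

def random_patches(n, world_size=10.0, patch_size=0.5, seed=None):
    """Place n invisible axis-aligned trigger patches at random positions.

    Each patch is returned as a (min_corner, max_corner) pair of 3-D points.
    Passing no seed means the placement is unpredictable to the experimenter.
    """
    rng = random.Random(seed)
    patches = []
    for _ in range(n):
        # Keep the whole patch inside the world bounds.
        x = rng.uniform(0, world_size - patch_size)
        y = rng.uniform(0, world_size - patch_size)
        z = rng.uniform(0, world_size - patch_size)
        patches.append(((x, y, z),
                        (x + patch_size, y + patch_size, z + patch_size)))
    return patches
```

Seeding is only useful for reproducing a session afterwards; during a live blind test the default unseeded generator keeps the patch locations unknown to everyone.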

I conducted the same test again, this time letting users view the screen. They were not aware of any empty spaces that triggered haptic simulation. Yet again, the same 16 individuals said they could distinguish cubes from spheres; 2 of the 16 said they felt cubes within the empty spaces as well.

I tried to revise my “hitTest” algorithm to give the tactile feedback a better feel, but could not come up with any other solution as it stood. After pondering how to improve my program to simulate a more accurate haptic feel, I came to the conclusion that without being able to restrict the motion or movement of an end user, it was not possible. I also concluded that the visual aspect of the program should perhaps go together with the tactile feedback, even if that creates an illusion, as long as the experience was pleasurable*.
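
For context, a proximity check of the kind a “hitTest” routine performs can be sketched roughly as below. This is not my actual implementation; it assumes axis-aligned cubes and centre/radius spheres, and the names are purely illustrative:

```python
import math

def hit_test_cube(finger, cube_min, cube_max):
    """True if the fingertip position lies inside an axis-aligned cube."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(finger, cube_min, cube_max))

def hit_test_sphere(finger, centre, radius):
    """True if the fingertip position lies inside a sphere."""
    return math.dist(finger, centre) <= radius
```

A boolean test like this can only say whether the fingertip is inside an object; it cannot stop the hand from passing through, which is exactly the limitation noted above.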

*NOTE: 15 of the 22 expressed enthusiasm towards the glove and 17 said they were most likely to use it again.

Wednesday, September 16, 2009

Usability Testing - Second Iteration

As a means of further testing the effectiveness of the ‘hitTest’ algorithm used with the data glove, several iterations of testing took place. This section illustrates how testing was carried out. To determine whether the haptic simulation was accurate, various types of tests were conducted.

First, users were left to interact with the program as they pleased. Testers looked at the screen while they interacted with the virtual objects. Once they were done, each tester completed a questionnaire in which they rated how pleasurable their experience was. So as to avoid any bias in the test, I avoided talking with any of the participants.

The second iteration ran the exact same test; however, testers were not allowed to view the screen. When they came in contact with an object, I would ask them whether they felt a cube or a sphere. Each time they interacted with an object, I made a note of the object they were actually touching and their reaction to it.
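
The note-taking in this iteration amounts to pairing what was actually touched with what the tester reported, then scoring agreement. A small sketch of that bookkeeping, with hypothetical names and example data rather than my real session log:

```python
def score(log):
    """Fraction of contacts where the reported shape matched the actual one.

    log: list of (actual, reported) pairs, e.g. ('cube', 'sphere').
    """
    hits = sum(1 for actual, reported in log if actual == reported)
    return hits / len(log)

# An illustrative three-contact session, not real test data.
session = [("cube", "cube"), ("sphere", "cube"), ("cube", "cube")]
```

Keeping the actual object in the log, rather than relying on memory, is what makes the per-contact accuracy computable after the fact.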


The next iteration of the test called for users viewing the screen while interacting with virtual objects. The program was slightly changed, however: in this case, the function simulated a cube when the user interacted with a sphere and vice versa. As in the first test, to avoid bias, I did not communicate with the testers while they were interacting with the data glove.


The next test involved the same program, with virtual objects that triggered haptic simulation as well as empty spaces that also triggered it. As in the earlier blind test, users were not allowed to view the screen. When testers felt they were interacting with an object, they would let me know, and their responses were noted down.


In an attempt to avoid bias on my part, an advisor suggested that I run tests chosen at random from the previous two iterations. The last iteration of testing therefore ran random tests in which neither the tester nor I knew which version of the program was running until the test was over. User reactions were noted down, and only once the test had ended was the running version of the program recorded.
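
The double-blind selection boils down to a sealed coin flip: the version is chosen and logged by the program, and revealed only after the tester's reactions are noted. A sketch of that idea, with version labels and function names that are assumptions of mine rather than anything from my actual code:

```python
import random

def pick_version(rng):
    """Seal a random choice between the two program versions.

    The returned record is written down but not looked at during the test,
    so neither the tester nor the experimenter knows which version ran.
    """
    return {"version": rng.choice(["normal", "swapped"])}

def reveal(sealed):
    """Open the sealed record once the tester's reactions are noted."""
    return sealed["version"]
```

In practice the sealed record could simply be appended to a log file that is only opened after the session, which keeps the experimenter honest without any extra machinery.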



Clever Hans Phenomenon

I had convinced myself that the prototype was flawless and my testing had been carried out perfectly, until Professor Schull pointed out the obvious. He introduced me to the Clever Hans phenomenon, which seems to apply to my project as well: in effect, all the testers told me exactly what I wanted to hear. The bias was most likely generated by my observing the testers, or by their picking up on what I expected of them.

I conducted three informal tests again, just to make sure that this was the case, and it seemed to be true. These were blind tests in which users were unable to view the screen while interacting with the prototype; they then described the tactile feedback they felt.

Tally for the first iteration of testing

Tally Summary for final test

Twenty-two (22) individuals were interviewed; fourteen (14) of them were male and eight (8) female.

Four (4) users considered themselves to have expert knowledge in computing whilst the remaining eighteen (18) were average computer users.

After being shown the glove, fifteen (15) individuals expressed enthusiasm towards the product whilst seven (7) thought it was rather tacky.

However after a brief explanation of what the product entailed, all twenty-two (22) found the product easy to use.

When asked if they would go through the whole experience again, seventeen (17) said they were most likely to use the glove again while five (5) were not sure if they would.

Overall, twelve (12) interviewees said they would recommend the product to a friend or peer, and ten (10) had mixed feelings about it.

When asked about their overall satisfaction with the glove, sixteen (16) interviewees thought the product was innovative and unique, while five (5) had mixed feelings about it. One individual thought the product was nothing new to him; I am not aware whether this result was biased or not.

Eighteen (18) individuals felt immersed in the computing experience and three (3) felt they were somewhat immersed, while one individual said he was not immersed at all.

Of the twenty-one (21) individuals who were at least somewhat immersed, everyone had some sense of direction/orientation in the world; nineteen (19) found the visual element to be key in identifying direction, while two (2) said tactile feedback alone could have helped them.

Ten (10) users found the immersion very interactive and eleven (11) pointed out that the experience was somewhat interactive. In their open-ended answers, the eleven who answered “somewhat” explained that being able to move or alter the size and shape of the virtual objects would have made the experience very interactive.

Nineteen (19) found that the visual element played a large part in the experience, while two (2) considered that it did not play that big of a role.

Ten (10) found that the tactile element added to the experience greatly, while twelve (12) thought it only mattered somewhat.

Eighteen (18) interviewees were able to distinguish the size of an object, and fifteen (15) of them were able to tell the sizes of two objects apart, whereas three (3) interviewees were unable to tell size at all.

Finally, sixteen (16) individuals were able to distinguish the shape of an object (i.e. whether the object was spherical or cubical in shape).


In conclusion

I answer the following questions, as posed in my proposal:

- Can people get a sense of location by feeling objects around them?

- 21 out of 22 individuals said they could (95% success rate)

- Can people distinguish sizes based on touch?

- 18 out of 22 individuals said they could (82% success rate)

- Can people distinguish shapes based on touch?

- 16 out of 22 individuals said they could (73% success rate)
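
The percentages above can be checked against the raw counts; a one-line helper (of my own, purely for verification) reproduces them by rounding to the nearest whole percent:

```python
def rate(successes, total=22):
    """Success rate as a whole-number percentage of the 22 testers."""
    return round(100 * successes / total)

print(rate(21))  # location: 95
print(rate(18))  # size: 82
print(rate(16))  # shape: 73
```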

Usability Testing - First Iteration

Once users had had a chance to experience the data glove, they were given the following questionnaire.


Tangibility and Computing – User Questionnaire

Please take a few minutes to complete the following survey. Thank you. Circle the answer that is most appropriate.

  • Gender:
  1. M
  2. F

  • Age:

  • How would you rate your knowledge of computing?
  1. Novice
  2. Less Than Average
  3. Average
  4. More Than Average
  5. Expert

  • What is your impression of the data glove?
  1. Dissatisfactory
  2. Indifferent
  3. Enthusiastic

  • How would you rate the ease of use of the data glove?
  1. Difficult
  2. Neither Easy nor Difficult
  3. Easy

  • How would you rate the time needed to adjust to the data glove?
  1. Required more time than needed
  2. Marginal
  3. Required less time than I had hoped

  • How likely are you to use it again?
  1. Not Likely
  2. Not Sure
  3. Most Likely

  • Would you recommend the glove to your peers/colleagues, etc.?
  1. No
  2. I have mixed feelings about it
  3. Yes

  • Overall, how pleased were you with the end product?
  1. This is nothing new, I could have done the same
  2. Indifferent
  3. It was very cool

  • How immersed were you in the computing experience?
  1. Not at all
  2. Somewhere in between real and imaginary
  3. I was in cyber space

  • How easy was it for you to get a sense of orientation for objects in the virtual world?
  1. I needed a compass
  2. Somewhat alright
  3. I knew exactly where everything was

  • Which aided your sense of orientation more?
  1. Touch
  2. Visuals

  • How interactive would you consider the experience?
  1. Not Very
  2. Somewhat
  3. Very Interactive Indeed

  • How much did the visual element help in your experience?
  1. Not At All
  2. Somewhat
  3. Very Much So

  • How much did the tactile feedback help in your experience?
  1. Not At All
  2. Somewhat
  3. Very Much So

  • Were you able to distinguish the size of an object?
  1. Not At All
  2. Somewhat
  3. Very Much So

  • Were you able to distinguish sizes between two objects?
  1. Not At All
  2. Somewhat
  3. Very Much So


  • Were you able to distinguish the shape of an object?
  1. Not At All
  2. Somewhat
  3. Very Much So

  • Were you able to distinguish between two different shapes?
  1. Not At All
  2. Somewhat
  3. Very Much So


  • Describe in your own words what you disliked about the glove, either the hardware or software aspect of it.

  • Given you had the opportunity to add to this device, what suggestions would you provide?