In this assignment, I will be giving advice to a team that is creating a new website. The design team has created a first prototype of a website for this specialization. Before implementing it fully, they want usability feedback on this early prototype, so they decide to bring several participants into the lab.
1. The team asks you whether they should videotape these participant sessions. What is your recommendation and why?
Yes, I would recommend videotaping the sessions and, if possible, recording the screen the participants interact with. Details that go unnoticed in the moment can be spotted later while reviewing the recordings, and the team can revisit ambiguous moments instead of relying on notes and memory.
2. The first participant arrives. The facilitator briefs them by explaining, “I’d like to show you a new design that I’ve created. I’d like to see how well you perform with this design.” What are two problems with this introduction?
The facilitator shouldn’t reveal that they created the design themselves, because that invites a “please the experimenter” bias: participants tend to soften criticism to avoid offending the designer. Also, saying “I’d like to see how well you perform…” frames the session as a test of the participant, adding pressure that can distort their behaviour and performance. The session should evaluate the design, not the participant.
3. Rewrite this introduction to avoid those two problems.
“Hi, thank you for taking the time to participate in this session; it means a lot to us. I’m going to show you a prototype of a website and observe you using it, to help us learn about its usability. We’re testing the design, not you, so please don’t worry about making mistakes. Let me know when you’re ready.”
4. The experimenter continues, “I’d like to get a sense of what you’re thinking about as you go through this site. As you go through the following tasks, please think aloud. Whatever’s on your mind, share it vocally.” If the experimenter uses this think-aloud protocol, what should they not do?
The experimenter should avoid asking specific questions or prompting the participant while they work. Interruptions slow the session down, and participants who are trying to be helpful will answer the questions they are asked rather than voice what they actually think about the design.
5. Usability feedback in hand, the development team creates two alternative home pages for the course. They want to see which one encourages more users to sign up. If they compare these two alternatives in a controlled experiment, what is the null hypothesis?
The null hypothesis is that there is no difference in sign-up rates between the two alternative home pages; any observed difference in the experiment is due to chance.
6. One team member suggests that all participants first see one design, and then all participants see the other design. What is a problem with this approach?
This within-subjects approach, with every participant seeing the designs in the same order, suffers from ordering effects: the first design leaves an impression that biases how participants judge the second, and participants may also learn the tasks or grow fatigued, which affects their performance on the second design.
7. The team agrees with you. The developers propose a between-subjects design. Participants who sign up in the AM will be assigned the first condition, those that sign up in the PM will be assigned to the second. What is a problem with this approach?
The assignment is not random. People who sign up in the morning may differ systematically from those who sign up in the afternoon (in schedule, occupation, or alertness), and the time of day of testing itself may affect behaviour. Any difference between conditions could then be due to these confounds rather than the designs.
8. What would you propose that the development team do instead?
Assign participants to the two conditions at random, and run both conditions in parallel over the same period, so that time-of-day effects and personal attributes are spread evenly across the two groups instead of being confounded with the design.
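A minimal sketch of such random assignment (the participant IDs here are made up for illustration): shuffle the full pool, then split it into two equal groups, so arrival time plays no role in who sees which design.

```python
import random

# Hypothetical pool of participants; in practice this would be everyone
# who signed up, regardless of when they signed up.
participants = [f"p{i}" for i in range(1, 9)]

# Shuffle, then split in half: a simple balanced random assignment.
random.shuffle(participants)
half = len(participants) // 2
groups = {"A": participants[:half], "B": participants[half:]}

print(groups)  # e.g. {'A': ['p3', 'p7', ...], 'B': ['p1', 'p5', ...]}
```

Shuffling and splitting (rather than flipping a coin per participant) also keeps the two groups the same size.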
9. One hundred participants are exposed to each condition. In Design A 36 participants sign up. In Design B 24 participants sign up. Consider what the chi-squared value is for each condition. Is the difference significant at the p < 0.05 level? To help you get started, the expected sign-up rate is 30% ((36+24)/200). You should also refer to critical values for the chi-squared variable.
Condition A:
Observed sign-ups: 36 of 100 (36%); expected: 30 (30%)
Sign-up χ² = (observed − expected)² / expected = (36 − 30)² / 30 = 1.2
Observed non-sign-ups: 100 − 36 = 64 (64%); expected: 70 (70%)
Non-sign-up χ² = (observed − expected)² / expected = (64 − 70)² / 70 ≈ 0.514
χ²(A) = 1.2 + 0.514 = 1.714
Condition B:
Observed sign-ups: 24 of 100 (24%); expected: 30 (30%)
Sign-up χ² = (observed − expected)² / expected = (24 − 30)² / 30 = 1.2
Observed non-sign-ups: 100 − 24 = 76 (76%); expected: 70 (70%)
Non-sign-up χ² = (observed − expected)² / expected = (76 − 70)² / 70 ≈ 0.514
χ²(B) = 1.2 + 0.514 = 1.714
Chi Square Value = 1.714+1.714 = 3.428
Degrees of freedom(df)=1
According to the table of critical values for the chi-squared variable, the critical value for df = 1 at p = 0.05 is 3.841. Since 3.428 < 3.841, the p-value is above 0.05, which means the difference between the two alternatives is not significant.
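The hand calculation above can be checked with a short sketch that sums (observed − expected)² / expected over the four cells (sign-up and non-sign-up in each condition), using counts rather than percentages; the two coincide here because each condition has 100 participants.

```python
# Observed (sign-ups, non-sign-ups) per condition, 100 participants each.
observed = {"A": (36, 64), "B": (24, 76)}
# Expected counts from the pooled 30% sign-up rate: 30 sign up, 70 do not.
expected = (30, 70)

# Chi-squared statistic: sum over all four cells.
chi_sq = sum(
    (obs - exp) ** 2 / exp
    for counts in observed.values()
    for obs, exp in zip(counts, expected)
)
print(round(chi_sq, 3))  # 3.429 (3.428 in the hand calculation, from rounding)

CRITICAL_0_05 = 3.841  # critical chi-squared value for df = 1 at p = 0.05
print(chi_sq < CRITICAL_0_05)  # True -> not significant
```

Because 3.429 falls below the 3.841 critical value, the null hypothesis is not rejected.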
10. Imagine that instead, 50 participants were exposed to each condition. In Design A, 18 sign up; in Design B, 12 sign up. The sign-up ratios are the same as in the previous question. Would the p-value increase, decrease, or stay the same, and why?
The p-value will increase. The sign-up ratios are unchanged, but the sample size is halved, so both the observed and expected counts are halved and every (observed − expected)² / expected term shrinks by half, dropping the chi-squared value from about 3.43 to about 1.71. A smaller chi-squared value corresponds to a larger p-value: with fewer participants, the same observed difference is more likely to have arisen by chance.