Today I finished the first draft of our research paper and emailed it to Lizzy and Susan. Lizzy checked it over and fixed some of the data tables. Susan has not responded yet.
I spent the rest of the day helping Vic with her project. I did the same thing I did yesterday, using GazeTag to code fixations. However, we discovered that we accidentally coded several different objects under the same name for the first trial we did, so I will have to redo that if I have time. I have 24 videos to code, so I have a lot of work ahead of me. One of the videos I have to code is a video of a person playing Fireboy and Watergirl. The actual video game screen looks like this:
However, the video of the video game was taken by a portable eye tracker, which has really poor video quality. Thus, the video I was trying to code looked like this:
Yes, that’s the same level as in the other picture. I’m supposed to tell the program where the subject is looking (where the two black lines meet). For this fixation, I said the subject was looking at Watergirl (the blue smudge). All the videos have this bad quality. However, I found some interesting patterns. For example, in the video games, I noticed the subject mostly fixates ahead of or between the characters. For the pictures, I noticed the subjects mostly fixated on people’s faces, especially the people in the center, even when there were interesting details going on in the background. However, when shown a picture of a car with a face, the subjects mostly fixated on the hood of the car, not the car’s facial features (eyes, mouth, etc.).
I also had to reload our experiment into BeGaze because, for some reason, it did not show up on the list of loaded experiments. It wasn’t a big deal, but it was annoying because it took 40 minutes to load.