Everyone’s presentations were amazing! We all definitely improved between today and yesterday. I can’t believe it’s over. These six weeks have gone by so fast. We had one last barbecue at the Thrasher’s place. It was a lot of fun, but sad at the same time. I’m going to miss everyone I have met on this internship. All the professors, college students, lab members, and interns were so helpful and friendly. Thank you everyone!
In the morning, we had a peer review of our presentations. Everyone was so enthusiastic about their project! I received a lot of good feedback, and spent the rest of the day revising my PowerPoint and combining it with Lizzy’s.
Lizzy had an epiphany today, so we revised the order of our presentation a little bit. I’m mostly nervous about the questions tomorrow. I am not an expert on neuroplasticity or some of the other background topics, so if someone asks detailed questions about those, I might not be able to answer them. However, I have a good number of references that I could point people to if they want more information. My other worry is that I’ll get up on the podium tomorrow and forget everything. Hopefully that won’t happen.
Today was all about the presentation. First, Lizzy, Vicky and I did our presentations for each other and gave each other feedback. After lunch, we had a meeting with Jeff and Susan where we showed them our presentation. They gave us a lot of very helpful feedback. Now, all that’s left to do is keep practicing. I wrote out everything I’m going to say, but I don’t know the material well enough to do a smooth run-through (I keep pausing and saying filler words, and it’s really annoying).
I coded a couple more videos today, and briefly discussed with Jeff some of the issues I’ve had with the different types of videos. The math videos are really hard to code, mainly because they are hard to see, and they are boring.
I also did the corrections on the research paper that Susan told us to make. I still need to add a graph, but I plan on doing that at home, on my PC, because it is more difficult to use Microsoft Office on Macs.
I made some final adjustments to my PowerPoint, and I did some more background research. I think I know what to say for each slide, but I need to practice a lot more.
First, I showed Jeff the Excel spreadsheet Lizzy and I made for our first project. Then I spent most of the day coding fixations for the videos. I got through a large chunk of them today. I finally finished all the picture videos, and I finished all the video game videos as well. I also did one math one and one website one. The math one was difficult because one of the problems was really hard to see; the font looked really faint in the video. For the website one, I’m only supposed to code the fixations made while certain questions were asked. She gave me the questions, but not the times in the video when those questions were asked. I tried to find the questions within the video, but I couldn’t hear the audio that well. I blame Macs. Anyway, due to the nature of the video, I decided it was easier to code the entire video rather than keep looking for those specific questions. So, I coded all the fixations in the video. When Vic returns, she should be able to isolate the parts she wants.
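The "code everything, isolate later" approach works because each coded fixation carries a timestamp. As a hypothetical sketch (not the lab's actual GazeTag tooling, and the fixation data and time windows are made up), isolating a question's fixations is just a filter over the full list:

```python
# Hypothetical sketch: filter coded fixations down to one question's time window.
# The data structure and timings are invented for illustration.

def fixations_in_window(fixations, start, end):
    """Return fixations whose timestamp (seconds) falls in [start, end)."""
    return [f for f in fixations if start <= f["time"] < end]

# Example coded fixations: time in seconds, plus the object the subject looked at.
coded = [
    {"time": 2.0,  "object": "menu"},
    {"time": 7.5,  "object": "headline"},
    {"time": 12.1, "object": "search box"},
    {"time": 18.3, "object": "headline"},
]

# If, say, question 1 was asked between 5 and 15 seconds into the video:
q1 = fixations_in_window(coded, 5, 15)
print([f["object"] for f in q1])  # → ['headline', 'search box']
```

Once Vic has the actual start/end times for each question, the same fully coded video can be sliced any number of ways without recoding anything.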
After work, I went to the Mees Observatory along with everyone else. It was cloudy, but we were able to see a couple stars of the big dipper through the telescope. Overall, it was a great experience. Also, we watched Hercules (Disney version) on the ride back. It was awesome.
I was planning on going to the Undergraduate Research Symposium to see Isaac and Lee (undergraduates in my lab), and Maryam, Rory and Jimmy’s respective presentations. However, I had a lot of work to do, so I stayed at the lab.
For my own project, I finished watching all the scan path videos, and I counted all the regressions. I now have regression data! Yay! Regressions increased as the difficulty of the reading increased, which is what I expected. My data analysis is once again out of date, so I exported the new data and will finish the new analysis over the weekend.
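The weekend analysis boils down to averaging regression counts per difficulty level and checking that the averages rise with difficulty. Here is a minimal sketch of that check; the level names and counts are made up, not my real exported data:

```python
# Hypothetical sketch: mean regressions per reading-difficulty level.
# All numbers below are invented for illustration.

regressions = {
    "easy":   [2, 3, 1],
    "medium": [4, 5, 6],
    "hard":   [8, 7, 9],
}

# Average regression count for each difficulty level.
means = {level: sum(counts) / len(counts) for level, counts in regressions.items()}
print(means)  # → {'easy': 2.0, 'medium': 5.0, 'hard': 8.0}

# The expected pattern: regressions increase with difficulty.
ordered = [means[level] for level in ("easy", "medium", "hard")]
assert ordered == sorted(ordered), "regressions should rise with difficulty"
```

The real version would read the exported spreadsheet instead of a hard-coded dictionary, but the summary step is the same.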
I spent most of my time working on Vic’s project. I’m almost finished with the picture videos, which I’m really happy about because the picture videos are definitely the hardest. To back up a bit, there are 4 types of videos:
- Video games (like the screenshot from the previous post with Fireboy and Watergirl; the subject plays a video game)
- Math (subject looks at complicated math problems with recursion and weird logic symbols)
- Website (subject answers questions about a website while looking/exploring it)
- Pictures (subject examines and describes various pictures)
I spent most of my time trying to figure out where the subject was looking in this image:
That’s how the image looks in real life. However, in the video, it looks like this:
The pictured fixation is easy: the subject is clearly looking at the center woman. However, it is much harder to see the details of the picture, like the coin in the right woman’s hand, the chain the center woman is cutting off, the left woman’s face, the coin purse the pickpocket is stealing, etc. These are the important details we want information about. Through experience, I now have a good grasp of where these details are in the picture, and I can tell whether the subject is looking at them or not, but it still takes a while because I want to be as accurate as possible. However, the images in the other video types are not as complicated, so they shouldn’t take me as long.
After work, we had a Star Wars movie night, which was awesome. We watched A New Hope, the original and the best one in the saga.
Today I finished the first draft of our research paper and emailed it to Lizzy and Susan. Lizzy checked it over and fixed some of the data tables. Susan has not responded yet.
I spent the rest of the day helping Vic with her project. I did the same thing I did yesterday, using GazeTag to code fixations. However, we discovered that we accidentally coded several different objects under the same name for the first trial we did, so I will have to redo that if I have time. I have 24 videos to code, so I have a lot of work ahead of me. One of the videos I have to code is a video of a person playing Fireboy and Watergirl. The actual video game screen looks like this:
However, the video of the video game was taken by a portable eye tracker, which has really bad video quality. Thus, the video I was trying to code looked like this:
Yes, that’s the same level as in the other picture. I’m supposed to tell the program where the subject is looking (where the two black lines meet). For this fixation, I said the subject was looking at Watergirl (the blue smudge). All the videos have this bad quality. However, I found some interesting patterns. For example, in the video games, I noticed the subject mostly fixates ahead of or between the characters. For the pictures, I noticed the subjects mostly fixated on people’s faces, especially the people in the center, even when there were interesting details going on in the background. However, when shown a picture of a car with a face, the subjects mostly fixated on the hood of the car, not the car’s facial features (eyes, mouth, etc.).
I also had to reload our experiment into BeGaze because, for some reason, it did not show up on the list of loaded experiments. It wasn’t a big deal, but it was annoying because it took 40 minutes to load.