Day 30

Everyone’s presentations were amazing!  We all definitely improved between yesterday and today.  I can’t believe it’s over.  These six weeks have gone by so fast.  We had one last barbecue at the Thrashers’ place.  It was a lot of fun, but sad at the same time.  I’m going to miss everyone I’ve met on this internship.  All the professors, college students, lab members, and interns were so helpful and friendly.  Thank you, everyone!

Day 29

In the morning, we had a peer review of our presentations.  Everyone was so enthusiastic about their project!  I received a lot of good feedback, and spent the rest of the day revising my PowerPoint and combining it with Lizzy’s.

Lizzy had an epiphany today, so we revised the order of our presentation a little bit.  I’m mostly nervous for the questions tomorrow.  I am not an expert on neuroplasticity or some of the other background topics, so if someone asks detailed questions about those, I might not be able to answer them.  However, I have a good number of references that I could point people to if they want more information.  My other worry is that I’ll get up on the podium tomorrow and forget everything.  Hopefully that won’t happen.

Day 28

Today was all about the presentation.  First, Lizzy, Vicky, and I did our presentations for each other and gave each other feedback.  After lunch, we had a meeting with Jeff and Susan where we showed them our presentation.  They gave us a lot of very helpful feedback.  Now all that’s left to do is keep practicing.  I wrote out everything I’m going to say, but I don’t know the material well enough to do a smooth run-through (I keep pausing and saying filler words, and it’s really annoying).

Day 27

I coded a couple more videos today, and briefly discussed some of my issues with the different types of videos with Jeff.  The math videos are really hard to code mainly because they are hard to see and they are really boring.

I also made the corrections to the research paper that Susan asked for.  I still need to add a graph, but I plan on doing that at home, on my PC, because it is more difficult to use Microsoft Office on Macs.

I made some final adjustments to my PowerPoint, and I did some more background research.  I think I know what to say for each slide, but I need to practice a lot more.

Day 26

First, I showed Jeff the Excel spreadsheet Lizzy and I made for our first project.  Then I spent most of the day coding fixations for the videos.  I got through a large chunk of them today: I finally finished all the picture videos, and I finished all the video game videos as well.  I also did one math one and one website one.  The math one was difficult because one of the problems was really hard to see; the font looked really faint in the video.  For the website one, I’m only supposed to code the fixations made while certain questions were asked.  Vic gave me the questions, but not the times in the video when those questions were asked.  I tried to find the questions within the video, but I couldn’t hear the audio that well.  I blame Macs.  Anyways, due to the nature of the video, I decided it was easier to code the entire video rather than to keep looking for those specific questions.  So, I coded all the fixations in the video.  When Vic returns, she should be able to isolate the parts she wants.

After work, I went to the Mees Observatory along with everyone else.  It was cloudy, but we were able to see a couple of stars of the Big Dipper through the telescope.  Overall, it was a great experience.  Also, we watched Hercules (the Disney version) on the ride back.  It was awesome.

Day 25

I was planning on going to the Undergraduate Research Symposium to see Isaac and Lee (undergraduates in my lab), and Maryam, Rory and Jimmy’s respective presentations.  However, I had a lot of work to do, so I stayed at the lab.

For my own project, I finished watching all the scan path videos, and I counted all the regressions.  I now have regression data! Yay! Regressions increased as the level of difficulty of the reading increased, which is what I expected.  My data analysis is once again out of date, so I exported the new data so I can finish the new analysis over the weekend.

I spent most of my time working on Vic’s project.  I’m almost finished with the picture videos, which I’m really happy about because the picture videos are definitely the hardest.  Ok, well first of all, there are 4 types of videos:

  • Video games (like the screenshot from the previous post with Fireboy and Watergirl; the subject plays a video game)
  • Math (subject looks at complicated math stuff with recursion and weird logic symbols)
  • Website (subject answers questions about a website while looking/exploring it)
  • Pictures (subject examines and describes various pictures)

I spent most of my time trying to figure out where the subject was looking in this image:

[Image: the picture at full quality]

That’s how the image looks in real life.  However, in the video, it looks like this:

[Image: the same picture as it appears in the eye-tracking video]

The pictured fixation is easy: the subject is clearly looking at the woman in the center.  However, it is much harder to see the details of the picture, like the coin in the right woman’s hand, the chain the center woman is cutting off, the left woman’s face, the coin purse the pickpocket is stealing, etc.  These are the important details we want information about.  Through experience, I now have a good grasp of where these details are in the picture, and I can tell whether the subject is looking at them or not, but it still takes a while because I want to be as accurate as possible.  However, the images in the other video types are not as complicated, so they shouldn’t take me as long.
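The judgment I keep making by hand, deciding whether a fixation lands on a small detail like the coin or the chain, boils down to checking whether the gaze point falls inside an area of interest.  Here’s a minimal sketch of that idea; the region names and pixel coordinates are made up for illustration, not taken from the actual picture.

```python
# Sketch: classify a fixation point against rectangular areas of interest
# (AOIs).  Names and coordinates are invented for illustration; a real
# coding pass would use the picture's actual detail locations.

# Each AOI: name -> (x_min, y_min, x_max, y_max) in image pixels.
AOIS = {
    "center_woman_face": (300, 80, 380, 160),
    "coin_in_hand": (520, 210, 560, 250),
    "coin_purse": (120, 300, 180, 360),
}

def classify_fixation(x, y, aois=AOIS):
    """Return the name of the AOI containing (x, y), or 'background'."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "background"

print(classify_fixation(540, 230))  # lands inside the coin rectangle
print(classify_fixation(10, 10))    # misses every AOI -> "background"
```

The hard part in practice isn’t this lookup; it’s knowing where the boxes should be when the video is too blurry to see the details.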

After work, we had a Star Wars movie night, which was awesome.  We watched A New Hope, the original and the best one in the saga.

Day 24

Today I finished the first draft of our research paper and emailed it to Lizzy and Susan.  Lizzy checked it over and fixed some of the data tables.  Susan has not responded back yet.

I spent the rest of the day helping Vic with her project.  I did the same thing I did yesterday, using GazeTag to code fixations.  However, we discovered that we accidentally coded several different objects under the same name for the first trial we did, so I will have to redo that if I have time.  I have 24 videos to code, so I have a lot of work ahead of me.  One of the videos I have to code is a video of a person playing Fireboy and Watergirl.  The actual video game screen looks like this:

[Image: the actual Fireboy and Watergirl game screen]

However, the video of the video game was taken by a portable eye tracker, which has really bad video quality.  Thus, the video I was trying to code looked like this:

[Image: the same level as it appears in the eye tracker video]

Yes, that’s the same level as in the other picture.  I’m supposed to tell the program where the subject is looking (where the two black lines meet).  For this fixation, I said the subject was looking at Watergirl (the blue smudge).  All the videos have this bad quality.  However, I found some interesting patterns.  For example, in the video games, I noticed the subject mostly fixates ahead of or between the characters.  For the pictures, I noticed the subjects mostly fixated on people’s faces, especially the people in the center, even when there were interesting details going on in the background.  However, when shown a picture of a car with a face, the subjects mostly fixated on the hood of the car, not the car’s facial features (eyes, mouth, etc.).

I also had to reload our experiment into BeGaze because, for some reason, our loaded experiment did not show up on the list of loaded experiments.  It wasn’t a big deal, but it was annoying because it took 40 minutes to load.

Day 23

Today was another productive day.  We had a couple more test subjects, but I spent most of today writing the paper for our first project.  Lizzy and I made a lot of progress on it.  In fact, we are almost done with the first draft; we just have to finish the conclusion and the abstract.  Since this is the first real research paper I’ve ever written, I’m a little worried, but when we finish it, Susan will help us revise it.

I also helped Vic with her project.  I used the BeGaze program to “code” certain key areas in various pictures.  I told the program what the subject in the video was fixating on, and eventually, the program is supposed to recognize what the subject is fixating on based on the previous information it was given.  However, the program was not very good at guessing where the subject was fixating.  I think this is due to the poor quality of the image in the video (in the video, the subject is looking at a picture, and the picture is hard to see) and the small size of the image in the video.

I also looked at some of the scan path videos for my individual project and counted the number of regressions.  I have regression data for 4 test subjects and we have over 20 test subjects and counting.  Counting the number of regressions is very time consuming, but I hope to get regression data for all the test subjects before my presentation.

Lizzy and I confirmed that we will do the background information part of the presentation together because our background information is basically the same.  We also had Susan look over our abstracts, and I sent a finalized copy of my abstract to Bethany, although the results may still change as we are still collecting data.

Day 22

Today was actually super busy.  Lizzy and I had a lot more test subjects largely thanks to Bob sending out a mass email to the building (thanks Bob!).  We have about 20 data sets we can actually use, and about half of those are adults, so I can compare teens and adults in my presentation (yay).

Susan also told Lizzy and me to write a lab report on the first project we did this summer.  We started it, but we still have a ways to go.

I also analyzed the data I had so far.  The trends are the same as I expected, but they might change because we are still testing people.  I submitted my abstract, but I’m worried that it’s not good enough, mostly because my data isn’t ready yet, so the results portion of my abstract may change between now and the presentation.

Susan also suggested that I look at regressions in addition to fixations and saccades.  Regressions are when a person makes a backwards saccade, and they are generally a measure of how difficult a reading is.  According to Keith Rayner, about 15% of saccades are regressions in normal reading.  I really liked that idea, but the problem is that BeGaze doesn’t count regressions, so to get regression data, I would have to play each scan path video and count all the regressions myself.  Although I may not have time to count all of them, I will definitely look into it and mention the forming pattern in my presentation.
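Counting regressions by hand just means spotting each backwards jump in the scan path.  If the fixation positions could be exported, the same count would be a one-liner; here’s a sketch with invented x-coordinates for a single line of left-to-right reading.

```python
# Sketch: count regressions (backwards saccades) in one line of reading,
# given fixation x-positions in reading order.  The sample data is
# invented; real data would come from the eye tracker's fixation export.

def count_regressions(xs):
    """A regression is any saccade that moves backwards (leftward for
    left-to-right reading), i.e. the next fixation has a smaller x."""
    return sum(1 for a, b in zip(xs, xs[1:]) if b < a)

# Fixation x-positions for one line of text (pixels, invented).
fixations = [100, 180, 260, 220, 340, 420, 390, 480]

n_saccades = len(fixations) - 1
n_regressions = count_regressions(fixations)
print(n_regressions, "regressions out of", n_saccades, "saccades")  # 2 of 7
```

Comparing that fraction against Rayner’s ~15% baseline is then just arithmetic (2/7 is about 29%, so this made-up reader is regressing a lot).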

I taught a person in my lab how to use the GazeTag program (the program Jeff showed me yesterday).  I tried to teach the program to recognize a doctor and his two patients, a man and a woman sitting next to each other.  I got the program to recognize the doctor and the woman pretty consistently, but it kept thinking the male patient was the woman.  Silly program.  I will be working with this program more in the future, so I’m pretty excited about that.
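A toy way to picture what “teaching” the program looks like: guess the label of a new fixation from the nearest previously labeled one.  This is only a stand-in; the real GazeTag matching works on image content, not raw coordinates, and all the points and labels below are invented.

```python
# Sketch: nearest-neighbor guess at what a new fixation targets, based on
# previously labeled fixations.  A toy stand-in for the trained coding
# described above -- the real program matches image content, not positions.
import math

# Previously coded fixations: ((x, y), object label) -- invented values.
labeled = [
    ((150, 200), "doctor"),
    ((400, 210), "woman"),
    ((520, 215), "man"),
]

def guess_label(x, y):
    """Return the label of the closest previously coded fixation."""
    return min(labeled, key=lambda p: math.dist((x, y), p[0]))[1]

print(guess_label(145, 195))  # nearest coded fixation is the doctor's
```

It also suggests why the man kept getting called the woman: when two targets sit close together, small errors in the fixation position flip the nearest neighbor.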

Day 21

Today, I looked at some research by Keith Rayner, who is known for his studies on eye tracking and reading.  The font was really small, so I had a hard time getting through it.

Later, Jeff showed me a program where you can make it recognize what a person is fixating on by organizing a library of pictures for it to compare the fixation to.  Lizzy and I also determined that we need more test subjects.  We would like about 50, although that number may not be feasible given the time frame.