First Impression Post; Research Methods

--Original published at olivyahvanek

Within psychology there are many topics that interest me; one of the most interesting is phobias. To research phobias I would begin with the question: how do different types of phobias occur in people of all different backgrounds? In other words, what are the different causes of certain phobias, and how do they emerge and start to affect a person if they hadn't in the past?

My hypothesis would be that humans are only affected by certain phobias if there was a previous trauma that caused them to fear certain aspects of life, whether the phobia is directly related to that trauma or not. With this being said, phobias can be caused by things that happened to people during their childhood or the younger years of their lives.

To test this, a group of people with one type of phobia would be gathered, and a test on their past could be made by asking questions about what made the phobia come about. These tests would include questions about their home life, childhood, and what trauma they have experienced in their lives so far. The results could then be compared to those of a group of people who do not have phobias.

The results of the experiment could help show that phobias are created by things that happened in the past, and could also tell whether or not phobias are caused by trauma.

Mythbusters: Madness Behind the Method

--Original published at Sherika's Psych Blog

One of the MythBusters' most iconic episodes compares whether using a cellphone while driving is as dangerous as drunk driving. In order to test this hypothesis, the team set up a controlled experiment in which the subject was both sober and not using a cellphone, so they could get used to the course that had been set up for the experiment.

The second experiment featured the subject being distracted due to a cellphone call. For the last variation of the experiment, the subject drank some alcoholic beverages and had their blood alcohol level examined with a breathalyzer test before driving the course again.

While these three experiments were set up, there were, of course, smaller criteria within them, as detailed in the first controlled experiment the team established. In that experiment, the subject had to accelerate to 30 miles per hour and stop at a marked point. After that, the subject had to parallel park and avoid accidents. On top of this, the runs were timed as well.

During the cellphone portion of the experiment, the subject was given different mental tasks to complete while operating the car, whether answering factual questions or more mathematically challenging ones. Giving subjects mentally challenging tasks during the simulation of distracted driving was a weakness of this portion: distracting the subject with factual or mentally stimulating questions doesn't accurately recreate everyday drivers who are distracted by their cellphones, whether because they are using the phone while driving or are distracted by a phone call.

Instead of distracting the driver with mentally stimulating questions, it would have been more accurate to have the subject hold an ordinary phone conversation, of the kind everyone has every day.

Another critique to be noted is that the blood alcohol levels of the subjects were never mentioned in the experiment, so it's hard to determine whether the subjects had the same blood alcohol level when they took the test. This seems like a weakness because the two subjects in the study could, because of age, sex, and weight, consume the same number of alcoholic drinks yet have differing blood alcohol levels that affected them differently. Instead, there should have been some effort to get both subjects either to consume the same number of drinks or to reach blood alcohol levels that were close to each other.

At the end of the video, the MythBusters' hypothesis was proven true: driving drunk and driving with a cellphone had similar margins for accidents, though driving with a cellphone had the larger margin of error. Another thing to critique is that at no point in the video did the MythBusters address the criteria they were using to determine whether driving drunk was more dangerous than using a cellphone, vice versa, or both. These criteria should have been mapped out at the beginning, as it's hard for the viewer to accept such things at face value.

Chapter 1 First Impression

--Original published at Noah'sPSY105blog

For my research study, I would be interested in finding out whether there is any correlation between students residing on a college campus and exhibiting reckless behavior such as binge drinking. I would word my question as follows: "Does residing on a college campus increase the chances of students participating in destructive behaviors, such as binge drinking?"

My hypothesis is that students who live on college campuses are more likely to participate in dangerous behavior.

My procedure would be very straightforward. I would start by asking students from different colleges to take a simple survey. The survey would be anonymous, but students would have to include which college or university they attend, to make the results easier to interpret. The survey would also ask whether students live on campus, live off campus, or commute to their college or university. Lastly, it would ask how frequently they partake in dangerous activities such as binge drinking. The survey results would then help determine whether this hypothesis has some truth behind it or is completely false.
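As a rough sketch of how such survey results might be tallied, the snippet below computes a binge-drinking rate per residence group. All the responses and group labels here are made up purely for illustration; they are not data from any actual survey.

```python
# Hypothetical survey responses: (residence, reports_binge_drinking)
responses = [
    ("on_campus", True), ("on_campus", False), ("on_campus", True),
    ("off_campus", False), ("off_campus", False), ("commuter", False),
    ("on_campus", True), ("off_campus", True), ("commuter", False),
]

def rate(group):
    """Fraction of a residence group reporting binge drinking."""
    answers = [drinks for residence, drinks in responses if residence == group]
    return sum(answers) / len(answers)

for group in ("on_campus", "off_campus", "commuter"):
    print(f"{group}: {rate(group):.0%}")
```

Comparing the per-group rates like this is the simplest way to see whether on-campus students report the behavior more often, though a real study would also need enough respondents per group for the comparison to mean anything.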

First Impression Chapter 1

--Original published at JVershinski's Blog

Does Weaving Through Traffic Actually Get You To Your Destination Faster?

There were both strengths and weaknesses in this experiment. First, one strength is that it was tested in the real world, which shows its applicability to everyday life. A weakness is that it was only tested once; running an experiment a single time does not show whether the result holds. To fix this, the testers should have run the experiment multiple times, collected the data, and averaged the numbers. Another strength is that, I believe, the same cars were used for both variables, ensuring that one car did not have a physical advantage over the other. A second weakness is that only one road was used. That specific road might be an outlier, so its results might not match what the majority of roads would show. While this experiment had both weaknesses and strengths, I do not think it would be reliable based on only one test.
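The multiple-runs fix suggested above is easy to sketch. The travel times below are invented numbers, used only to illustrate averaging repeated trials of the same route under each condition:

```python
# Hypothetical travel times (minutes) from repeated runs of the same route
weaving_runs = [16.2, 17.5, 15.8, 16.9]  # weaving through traffic
staying_runs = [17.1, 17.3, 16.8, 17.6]  # staying in one lane

def average(runs):
    """Mean travel time across repeated runs."""
    return sum(runs) / len(runs)

print(f"weaving: {average(weaving_runs):.1f} min over {len(weaving_runs)} runs")
print(f"staying: {average(staying_runs):.1f} min over {len(staying_runs)} runs")
```

Averaging several runs smooths out one-off traffic conditions, which is exactly why a single test proves little on its own.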

Chapter 1 First Impression Post

--Original published at Courtney's College Blog

Question: How does summer vacation affect the intelligence of children from different social statuses?

Hypothesis: I would expect children from wealthy families to increase their knowledge, and children from poorer families to decrease their knowledge because wealthy families have the resources to allow their children to expand their education during the summer.


  1. Establish three groups of children: those from wealthy, middle class, and lower class families. The more children involved in the experiment, the more reliable the results will be. To keep variables controlled, children should be in the same school and grade level.
  2. Administer an educational test at the end of the school year, and record each child’s score. The test should be made up of mathematics, reading, writing, science, and social studies.
  3. At the beginning of the next school year, have the children take a similar test.
  4. Evaluate the results to see if social class has an effect on educational changes over summer break.
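The evaluation in the steps above boils down to a post-minus-pre score change per group. The sketch below uses invented scores purely for illustration; no real test data is implied:

```python
# Hypothetical test scores: end of school year (pre) vs. start of next (post)
scores = {
    "wealthy":      {"pre": [82, 78, 90], "post": [85, 80, 93]},
    "middle_class": {"pre": [75, 80, 70], "post": [74, 79, 71]},
    "lower_class":  {"pre": [72, 68, 77], "post": [68, 65, 73]},
}

def mean_change(group):
    """Average post-minus-pre score change for one group of children."""
    pre, post = scores[group]["pre"], scores[group]["post"]
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

for group in scores:
    print(f"{group}: {mean_change(group):+.1f}")
```

A positive change for one group and a negative change for another would be the pattern the hypothesis predicts; with real data, the group sizes would need to be much larger than three children each.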

Chapter First Impression Post

--Original published at Phil's College Blog

Research Question: Does eating breakfast affect overall mood?

I chose this research question because I also noticed different moods in the morning at my high school. Personally, I have noticed my mood change when I do not have breakfast. My mother used to say that breakfast was the most important meal of the day. I truly never understood why until I noticed that I acted differently with or without breakfast. However, my brother never ate breakfast in the morning, and he always had the same mood. With this study, I would like to figure out if this was true for people in the community and not just for me.

Hypothesis: If a person misses breakfast, then there is a decline in mood.

Procedure: To find the sample for the research, the researcher must send out a survey asking whether people want to be included in the study. Once the results are returned, the participants who answered "Yes" are split into two groups, randomly assigned to either Group A or Group B. Group A will not eat breakfast for an entire week, and Group B will eat a breakfast consisting of bacon, eggs, and toast for the same week. The participants' moods will be reported by colleagues at work and by other students at the participants' schools. After the first testing week, Groups A and B will switch conditions, which ensures it was not the personalities of the subjects that affected the results. Mood will be gauged by the participants' reactions to social interaction: the colleagues and students will write down the participants' reactions in a notebook. Once the research is complete, the researchers will determine each participant's mood by how many smiles and friendly reactions were given throughout the week.
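The random assignment and week-two swap described above can be sketched as a simple crossover design. The participant IDs and group labels here are hypothetical placeholders:

```python
import random

# Hypothetical pool of people who answered "Yes" to the recruitment survey
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(0)  # fixed seed so the example is reproducible
shuffled = random.sample(participants, k=len(participants))
half = len(shuffled) // 2

# Week 1 assignment; in week 2 the groups swap conditions (crossover design)
week1 = {"no_breakfast": shuffled[:half], "breakfast": shuffled[half:]}
week2 = {"no_breakfast": shuffled[half:], "breakfast": shuffled[:half]}

print("week 1:", week1)
print("week 2:", week2)
```

The swap means every participant experiences both conditions, so differences in mood can be compared within the same person rather than only between groups.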

Do Hands-Free Devices Promote Safer Driving?

--Original published at Grace's College Blog

The MythBusters episode "Do Hands-Free Devices Promote Safer Driving?" tests whether it is safer to drive while holding your phone to your ear or with it on your dashboard, hands-free. They tested first by having the host drive both ways: talking while holding the phone and talking hands-free. The two runs yielded similar scores. They then went to Stanford University to test 30 people on a driving simulator: 15 held their phones to their ears and 15 did not hold their phones, but all were talking on the phone. Most participants failed, either driving the wrong way or crashing, and only two people passed.

A strength of this test was that they showed how truly unsafe driving with your phone as a distraction is. Only 2 of the 30 participants passed the test with the simulator. Most of the participants failed to drive safely while being distracted by their phone.

A weakness of the test was that they chose two ways of unsafe driving to compare, rather than one unsafe way and driving normally without any phone distraction. They should have performed two different experiments, one comparing hands free driving to driving without distractions and one comparing holding your phone to driving without distractions.

They also failed to start with a hypothesis which is what all experiments should start with after you have gathered information about the topic. When testing the host of the show, they had some sort of point system, but failed to describe where the points came from or what affected them.

This was a flawed experiment, but adequately showed the dangers of distracted driving.

First Impression Prompt

--Original published at Ariana's Blog

Do Men Really Find Blondes More Attractive?

I watched the episode Do Men Prefer Blondes?  In the episode the researchers were testing whether men are more attracted to blondes, when compared to brunettes and red-heads. They selected nine different women and had them each put on a wig in all three colors. Nine different men came in for each of the three trials and speed dated with the nine women. Then the men rated the women based on attractiveness and likability. The researchers concluded that hair color did not matter. 

One of the strengths of the study was that they had three trials to ensure the results were accurate. Multiple trials are important because having only one trial can create biased outcome, which can lead to inaccurate conclusions. They also had each girl change their hair color which eliminated biased opinions of the women. 

A weakness of this research is that there was no hypothesis. Without a hypothesis, the audience did not know what was being tested. The hypothesis should have been a statement of what they thought the outcome of the experiment would be. If they believed the men would find blondes more attractive, the hypothesis could have been "Men prefer blonde women over brunettes and red-heads".

Another weakness was that the researchers were testing two variables. They had the men rate the women on both attractiveness and likeability, which creates inaccurate results: instead of rating a single quality, the men rated the women on two. For this reason, the audience did not know which quality swayed the men's overall rating; some men may have valued likeability over attractiveness. Each man had three minutes to talk with each woman, which would be enough time to judge personality, and additional features, such as facial characteristics, could influence their choices, not just hair color. There was no way for each individual to justify his specific rating, and attraction goes beyond hair color. To isolate hair color, the men should have rated the women upon their walking into the room, which would have eliminated the personality and likeability factors.

Research Methods: Are Women Better at Reading Emotions than Men?

--Original published at Olivia's College Blog

Myth-busting videos are fun and entertaining, but it's easier to understand an experiment's results if you take a look at its research methods. The topic I chose tested the assumption that women are superior to men at reading emotions. The MythBusters took photos of themselves displaying sadness, anger, happiness, and confusion, cropped to show only their eyes. The experiment used a small group of male and female participants who were asked to interpret the emotions displayed in 17 photos.

They recorded the data as participants guessed the emotions from the photos of the eyes. The data showed that men had a guessing accuracy of 9.6/17, while women had 10.6/17. From this they concluded that women were in fact superior to men at reading emotions.

My first critique of their methods is the small participant pool. If there was a larger participant pool, the data would be more accurate because it would be more representative of the overall population. This contrasts with the MythBusters approach, because they used only a handful of participants to draw conclusions that supposedly apply to the entire male and female population.

Next, the MythBusters accepted the results of the experiment without replicating.  They could improve on this aspect of their experiment because their results were so close that they should have reattempted the experiment. Had they replicated several times and the women always came out on top, it would be more acceptable to make the claim that women are better.
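Short of actually rerunning the experiment, one way to probe whether such a close result could be chance is a permutation test. The per-participant scores below are invented to match only the reported group means (9.6 and 10.6 out of 17), so this is a sketch, not the MythBusters' data:

```python
import random

# Hypothetical per-participant scores (out of 17), chosen only so the
# group means match the reported 9.6 (men) and 10.6 (women)
men = [9, 10, 9, 10, 10]
women = [10, 11, 10, 11, 11]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(women) - mean(men)  # the observed gap between groups

# Permutation test: shuffle the pooled scores many times and count how
# often a random split produces a gap at least as large as the observed one
random.seed(1)
pooled = men + women
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[len(men):]) - mean(pooled[:len(men)]) >= observed:
        extreme += 1

print(f"observed gap: {observed:.1f} points, p ~= {extreme / trials:.3f}")
```

If random re-labelings of the same scores frequently produce a gap this large, the single run tells us little, which is exactly why replication would have strengthened their claim.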

One of the strengths of their experiment was how they interpreted their results. A notable difference that emerged from the experiment was the speed at which women recorded their responses; women were much quicker at deciding than men. It was a strength of their experiment to analyze and interpret their results for any patterns or red flags. Paying attention to research methods can be useful when trying to find credible studies.

My Research Study

--Original published at Victoria's Psych Blog

A research experiment I would conduct would be on mental health in middle school and high school students. I want to answer the question, "Does a neglectful school administration correlate with mental illness in teenagers?" My hypothesis is, "If an administration is neglectful of the conditions of its learning environment, then students are more likely to have a mental illness."

In my hypothesis, "administration" means any discipline advisor, principal, or counselor who can make a change for a student or in the school environment. Examples of a neglectful administration include being aware of an issue and choosing not to fix it, not following up with the consequences written in the handbook, being unaware of activities happening in the school, and not engaging with students or staff. If an administration is neglectful, it fosters an unhealthy learning environment, which in turn would affect the mental health of students.

For the procedure, I would start by looking at the school's statistics: for example, the number of disciplinary actions, bullying reports, incident reports, and communications between staff and administration. Then I would give mental health assessments to kids in sixth through twelfth grade, along with a survey on how safe they feel and how proud they are of the school. I would also send a survey to the students' parents and to teachers, asking how they feel about the school environment.

Comparing all of the results, I will compare what students, parents, teachers, and administrators say. If they say similar things, then the administration is not neglectful. If there are large gaps between what each group says, it is a neglectful environment. For example, if the administration says there are no incidents of violence and students feel safe, but parents and students report that the majority of students feel unsafe at school, it is a neglectful environment.

I will also look at the mental health status of students and compare it to how neglectful the administration is. I would make a scale for each section and survey that is filled out; everything would be averaged and then compared. That would then serve as data for evidence of my hypothesis.
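The scale-averaging and gap comparison described above could be sketched like this. All the numbers, the group labels, and the 1.5-point threshold are hypothetical assumptions for illustration only:

```python
# Hypothetical 1-5 "feels safe" survey averages per respondent group
group_means = {
    "students": 2.1,
    "parents": 2.4,
    "teachers": 3.0,
    "administration": 4.6,
}

# Flag a "neglectful" gap when the administration's rating sits far above
# the other groups' average (the 1.5-point threshold is an assumption)
others = [score for group, score in group_means.items() if group != "administration"]
gap = group_means["administration"] - sum(others) / len(others)
print(f"gap: {gap:.2f} -> {'neglectful' if gap > 1.5 else 'consistent'}")
```

A large gap here would mean the administration's view of the school diverges sharply from everyone else's, which is the signature of neglect the hypothesis is looking for.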