Ellen, Ellen, Quite Contrary, How Does Your Research Design Grow?

I think I’ve discovered a new academic hero. Ladies and gentlemen, let’s hear it for Dr. Ellen Drost of California State University, who in 2011 decided to make the world a little brighter for people like me.

Okay, I confess that tying Drost’s (2011) article about all things validity and reliability to heroics and an old nursery rhyme (as in the title of this post) is a bit of a stretch. But hear me out: this is the first time validity, reliability, random errors, and measures of consistency have actually made sense to me! So, Drost is either a caped superhero standing watch over hapless doctoral students, or she has the magical ability to take complex concepts and make them as easy to understand as a standard nursery rhyme.

Yes, I’m aware that the origins of many nursery rhymes are fraught with political innuendo and complicated social satire, but I’m on a roll here.

Anyway…

Drost (2011) explains that her purpose is to orient ‘novices’ to the quantitative side of research—a side that has terrified my qualitative heart since I first learned there was a difference between the two. Drost, however, makes it crystal clear: “social science research is the quantification of human behavior” (p. 105). The instruments used to measure these ideas need to be valid and reliable. It’s as simple as that.

Drost (2011) proceeds to break down the concept of reliability, or “the consistency of measurement…or stability of measurement over a variety of conditions in which basically the same results should be obtained” (p. 106).

Even though I still find myself rubbing my temples trying to differentiate random and systematic errors, the bathroom-scale examples helped create some context in my mind. Random errors make more sense to my qualitative brain, because humans and their behavior can certainly be random. However, the statistical idea of errors canceling each other out is one I may have to set up a meeting to discuss with my professor (or a professional counselor) after my brain implodes…but more on that another time.
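Since I learn best by poking at things, here’s a tiny Python sketch I cobbled together myself (my own riff on the scale example; the numbers are entirely made up, not Drost’s) to watch the “canceling out” idea happen:

```python
import random

random.seed(42)

TRUE_WEIGHT = 150.0    # the "real" value we are trying to measure
SYSTEMATIC_BIAS = 2.0  # a scale that always reads 2 lbs heavy

# Random error: each reading is off by a different, unpredictable amount.
random_readings = [TRUE_WEIGHT + random.gauss(0, 3) for _ in range(10_000)]

# Systematic error: every reading is off by the same amount, in the same direction.
biased_readings = [TRUE_WEIGHT + SYSTEMATIC_BIAS for _ in range(10_000)]

print(sum(random_readings) / len(random_readings))  # ~150.0 -- random errors average out
print(sum(biased_readings) / len(biased_readings))  # 152.0  -- the bias never cancels
```

Averaging thousands of readings washes the random wobble out almost entirely, but the mis-calibrated scale stays exactly two pounds wrong no matter how many times you step on it. Brain implosion averted. For now.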

It was interesting to understand the thinking behind terminology I’ve noticed in various articles over the years: “test-retest,” “maturation,” and the “split-half approach” (Drost, 2011, pp. 109–110). Interrater reliability also clicked for me: it makes sense to worry about the potential biases of different people judging answers on the same test, for example.
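To make the split-half approach concrete for myself, I tried it on some made-up test scores in Python, using the standard Spearman-Brown correction (a toy example of my own; none of these numbers come from Drost’s article):

```python
from statistics import correlation  # requires Python 3.10+

# Toy data: five students' scores on the odd- and even-numbered items of a test.
odd_half  = [8, 6, 9, 4, 7]
even_half = [7, 6, 8, 5, 7]

r_halves = correlation(odd_half, even_half)

# Spearman-Brown correction: estimates the reliability of the full-length
# test from the correlation between its two halves.
split_half_reliability = (2 * r_halves) / (1 + r_halves)

print(round(split_half_reliability, 3))  # high: the two halves agree
```

The intuition: if the two halves of a test agree with each other, the test as a whole is probably measuring something consistently.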

Internal consistency, as Drost (2011) helpfully describes it, is simply “consistency within the instrument” as well as “how well a set of items measures a particular behaviour [sic] or characteristic within the test” (p. 111). I am not joking when I state that even after successfully completing a statistical analysis course for doctoral students, I am only now aware that internal consistency is connected to Cronbach’s alpha! I tip my hat to you, Drost!
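And because I apparently can’t leave well enough alone, here is the textbook Cronbach’s alpha formula worked out in Python on invented scores (again, my own toy sketch, not Drost’s data):

```python
from statistics import variance

# Toy data: four students answering a three-item scale (made-up scores).
# Rows = students, columns = items.
scores = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 2],
]

k = len(scores[0])                  # number of items
items = list(zip(*scores))          # column-wise view: one tuple per item
item_variances = [variance(item) for item in items]
total_variance = variance([sum(row) for row in scores])

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print(round(alpha, 3))
```

An alpha near 1 suggests the items hang together; my imaginary students answered all three items similarly, so alpha comes out high (about 0.97).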

As the article went on, I found myself making notations next to information that I would have used in the critique of the Erhel and Jamet (2013) article for our coursework had I understood it sooner. But I suppose that is how scaffolding works, yes? And now I’m hungry to learn more! A seed has been planted!

All metaphors aside, I found this article to be incredibly helpful in understanding the ins and outs of concepts that have, frankly, intimidated my math- and quant-fearing self for years. The article concludes with Drost (2011) once again stating her goal of providing the “novice researcher with insight into two important concepts in research methodology” (p. 121).

Dr. Ellen Drost? Consider yourself successful! And please accept my deepest gratitude!

‘Til next time…

Drost, E. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
