It’s rare that a research question gets scrutinized at the national level, and yet we find ourselves in one of those rare moments. The New York Times recently reported that the US Census Bureau is implementing a total revision of its health insurance coverage questions. Naturally, both sides of the aisle have opposing things to say about it, ranging from claims that it will make coverage easier to assess to warnings that we may never know the true impact of the Affordable Care Act (also known as Obamacare). As a researcher, I wanted to look at the change for myself and apply my own knowledge before drawing a conclusion.
I don’t want to get into the politics of the change. Rather, I want to focus on the change to the question itself. According to The New York Times, the old question asked whether individuals had coverage at any time in the prior year. That question paints with very broad strokes, lumping an entire year into one binary response. The new survey asks whether respondents have insurance at the time of the interview and follows up to find out which months they were covered. The new question provides a more accurate view of insurance coverage, not by simply counting insured individuals but by identifying finer trends in coverage lapses.
The best way to conduct a follow-up to a baseline study is to ask the same questions; I don’t think there’s any debate there. Repeatability is a staple of modern scientific study. (Though, from what I can assess thus far, one could take the data from the new questions, count the number of individuals whose coverage lapsed during the year, and reproduce the figure the previous question provided, making the outrage a moot point.) But what if you didn’t ask the right question the first time? Survey writing is a deceptively difficult skill to master, so keep this in mind: asking the same question will get you a better comparison, but asking the wrong question will always get you the wrong answer. In research, much as in life, if you find yourself in a hole, the first thing you should do is stop digging.
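The point in the parenthetical above can be made concrete with a small sketch. Assuming the new survey yields, for each respondent, a month-by-month record of coverage (the data shape and names here are hypothetical, purely for illustration), the old binary measure falls out of the new data for free, while the new data also preserves detail the old question threw away:

```python
# Hypothetical records: person id -> 12 booleans, True = covered that month.
monthly_coverage = {
    "p1": [True] * 12,               # covered all year
    "p2": [False] * 3 + [True] * 9,  # three-month lapse, then covered
    "p3": [False] * 12,              # uninsured all year
}

# Old-style question: "Did you have coverage at ANY time in the prior year?"
# This binary answer is fully recoverable from the monthly detail.
any_coverage = {pid: any(months) for pid, months in monthly_coverage.items()}

# New-style detail the old question discarded: the length of each lapse.
months_uncovered = {pid: months.count(False)
                    for pid, months in monthly_coverage.items()}

print(any_coverage)      # {'p1': True, 'p2': True, 'p3': False}
print(months_uncovered)  # {'p1': 0, 'p2': 3, 'p3': 12}
```

In other words, the finer-grained question strictly dominates the old one for comparison purposes: the old trend line can still be computed, and lapse patterns become visible on top of it.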
When we do a survey at User Insight, we use qualitative research to inform our quantitative work. Talking to participants allows us to understand where confusion arises in the questions and to gather a holistic view of possible answers. In addition to guiding the correct wording of questions, qualitative research provides a deeper understanding of why certain trends begin to emerge in the data. Suddenly “big data” is more manageable and our quantitative work is more effective. As a researcher, my goal is to get you the right answers by asking the right questions of the right people.
If you find your product is in a hole, it’s not too late to contact us.