Summary Survey Research Methods
Lecture 1
Chapter 2
Characteristics of a survey:
- Provides statistical estimates of a target population (by taking a representative sample)
o Issue: how closely the responding sample mirrors the population
- The answers people give can be used to accurately describe characteristics of the respondents
o Issue: how well the answers measure the characteristics to be described
In survey methodology, random differences between the sample and population should be
minimized. Two kinds of errors:
1) Sampling error (error solely due to the fact that a sample was
drawn instead of an entire population)
2) Bias (the sample differs from the population in some systematic way)
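The difference between the two errors can be sketched with a small simulation (hypothetical numbers: a synthetic population with a known mean age, and a frame that excludes people over 65):

```python
import random
import statistics

random.seed(42)

# Synthetic "population" with a known true mean age (hypothetical numbers).
population = [random.gauss(45, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Sampling error: a random sample differs from the population only by chance.
sample = random.sample(population, 500)
print(f"random sample mean: {statistics.mean(sample):.1f} (true: {true_mean:.1f})")

# Bias: a frame that gives people over 65 no chance of being selected
# pushes the estimate down systematically, no matter how large the sample.
frame = [x for x in population if x <= 65]
biased_sample = random.sample(frame, 500)
print(f"biased sample mean: {statistics.mean(biased_sample):.1f}")
```

Repeating the random draw changes the estimate a little in either direction (sampling error); the excluded-group draw is off in the same direction every time (bias).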
Three steps in the process of collecting data (each a potential source of error):
1) Choosing the sample frame. If there is a group that has no chance of being selected, bias will
occur.
2) The process of selecting who is in the sample should be randomized.
3) Collecting answers from everyone selected to be in the sample; failing to do so causes
nonresponse (dropout).
Validity: describes the relation between an answer and some measure of the true score. Especially
with subjective questions, people may estimate wrongly; the true value of subjective questions is
more difficult to measure.
From sample to population → sampling error → e.g. people over 65 are less likely to respond to
internet surveys
From answers to true characteristics → invalidity → e.g. the number of sweets eaten is underreported
Chapter 6
Ensuring consistent measurement requires asking the same set of questions, such that:
- The question-and-answer process is entirely scripted
- The question means the same thing to every respondent
- Communication is consistent across all respondents
Problems with wording:
- Inadequate wording → what is asked does not constitute a complete question
- Unacceptable optional wording → offering optional wordings is a sign of a weak question
- Poor wording → ask one specific question (not about several different items)
- Poorly defined terms → make clear what is meant by the terms, specifically
- Multiple questions → asking two questions at once
In order to ensure a consistent meaning to all respondents, try to avoid unfamiliar words and words
that have a double meaning
A “don’t know” option generally improves measurement, but you want to know whether people
genuinely don’t know.
Screening question: ask whether someone has knowledge of a certain topic
Other factors:
- Standardized expectations for the type of response (lead the respondents toward the kind of
answer you want: say “how old were you when…” instead of “when did you…”)
- Specialized wording for special subgroups (change vocabulary for each group that answers,
differs per country)
Levels of measurement:
1) Nominal – people or events are sorted into unordered categories (people fit themselves into
categories “are you married”)
2) Ordinal – people or events are ordered or placed in ordered categories along a single
dimension (factual events reported nonnumerically: “usually”, “sometimes”)
3) Interval data – numbers are attached that provide meaningful information about the distance
between ordered stimuli or classes
4) Ratio data – numbers are assigned such that ratios between values are meaningful, as well as
the intervals between them
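A minimal sketch of which statistics are defensible at each level, using hypothetical responses (the variable names and values are illustrative, not from the book):

```python
import statistics

nominal = ["married", "single", "married"]       # unordered categories
ordinal = ["sometimes", "usually", "sometimes"]  # ordered labels
interval = [18.5, 21.0, 19.5]                    # e.g. temperature in degrees C
ratio = [0, 2, 4]                                # e.g. number of children

# Nominal: only equality and frequency make sense (mode, counts).
print(statistics.mode(nominal))

# Ordinal: mapping labels to ranks makes a median defensible, but not a mean.
rank = {"never": 0, "sometimes": 1, "usually": 2, "always": 3}
print(statistics.median(rank[x] for x in ordinal))

# Interval: differences are meaningful, but "twice as warm" is not.
print(interval[1] - interval[0])

# Ratio: both differences and ratios are meaningful (4 children is twice 2).
print(ratio[2] / ratio[1])
```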
Advantages of open questions:
o Obtain unanticipated answers
o Describe real views
o Respondents answer in their own words
o More appropriate if the list of possible answers would be very long
Advantages of closed questions:
o Respondents can answer more reliably
o Researchers can more reliably interpret the meaning of the answers
o Higher likelihood that more people give an analytically interesting answer
o Easier to record answers
Continuum → respondents place themselves along a single dimension (NOTE: how respondents
understand the scale differs; the answers are relative)
Agree/disagree → sorts people into two categories (no middle option); people are more likely to
agree
Why questions may not be answered accurately:
o They don’t understand the question
o They don’t know the answer → either permit someone to consult with household members,
interview the household member who is best informed, or ask respondents to provide
information only about themselves
o They can’t recall it (although they know it)
o They don’t want to report the answer in the interview context
When respondents are asked questions they can’t answer:
- Change the question to less detailed information
- Help the respondent estimate the answer
- Change or drop the objective
With sensitive questions:
- Minimize sense of judgement
- Use self-administered data collection procedures (a more private format)
- Confidentiality and anonymity
Improving validity of subjective measures:
1) Make the questions as reliable as possible (standardized, without vagueness)
2) Along a continuum, it’s better to have more categories, to a certain extent
3) Ask multiple questions that measure the same subjective state
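Point 3 is commonly checked with an internal-consistency statistic such as Cronbach's alpha. A minimal sketch with hypothetical 5-point items (the data and function are illustrative, not from the book):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score lists (one list per question).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # one total per respondent
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

# Hypothetical data: three 5-point items intended to tap the same attitude,
# one list per question, one position per respondent.
q1 = [4, 5, 3, 4, 2, 5]
q2 = [4, 4, 3, 5, 2, 4]
q3 = [5, 5, 2, 4, 1, 5]
print(f"alpha = {cronbach_alpha([q1, q2, q3]):.2f}")
```

A high alpha suggests the items move together and can be combined into one more reliable scale score.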
Chapter 7
Survey instrument design: deciding what to measure and designing and testing questions that will be
good measures
Survey instrument development process:
- Focus group discussions (it is always valuable to hold focused discussions with people
who are in the study population about the issues to be studied, in order to compare reality
with abstract concepts)
- Drafting a tentative set of questions (it is valuable to borrow questions from other research,
but whether they generalize to the current study is a risk)
- Critical review to detect common flaws
- Individual cognitive interviews
- Putting questions into a survey instrument
- Pretesting using an approximation of the proposed data collection procedures
Defining objectives → dependent variables, independent variables and control variables
Cognitive testing:
1) Are questions consistently understood?
2) Do respondents have the information needed to answer the questions?
3) Do the answers accurately describe what respondents have to say?
4) Do the answers provide valid measures of what the question is designed to measure?
Design and format
If the survey isn’t interviewer-administered, the layout and format of the questionnaire have to
make up for the absence of an interviewer. A few rules to achieve that goal:
- Differentiate between kinds of text; use uppercase letters for instructions
- Give clear instructions for skipping questions (built into computer-assisted instruments, or
written notes for the participant)
- Put optional wording in parentheses
- Check to make sure all the words respondents need are written out (including definitions and
explanations)
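Skip instructions are what computer-assisted instruments enforce automatically. A minimal sketch with hypothetical questions and routing (all question ids and texts are made up for illustration):

```python
# Each question names the follow-up it routes to per answer, so respondents
# never see items that don't apply and skip instructions can't be misread.
QUESTIONS = {
    "employed": {"text": "Are you currently employed?",
                 "yes": "hours", "no": "age"},
    "hours": {"text": "Do you usually work 32 hours a week or more?",
              "yes": "age", "no": "age"},
    "age": {"text": "How old were you on your last birthday?"},
}

def route(question_id, answer):
    """Return the id of the next question given the current answer, or None."""
    return QUESTIONS[question_id].get(answer)

# A respondent answering "no" to employment skips the hours item entirely:
print(route("employed", "no"))
print(route("employed", "yes"))
```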
Useful principles to help participants handle questions:
- The questionnaire should be self-explanatory
- It should mainly include closed questions
- There should be few question forms (different kinds of questions)
- Keep it clear and uncluttered
- Provide clear visual guidance on how to proceed