Survey Research – Types, Methods, Examples
Survey Research
Definition:
Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.
Survey research can be used to answer a variety of questions, including:
- What are people’s opinions about a certain topic?
- What are people’s experiences with a certain product or service?
- What are people’s beliefs about a certain issue?
Survey Research Methods
Survey research methods include the following:
- Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
- Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
- Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
- Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
- Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
- Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
- Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
- Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
- Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
- Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
- Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
- In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
- Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
- SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
- IVR surveys: A survey research method where questions are administered to respondents through an interactive voice response system, often used in automated customer feedback or opinion surveys.
- Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
- Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
- Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
- Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.
Types of Survey Research
There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:
- Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
- Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
- Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
- Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
- Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
- Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
- Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
- Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
- Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
- Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
- Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
- Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
- Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
- Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
- Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
- Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
- Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
- Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
- Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.
Types Based on Methodology
Based on methodology, surveys are divided into two types:
- Quantitative survey research
- Qualitative survey research
Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.
In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.
Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.
Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.
In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.
Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.
Data Analysis Methods
There are several data analysis methods that survey researchers may use, including the following (a short code sketch follows this list):
- Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
- Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
- Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
- Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
- Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
- Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
- Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
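As a minimal, hedged illustration of the first two methods above, the sketch below computes descriptive statistics and a simple correlation test on a small set of made-up survey responses. The column names and values are hypothetical, and it assumes the widely used pandas and SciPy libraries are installed.

```python
# Minimal sketch: descriptive and inferential statistics on hypothetical survey data.
# Assumes pandas and SciPy are installed; columns and values are made up for illustration.
import pandas as pd
from scipy import stats

responses = pd.DataFrame({
    "age": [23, 35, 41, 29, 52, 38, 45, 31],
    "satisfaction": [4, 5, 3, 4, 2, 5, 3, 4],          # 1-5 rating scale
    "daily_usage_hours": [2.5, 4.0, 1.0, 3.5, 0.5, 5.0, 1.5, 3.0],
})

# Descriptive statistics: count, mean, standard deviation, quartiles, etc.
print(responses.describe())

# Inferential statistics: is satisfaction associated with daily usage?
r, p_value = stats.pearsonr(responses["satisfaction"], responses["daily_usage_hours"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```

With a real dataset, the same pattern extends to the other methods listed above (for example, factor or cluster analysis), but those typically require larger samples and more careful model checking.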
Applications of Survey Research
Here are some common applications of survey research:
- Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
- Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
- Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
- Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
- Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
- Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
- Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.
Examples of Survey Research
Here are some real-time examples of survey research:
- COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
- Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
- Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
- Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
- Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
- Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.
Purpose of Survey Research
The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.
Here are some common purposes of survey research:
- Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
- Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
- Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
- Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
- Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.
When to use Survey Research
There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:
- When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
- When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
- When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
- When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
- When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.
How to Conduct Survey Research
Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:
- Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
- Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
- Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity (a sample-size sketch follows this list).
- Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
- Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
- Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
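To make the sample-selection step above more concrete, the sketch below applies the standard formula for the sample size needed to estimate a proportion at a given confidence level and margin of error, with an optional finite-population correction. The numbers are illustrative defaults, not a recommendation for any particular study.

```python
# Minimal sketch: required number of completed responses for estimating a proportion.
# Uses the standard z^2 * p(1-p) / e^2 formula with an optional finite-population
# correction; the defaults below are illustrative only.
import math

def sample_size(margin_of_error=0.05, confidence_z=1.96, p=0.5, population=None):
    """Return the number of completed responses needed."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # Finite-population correction for small, known populations.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                    # about 385 for +/-5% at 95% confidence
print(sample_size(population=2000))     # smaller sample when the population is only 2,000
```

In practice, researchers also pad this number to account for expected nonresponse, since not everyone invited will complete the survey.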
Advantages of Survey Research
There are several advantages to using survey research, including:
- Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
- Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
- Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
- Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
- Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
- Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.
Limitations of Survey Research
Here are some of the main limitations of survey research:
- Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
- Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
- Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
- Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
- Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
- Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.
Survey Sample
Following is an example of a survey sample:
Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.
1. What is your age?
- A) Under 18
- G) 65 or older
2. What is your highest level of education completed?
- A) Less than high school
- B) High school or equivalent
- C) Some college or technical school
- D) Bachelor’s degree
- E) Graduate or professional degree
3. What is your current employment status?
- A) Employed full-time
- B) Employed part-time
- C) Self-employed
- D) Unemployed
4. How often do you use the internet per day?
- A) Less than 1 hour
- B) 1-3 hours
- C) 3-5 hours
- D) 5-7 hours
- E) More than 7 hours
5. How often do you engage in social media per day?
6. Have you ever participated in a survey research study before?
7. If you have participated in a survey research study before, how was your experience?
- A) Excellent
- E) Very poor
8. What are some of the topics that you would be interested in participating in a survey research study about?
(Open-ended response)
9. How often would you be willing to participate in survey research studies?
- A) Once a week
- B) Once a month
- C) Once every 6 months
- D) Once a year
10. Any additional comments or suggestions?
Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.
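As a small illustration of how answers to a questionnaire like the one above might be recorded for later analysis, the sketch below stores one respondent's closed-ended answers as letter codes and open-ended answers as free text. The field names and coding scheme are hypothetical.

```python
# Minimal sketch: recording one respondent's answers to the sample survey above.
# Field names and letter codes are hypothetical; open-ended answers stay as free text.
respondent = {
    "age_bracket": "G",              # Q1: 65 or older
    "education": "D",                # Q2: Bachelor's degree
    "employment": "A",               # Q3: Employed full-time
    "internet_hours": "B",           # Q4: 1-3 hours per day
    "participated_before": True,     # Q6
    "experience_rating": "A",        # Q7: Excellent
    "topics_of_interest": "Health and technology surveys",  # Q8 (open-ended)
    "willing_frequency": "B",        # Q9: Once a month
    "comments": "",                  # Q10 (open-ended)
}

# Letter codes can later be converted to labels (or numbers) for analysis.
education_labels = {"A": "Less than high school", "B": "High school or equivalent",
                    "C": "Some college or technical school", "D": "Bachelor's degree",
                    "E": "Graduate or professional degree"}
print(education_labels[respondent["education"]])
```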
Survey Research — Types, Methods and Example Questions
Survey research
The world of research is vast and complex, but with the right tools and understanding, it's an open field of discovery. Welcome to a journey into the heart of survey research.
What is survey research?
Survey research is the lens through which we view the opinions, behaviors, and experiences of a population. Think of it as the research world's detective, cleverly sleuthing out the truths hidden beneath layers of human complexity.
Why is survey research important?
Survey research is a Swiss Army Knife in a researcher's toolbox. It’s adaptable, reliable, and incredibly versatile, but its real power? It gives voice to the silent majority. Whether it's understanding customer preferences or assessing the impact of a social policy, survey research is the bridge between unanswered questions and insightful data.
Let's embark on this exploration, armed with the spirit of openness, a sprinkle of curiosity, and the thirst for making knowledge accessible. As we journey further into the realm of survey research, we'll delve deeper into the diverse types of surveys, innovative data collection methods, and the rewards and challenges that come with them.
Types of survey research
Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us.
- Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time. They're your trusty Polaroid camera, freezing a moment for analysis and understanding.
- Longitudinal Surveys: Track changes over time, much like a time-lapse video. They help to identify trends and patterns, offering a dynamic perspective of your subject.
- Descriptive Surveys: Draw a detailed picture of the current state of affairs. They're your magnifying glass, examining the prevalence of a phenomenon or attitudes within a group.
- Analytical Surveys: Deep dive into the reasons behind certain outcomes. They're the research world's version of Sherlock Holmes, unraveling the complex web of cause and effect.
But, what method should you choose for data collection? The plot thickens, doesn't it? Let's unravel this mystery in our next section.
Survey research and data collection methods
Data collection in survey research is an art form, and there's no one-size-fits-all method. Think of it as your paintbrush, each stroke represents a different way of capturing data.
- Online Surveys: In the digital age, online surveys have surged in popularity. They're fast, cost-effective, and can reach a global audience. But like a mysterious online acquaintance, respondents may not always be who they say they are.
- Mail Surveys: Like a postcard from a distant friend, mail surveys have a certain charm. They're great for reaching respondents without internet access. However, they’re slower and have lower response rates. They’re a test of patience and persistence.
- Telephone Surveys: With the sound of a ringing phone, the human element enters the picture. Great for reaching a diverse audience, they bring a touch of personal connection. But, remember, not all are fans of unsolicited calls.
- Face-to-Face Surveys: These are the heart-to-heart conversations of the survey world. While they require more resources, they're the gold standard for in-depth, high-quality data.
As we journey further, let’s weigh the pros and cons of survey research.
Advantages and disadvantages of survey research
Every hero has its strengths and weaknesses, and survey research is no exception. Let's unwrap the gift box of survey research to see what lies inside.
Advantages:
- Versatility: Like a superhero with multiple powers, surveys can be adapted to different topics, audiences, and research needs.
- Accessibility: With online surveys, geographical boundaries dissolve. We can reach out to the world from our living room.
- Anonymity: Like a confessional booth, surveys allow respondents to share their views without fear of judgment.
Disadvantages:
- Response Bias: Ever met someone who says what you want to hear? Survey respondents can be like that too.
- Limited Depth: Like a puddle after a rainstorm, some surveys only skim the surface of complex issues.
- Nonresponse: Sometimes, potential respondents play hard to get, skewing the data.
Survey research may have its challenges, but it also presents opportunities to learn and grow. As we forge ahead on our journey, we dive into the design process of survey research.
Limitations of survey research
Every research method has its limitations, like bumps on the road to discovery. But don't worry, with the right approach, these challenges become opportunities for growth.
- Misinterpretation: Sometimes, respondents might misunderstand your questions, like a badly translated novel. To overcome this, keep your questions simple and clear.
- Social Desirability Bias: People often want to present themselves in the best light. They might answer questions in a way that portrays them positively, even if it's not entirely accurate. Overcome this by ensuring anonymity and emphasizing honesty.
- Sample Representation: If your survey sample isn't representative of the population you're studying, it can skew your results. Aiming for a diverse sample can mitigate this.
Now that we're aware of the limitations, let's delve into the world of survey design.
Survey research design
Designing a survey is like crafting a roadmap to discovery. It's an intricate process that involves careful planning, innovative strategies, and a deep understanding of your research goals. Let's get started.
Approach and Strategy
Your approach and strategy are the compasses guiding your survey research. Clear objectives, defined research questions, and an understanding of your target audience lay the foundation for a successful survey.
Panel
The panel is the heartbeat of your survey, the respondents who breathe life into your research. Selecting a representative panel ensures your research is accurate and inclusive.
9 Tips on Building the Perfect Survey Research Questionnaire
- Keep It Simple: Clear and straightforward questions lead to accurate responses.
- Make It Relevant: Ensure every question ties back to your research objectives.
- Order Matters: Start with easy questions to build rapport and save sensitive ones for later.
- Avoid Double-Barreled Questions: Stick to one idea per question.
- Offer a Balanced Scale: For rating scales, provide an equal number of positive and negative options.
- Provide a ‘Don't Know’ Option: This prevents guessing and keeps your data accurate.
- Pretest Your Survey: A pilot run helps you spot any issues before the final launch.
- Keep It Short: Respect your respondents' time.
- Make It Engaging: Keep your respondents interested with a mix of question types.
Survey research examples and questions
Examples serve as a bridge connecting theoretical concepts to real-world scenarios. Let's consider a few practical examples of survey research across various domains.
User Experience (UX)
Imagine being a UX designer at a budding tech start-up. Your app is gaining traction, but to keep your user base growing and engaged, you must ensure that your app's UX is top-notch. In this case, a well-designed survey could be a beacon, guiding you toward understanding user behavior, preferences, and pain points.
Here's an example of how such a survey could look:
- "On a scale of 1 to 10, how would you rate the ease of navigating our app?" (Array question type)
- "How often do you encounter difficulties while using our app?" (Single choice - List radio question type)
- "What features do you use most frequently in our app?" (Multiple choice - Buttons question type)
- "What improvements would you suggest for our app?" (Multiple short text question type)
- "What features would you like to see in future updates?" (Long free text question type)
This line of questioning, while straightforward, provides invaluable insights. It enables the UX designer to identify strengths to capitalize on and weaknesses to improve, ultimately leading to a product that resonates with users.
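A brief sketch of how the example questions above might be drafted as plain data before being built in a survey tool; the type labels simply mirror the annotations above and are not LimeSurvey's native format.

```python
# Minimal sketch: the UX example survey drafted as plain data.
# The "type" labels mirror the annotations above; this is not any tool's native format.
ux_survey = [
    {"type": "array",               "text": "On a scale of 1 to 10, how would you rate the ease of navigating our app?"},
    {"type": "single_choice",       "text": "How often do you encounter difficulties while using our app?"},
    {"type": "multiple_choice",     "text": "What features do you use most frequently in our app?"},
    {"type": "multiple_short_text", "text": "What improvements would you suggest for our app?"},
    {"type": "long_free_text",      "text": "What features would you like to see in future updates?"},
]

for i, question in enumerate(ux_survey, start=1):
    print(f"Q{i} ({question['type']}): {question['text']}")
```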
Psychology and Ethics in survey research
The realm of survey research is not just about data and numbers, but it's also about understanding human behavior and treating respondents ethically.
Psychology: In-depth understanding of cognitive biases and social dynamics can profoundly influence survey design. Let's take the 'Recency Effect,' a psychological principle stating that people tend to remember recent events more vividly than those in the past. While framing questions about user experiences, this insight could be invaluable.
For example, a question like "Can you recall an instance in the past week when our customer service exceeded your expectations?" is likely to fetch more accurate responses than asking about an event several months ago.
Ethics: On the other hand, maintaining privacy, confidentiality, and informed consent is more than ethical - it's fundamental to the integrity of the research process.
Imagine conducting a sensitive survey about workplace culture. Ensuring respondents that their responses will remain confidential and anonymous can encourage more honest responses. An introductory note stating these assurances, along with a clear outline of the survey's purpose, can help build trust with your respondents.
Survey research software
In the age of digital information, survey research software has become a trusted ally for researchers. It simplifies complex processes like data collection, analysis, and visualization, democratizing research and making it more accessible to a broad audience.
LimeSurvey, our innovative, user-friendly tool, brings this vision to life. It stands at the crossroads of simplicity and power, embodying the essence of accessible survey research.
Whether you're a freelancer exploring new market trends, a psychology student curious about human behavior, or an HR officer aiming to improve company culture, LimeSurvey empowers you to conduct efficient, effective research. Its suite of features and intuitive design matches your research pace, allowing your curiosity to take the front seat.
For instance, consider you're a researcher studying consumer behavior across different demographics. With LimeSurvey, you can easily design demographic-specific questions, distribute your survey across various channels, collect responses in real-time, and visualize your data through intuitive dashboards. This synergy of tools and functionalities makes LimeSurvey a perfect ally in your quest for knowledge.
Conclusion
If you've come this far, we can sense your spark of curiosity. Are you eager to take the reins and conduct your own survey research? Are you ready to embrace the simple yet powerful tool that LimeSurvey offers? If so, we can't wait to see where your journey takes you next!
In the world of survey research, there's always more to explore, more to learn and more to discover. So, keep your curiosity alive, stay open to new ideas, and remember, your exploration is just beginning!
We hope that our exploration has been as enlightening for you as it was exciting for us. Remember, the journey doesn't end here. With the power of knowledge and the right tools in your hands, there's no limit to what you can achieve. So, let your curiosity be your guide and dive into the fascinating world of survey research with LimeSurvey! Try it out for free now!
Happy surveying!
Writing Good Survey Questions: 10 Best Practices
Unfortunately, there is no simple formula for cranking out good, unbiased questionnaires.
That said, there are certainly common mistakes in survey design that can be avoided if you know what to look for. Below, I’ve provided the 10 most common and dangerous errors that can be made when designing a survey and guidelines for how to avoid them.
In This Article:
1. Ask About the Right Things
2. Use Language That Is Neutral, Natural, and Clear
3. Don't Ask Respondents to Predict Behavior
4. Focus on Closed-Ended Questions
5. Avoid Double-Barreled Questions
6. Use Balanced Scales
7. Answer Options Should Be All-Inclusive and Mutually Exclusive
8. Provide an Opt-Out
9. Allow Most Questions to Be Optional
10. Respect Your Respondents
1. Ask About the Right Things
Ask Only Questions that You Need Answered
One of the easiest traps to fall into when writing a survey is to ask about too much. After all, you want to take advantage of this one opportunity to ask questions of your audience, right?
The most important thing to remember about surveys is to keep them short. Ask only about the things that are essential for answering your research questions. If you don’t absolutely need the information, leave it out.
Don’t Ask Questions that You Can Find the Answer to
When drafting a survey, many researchers slip into autopilot and start by asking a plethora of demographic questions. Ask yourself: do you need all that demographic information? Will you use it to answer your research questions? Even if you will use it, is there another way to capture it besides asking about it in a survey? For example, if you are surveying current customers, and they are providing their email addresses, could you look up their demographic information if needed?
Don’t Ask Questions that Respondents Can’t Answer Accurately
Surveys are best for capturing quantitative attitudinal data. If you’re looking to learn something qualitative or behavioral, there’s likely a method better suited to your needs. Asking the question in a survey is, at best, likely to introduce inefficiency in your process, and, at worst, will produce unreliable or misleading data.
For example, consider the question below:
[Image: an example question asking whether a particular button stood out to the respondent on a page]
If I were asked this question, I could only speculate about what might make a button stand out. Maybe a large size? Maybe a different color, compared to surrounding content? But this is merely conjecture. The only reliable way to tell if the button actually stood out for me would be to mock up the page and show it to me. This type of question would be better studied with other research methods, such as usability testing or A/B testing, but not with a survey.
2. Use Language That Is Neutral, Natural, and Clear
Avoid Biasing Respondents
There are endless ways in which bias can be introduced into survey data, and it is the researcher’s task to minimize this bias as much as possible. For example, consider the wording of the following question.
[Image: an example question that first states the organization's commitment to achieving a 5-star satisfaction rating, then asks the respondent to rate their satisfaction]
By initially providing the context that the organization is committed to achieving a 5-star satisfaction rating, the survey creators are, in essence, pleading with the respondent to give them one. The respondent may feel guilty providing an honest response if they had a less than stellar experience.
Note also the use of the word satisfaction. This wording subtly biases the participant into framing their experience as a satisfactory one.
An alternative wording of the question might remove the first sentence altogether, and simply ask respondents to rate their experience.
Use Natural, Familiar Language
We must always be on the lookout for jargon in survey design. If respondents cannot understand your questions or response options, you will introduce bad data into your dataset. While we should strive to keep survey questions short and simple, it is sometimes necessary to provide brief definitions or descriptions when asking about complex topics, to prevent misunderstanding. Always pilot your questionnaires with the target audience to ensure that all jargon has been removed.
Speak to Respondents Like Humans
For some reason, when drafting a questionnaire, many researchers introduce unnecessary formality and flowery language into their questions. Resist this urge. Phrase questions as clearly and simply as possible, as though you were asking them in an interview format.
3. Don't Ask Respondents to Predict Behavior
People are notoriously unreliable predictors of their own behavior. For various reasons, predictions are almost bound to be flawed, leading Jakob Nielsen to remind us to never listen to users.
Yet, requests for behavioral predictions are rampant in insufficiently thought-out UX surveys. Consider the question: How likely are you to use this product? While a respondent may feel likely to use a product based on a description or a brief tutorial, their answer does not constitute a reliable prediction and should not be used to make critical product decisions.
Often, instead of future-prediction requests, you will see present-estimate requests: How often do you currently use this product in an average week? While this type of question avoids the problem of predictions, it still is unreliable. Users struggle to estimate based on some imaginary “average” week and will often, instead, recall outlier weeks, which are more memorable.
The best way to phrase a question like this is to ask for specific, recent memories: Approximately how many times did you use this product in the past 7 days? It is important to include the word approximately and to allow for ranges rather than exact numbers. Reporting an exact count of a past behavior is often either challenging or impossible, so asking for it introduces imprecise data. It can also make respondents more likely to drop off if they feel incapable of answering the question accurately.
[Images: the three phrasings discussed above, from a future prediction, to a present estimate, to a question about the past 7 days]
4. Focus on Closed-Ended Questions
Surveys are, at their core, a quantitative research method. They rely upon closed-ended questions (e.g., multiple-choice or rating-scale questions) to generate quantitative data. Surveys can also leverage open-ended questions (e.g., short-answer or long-answer questions) to generate qualitative data. That said, the best surveys rely upon closed-ended questions, with a smattering of open-ended questions to provide additional qualitative color and support to the mostly quantitative data.
If you find that your questionnaire relies too heavily on open-ended questions, it might be a red flag that another qualitative-research method (e.g., interviews) may serve your research aims better.
On the subject of open-ended survey questions, it is often wise to include one broad open-ended question at the end of your questionnaire. Many respondents will have an issue or piece of feedback in mind when they start a survey, and they’re simply waiting for the right question to come up. If no such question exists, they may end the survey experience with a bad taste. A final, optional, long-answer question with a prompt like Is there anything else you’d like to share? can help to alleviate this frustration and supply some potentially valuable data.
5. Avoid Double-Barreled Questions
A double-barreled question asks respondents to answer two things at once. For example: How easy and intuitive was this website to use? Easy and intuitive, while related, are not synonymous, and, therefore, the question is asking the respondent to use a single rating scale to assess the website on two distinct dimensions simultaneously. By necessity, the respondent will either pick one of these words to focus on or try to assess both and estimate a midpoint “average” score. Neither of these will generate fully accurate or reliable data.
Therefore, double-barreled questions should always be avoided and, instead, split up into two separate questions.
[Images: a double-barreled question and the two separate questions it should be split into]
6. Use Balanced Scales
Rating-scale questions are tremendously valuable in generating quantitative data in survey design. Often, a respondent is asked to rate their agreement with a statement on an agreement scale (e.g., Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree), or otherwise to rate something using a scale of adjectives (e.g., Excellent, Good, Neutral, Fair, Poor).
You’ll notice that, in both of the examples given above, there is an equal number of positive and negative options (2 each), surrounding a neutral option. The equal number of positive and negative options means that the response scale is balanced and eliminates a potential source of bias or error.
In an unbalanced scale, you’ll see an unequal number of positive and negative options (e.g., Excellent, Very Good, Good, Poor, Very Poor). This example contains 3 positive options and only 2 negative ones. It, therefore, biases the participant to select a positive option.
[Images: an unbalanced rating scale and a balanced alternative]
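To show how a balanced scale carries through to analysis, the sketch below codes a five-point agreement scale symmetrically around a neutral zero and averages a handful of made-up responses. The -2 to +2 mapping is one common convention, not the only valid one.

```python
# Minimal sketch: coding a balanced 5-point agreement scale symmetrically around zero.
# The -2..+2 mapping is one common convention; other codings (e.g., 1..5) are also used.
agreement_scale = {
    "Strongly Agree":     2,
    "Agree":              1,
    "Neutral":            0,
    "Disagree":          -1,
    "Strongly Disagree": -2,
}

answers = ["Agree", "Strongly Agree", "Neutral", "Disagree", "Agree"]
scores = [agreement_scale[a] for a in answers]
print(sum(scores) / len(scores))  # mean attitude score; 0 means neutral overall
```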
7. Answer Options Should Be All-Inclusive and Mutually Exclusive
Answer options for a multiple-choice question should include all possible answers (i.e., be all-inclusive) and should not overlap (i.e., be mutually exclusive). For example, consider the following question:
[Image: an age question with numeric ranges that skip some ages and overlap at the boundaries]
In this formulation, some possible answers are skipped (i.e., anyone who is over 50 won’t be able to select an answer). Additionally, some answers overlap (e.g., a 20-year-old could select either the first or second response).
Always double-check your numeric answer options to ensure that all numbers are included and none are repeated.
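As a small, hypothetical illustration of that double-check, the sketch below scans a list of numeric age brackets for overlaps and gaps; the bracket boundaries are made up for the example.

```python
# Minimal sketch: checking that numeric answer brackets neither overlap nor leave gaps.
# Bracket boundaries below are hypothetical examples.
def check_brackets(brackets):
    """brackets: list of (low, high) tuples, inclusive, sorted by low."""
    problems = []
    for (low1, high1), (low2, high2) in zip(brackets, brackets[1:]):
        if low2 <= high1:
            problems.append(f"Overlap: {low1}-{high1} and {low2}-{high2}")
        elif low2 > high1 + 1:
            problems.append(f"Gap between {high1} and {low2}")
    return problems or ["Brackets are mutually exclusive and contiguous."]

# Overlapping and gappy, like the bad example discussed above:
print(check_brackets([(18, 25), (25, 35), (36, 45), (50, 65)]))
# A clean alternative:
print(check_brackets([(18, 24), (25, 34), (35, 49), (50, 64)]))
```

A top bracket such as "65 or older" and a bottom one such as "Under 18" still need to be added by hand to make the options fully exhaustive.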
8. Provide an Opt-Out
No matter how carefully and inclusively you craft your questions, there will always be respondents for whom none of the available answers are acceptable. Maybe they are an edge case you hadn’t considered. Maybe they don’t remember the answer. Or maybe they simply don’t want to answer that particular question. Always provide an opt-out answer in these cases to avoid bad data.
Opt-out answers can include things like the following: Not applicable, None of the above, I don’t know, I don’t recall, Other, or Prefer not to answer. Any multiple-choice question should include at least one of these answers. However, avoid the temptation to include one catch-all opt-out answer containing multiple possibilities. For example, an option labeled I don’t know / Not applicable covers two very different responses with different meanings; combining them fogs your data.
9. Allow Most Questions to Be Optional
It is so tempting to make questions required in a questionnaire. After all, we want the data! However, the choice to make any individual question required will likely lead to one of two unwanted results:
- Bad Data: If a respondent is unable to answer a question accurately, but the question is required, they may select an answer at random. These types of answers will be impossible to detect and will introduce bad data into your study, in the form of random-response bias.
- Dropoffs: The other option available to a participant unable to correctly answer a required question is to abandon the questionnaire. This behavior will increase the effort needed to reach the desired number of responses.
Therefore, before deciding to make any question required, consider if doing so is worth the risks of bad data and dropoffs.
In the field of user experience, we like to say that we are user advocates. That doesn’t just mean advocating for user needs when it comes to product decisions. It also means respecting our users any time we’re fortunate enough to interact with them.
Don’t Assume Negativity
This is particularly important when discussing health issues or disability. Phrasings such as "Do you suffer from hypertension?" may be perceived as offensive. Instead, use objective wording, such as "Do you have hypertension?"
Be Sensitive with Sensitive Topics
When asking about any topics that may be deemed sensitive, private, or offensive, first ask yourself: Does it really need to be asked? Often, we can get plenty of valuable information while omitting that topic.
Other times, it is necessary to delve into potentially sensitive topics. In these cases, be sure to choose your wording carefully. Ensure you’re using the current preferred terminology favored by members of the population you’re addressing. If necessary, consider providing a brief explanation for why you are asking about that particular topic and what benefit will come from responding.
Use Inclusive and Appropriate Wording for Demographic Questions
When asking about topics such as race, ethnicity, sex, or gender identity, use accurate and sensitive terminology. For example, it is no longer appropriate to offer a simple binary option for gender questions. At a minimum, a third option indicating an Other or Non-binary category is expected, as well as an opt-out answer for those who prefer not to respond.
An inclusive question is respectful of your users’ identities and allows them to answer only if they feel comfortable.
13.1 Writing effective survey questions and questionnaires
Learning objectives.
Learners will be able to…
- Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
- Create mutually exclusive, exhaustive, and balanced response options
- Define fence-sitting and floating
- Describe the considerations involved in constructing a well-designed questionnaire
- Discuss why pilot testing is important
In the previous chapter, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this chapter, consider how your method of delivery impacts the type of questionnaire you will design.
Start with operationalization
The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11. You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.
TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS)
STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.
List all of the dependent and independent variables in your research question.
- It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
- Make sure that your research question (and this list) contains all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.
For each variable in your list:
- If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
- If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
- For example, an interpretation might be "there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem" (a minimal scoring sketch appears after this list).
- Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
- Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
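To make that interpretation step concrete, here is a minimal Python sketch of the hypothetical self-esteem measure described above; the item count, point range, and cutoff come from that example, while the function and variable names are illustrative.

```python
# Minimal sketch: score five 7-point Likert items and apply the example cutoff.
# The "scores below 10 = low self-esteem" rule is the hypothetical interpretation above.

def score_self_esteem(responses):
    """responses: five integers, each coded 1-7."""
    if len(responses) != 5 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("Expected five responses coded 1-7")
    total = sum(responses)
    label = "low self-esteem" if total < 10 else "not low self-esteem"
    return total, label

print(score_self_esteem([1, 2, 1, 2, 3]))  # (9, 'low self-esteem')
print(score_self_esteem([5, 6, 4, 5, 6]))  # (26, 'not low self-esteem')
```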
TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS)
You are interested in researching the decision-making processes of parents of elementary-aged children during the beginning of the COVID-19 pandemic in 2020. Specifically, you want to know if and how parents' socioeconomic class impacted their decisions about whether to send their children to school in person or instead opt for online classes or homeschooling.
- Create a working research question for this topic.
- What is the dependent variable in this research question? The independent variable? What other variables might you want to control?
For the independent variable, dependent variable, and at least one control variable from your list:
- What measure (the specific question and answers) might you use for each one? Write out a first draft based on what you read in this section.
If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report.
Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.
Writing questions
So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.
Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.
If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns, and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or took the bachelor's-level exam. If you decide to include this question that speaks to a minority of participants' experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.
Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 13.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.
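As a sketch of how such branching might be implemented, the snippet below (Python, with an illustrative stand-in for a survey platform's prompt function) only asks the renter's-insurance item when the filter question indicates the respondent rents.

```python
# Minimal sketch: a filter question gates a follow-up item.
# The fake_ask function is an illustrative stand-in for an interviewer or survey tool.

def administer_housing_block(ask):
    answers = {}
    answers["housing"] = ask("Do you rent or own your home?", ["Rent", "Own"])
    if answers["housing"] == "Rent":  # filter question: only renters see the next item
        answers["renters_insurance"] = ask(
            "Do you carry renter's insurance?", ["Yes", "No", "I don't know"]
        )
    return answers

def fake_ask(question, options):
    print(question, options)
    return options[0]  # pretend every respondent picks the first option

print(administer_housing_block(fake_ask))
```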
Researchers should eliminate questions that ask about things participants don’t know to minimize confusion. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 13.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it’s not clear how to answer the question accurately. It is a good idea to avoid negative phrasing, when possible. For example, “did you not drink alcohol during your first semester of college?” is less clear than “did you drink alcohol your first semester of college?”
Another issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience, they would find it difficult to link those words to the concepts from David Elkind's theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.
Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 13.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.
Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11. )
Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [1] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.
Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:
- Is the question too general? Sometimes, questions that are too general may not accurately convey respondents' perceptions. If you asked someone how well they liked a certain book and provided a response scale ranging from "not at all" to "extremely well," and that person selected "extremely well," what do they mean? Instead, ask more specific behavioral questions, such as "Will you recommend this book to others?" or "Do you plan to read other books by the same author?"
- Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.
- Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
- Does the question ask the participant to imagine something? Is the question imaginary? A popular question on many television game shows is “if you won a million dollars on this show, how will you plan to spend it?” Most participants have never been faced with this large amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.
Try to step outside your role as researcher for a second, and imagine you were one of your participants. Use the following prompts to evaluate your draft questions from the previous exercise:
Cultural considerations
When researchers write items for questionnaires, they must be conscientious to avoid culturally biased questions that may be inappropriate or difficult for certain populations.
[insert information related to asking about demographics and how this might make some people uncomfortable based on their identity(ies) and how to potentially address]
You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused.
Testing questionnaires before using them
Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.
In sum, in order to pose effective survey questions, researchers should do the following:
- Identify how each question measures an independent, dependent, or control variable in their study.
- Keep questions clear and succinct.
- Make sure respondents have relevant lived experience to provide informed answers to your questions.
- Use filter questions to avoid getting answers from uninformed participants.
- Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
- Imagine how respondents would feel responding to questions.
- Get feedback, especially from people who resemble those in the researcher’s sample.
Table 13.1 offers one model, the BRUSO model (Brief, Relevant, Unambiguous, Specific, Objective), for writing effective questionnaire items.
| BRUSO criterion | Problematic wording | Improved wording |
| --- | --- | --- |
| Brief | "Are you now or have you ever been the possessor of a firearm?" | "Have you ever possessed a firearm?" |
| Relevant | "Who did you vote for in the last election?" | Note: only include items that are relevant to your study. |
| Unambiguous | "Are you a gun person?" | "Do you currently own a gun?" |
| Specific | "How much have you read about the new gun control measure and sales tax?" | "How much have you read about the new sales tax on firearm purchases?" |
| Objective | "How much do you support the beneficial new gun control measure?" | "What is your view of the new gun control measure?" |
Let’s complete a first draft of your questions.
- In the first exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
- Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.
Now, let’s revise any questions that do not meet your standards!
- Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise, and it will take a few drafts and revisions before your questions are ready to distribute to participants.
- In the first exercise, you wrote out the question and answers for your independent, dependent, and at least one control variable. Evaluate each question using the criteria listed above on effective survey questions.
- Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. In real research, it will take a few drafts and revisions before your questions are ready to distribute to participants.
Writing response options
While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions . Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.
Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant's experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended responses quantitatively using content analysis (i.e., counting how often a theme is represented across transcripts and looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. Open-ended questions cannot be operationally defined because you don't know what responses you will get. We will address qualitative data analysis in greater detail in Chapter 19.
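To illustrate the counting approach to content analysis mentioned above, here is a minimal Python sketch; the themes, keyword lists, and example responses are hypothetical, and real coding schemes are usually developed from the data rather than fixed in advance.

```python
# Minimal sketch: count how often each (hypothetical) theme appears
# across open-ended responses.

from collections import Counter

theme_keywords = {
    "friendship": ["friend", "social", "people"],
    "skills": ["skill", "learn", "experience"],
}

responses = [
    "I joined the club to meet people and make friends.",
    "It helped me learn new skills for my career.",
    "Mostly social reasons, but I also gained experience.",
]

counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in theme_keywords.items():
        if any(word in lowered for word in keywords):
            counts[theme] += 1

print(counts)  # e.g., Counter({'friendship': 2, 'skills': 2})
```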
To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 13.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 13.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option ("less than one time per week") while those who drank multiple times a day each day of the week can choose the last response option ("7+"). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.
Earlier in this section, we discussed double-barreled questions. Response options can also be double barreled, and this should be avoided. Figure 13.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including that participants should be knowledgeable enough to select or decline a response option as well as avoiding jargon and cultural idioms.
Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:
Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely
Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:
Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely
In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options even if they have an opinion. Some people will be drawn to respond "no opinion" even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when really, they don't understand the question or don't have an opinion.
As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like "don't know enough to say" or "not applicable." There is no always-correct solution to either problem. But in general, a response set that includes a middle option is more exhaustive than one that excludes it.
The number of response options on a typical rating scale is usually five or seven, though it can range from three to 11. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask "Do you generally like or dislike ice cream?" Once the respondent chooses like or dislike, refine it by offering them the relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [2] Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and labels that are overly long or specific. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 10.1 is a visual-analog scale, on which participants make a mark somewhere along the horizontal line to indicate the magnitude of their response.
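As a small illustration of that coding step, the Python sketch below presents only verbal labels to respondents and converts them to numbers at analysis time; the 1-to-5 and 1-to-7 point values are a common convention assumed here, not mandated by the text.

```python
# Minimal sketch: respondents see verbal labels; the analyst codes them numerically.
# The 1..k point assignments are a common convention, assumed for illustration.

unipolar_frequency = ["Never", "Rarely", "Sometimes", "Often", "Always"]  # 5-point
bipolar_liking = [
    "Dislike very much", "Dislike somewhat", "Dislike slightly",
    "Neither like nor dislike",
    "Like slightly", "Like somewhat", "Like very much",
]  # 7-point

def code_response(label, scale, start=1):
    """Convert a verbal label to its numeric value (1..k by default)."""
    return scale.index(label) + start

print(code_response("Often", unipolar_frequency))      # 4
print(code_response("Like somewhat", bipolar_liking))  # 6
```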
Finalizing Response Options
The most important check before you finalize your response options is to align them with your operational definitions. As we've discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable "social class," you might ask one question about a participant's annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research, though they include a few more questions in their definition.
To drill down a bit more, as Pew specifies in the section titled "how the income calculator works," the interval/ratio data respondents enter are interpreted using a formula that combines a participant's four responses to the questions posed by Pew and categorizes their household into one of three classes: upper, middle, or lower. So, the operational definition includes the four questions comprising the measure and the formula or interpretation that converts responses into the three final categories we are familiar with: lower, middle, and upper class.
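The sketch below (Python) gives a simplified flavor of that kind of formula; the square-root household-size adjustment, the two-thirds and double-median cutoffs, and the median income figure are assumptions for illustration only, so consult Pew's own documentation for the actual calculation.

```python
# Simplified sketch in the spirit of an income-based class calculator.
# The adjustment, cutoffs, and median figure below are illustrative assumptions,
# not Pew's published formula.

def social_class(household_income, household_size, national_median=74_580):
    adjusted = household_income / (household_size ** 0.5)  # size-adjusted income
    if adjusted < (2 / 3) * national_median:
        return "lower income"
    if adjusted <= 2 * national_median:
        return "middle income"
    return "upper income"

print(social_class(45_000, household_size=3))  # 'lower income'
print(social_class(90_000, household_size=2))  # 'middle income'
```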
It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will define you into one of those three social class categories.
Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.
One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not a part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan . We will discuss how to create your data analysis plan beginning in Chapter 14 . For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should match the level of measurement you need with the requirements of the specific statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to confirm the level of measurement matches with the statistical test you’ve chosen.
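One lightweight way to keep this alignment visible is to record each variable's level of measurement next to its planned statistical test, as in the Python sketch below; the variables, levels, and tests shown are hypothetical.

```python
# Minimal sketch: flag mismatches between a variable's level of measurement
# and the data its planned statistical test requires. All entries are hypothetical.

planned_analysis = [
    # (variable, level of measurement, planned test, levels the test accepts)
    ("housing status", "nominal", "chi-square test", {"nominal", "ordinal"}),
    ("self-esteem score", "interval/ratio", "independent t-test", {"interval/ratio"}),
    ("class year", "ordinal", "independent t-test", {"interval/ratio"}),  # mismatch
]

for variable, level, test, accepted in planned_analysis:
    status = "OK" if level in accepted else "MISMATCH: revise the question or the test"
    print(f"{variable} ({level}) -> {test}: {status}")
```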
In summary, to write effective response options researchers should do the following:
- Avoid wording that is likely to confuse respondents, including double negatives, culturally specific terms or jargon, and double-barreled response options.
- Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
- Present mutually exclusive and exhaustive response options.
- Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
- Define how response options are interpreted as part of an operational definition of a variable.
- Check that the level of measurement matches your operational definitions and the statistical tests in your data analysis plan (once you develop one).
Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.
- Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
- Revise your questions and response options until you have a complete first draft.
- Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.
Look back at the response options you drafted in the previous exercise.
From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.
Designing questionnaires
Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you'll also need to think about how to present your written questions and response options to survey respondents. It's time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section will apply equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.
Informed consent & instructions
Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000) . [3] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.
The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent's option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6. Written consent forms are not always used in survey research (when the research poses minimal risk, completion of the survey instrument is often accepted by the IRB as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.
Organizing items to be easy and intuitive to follow
The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last. This can be because they are easy to answer in the event respondents have become tired or bored, because they are least interesting to participants, or because they can raise concerns for respondents from marginalized groups who may see questions about their identities as a potential red flag. Of course, any survey should end with an expression of appreciation to the respondent.
Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.
Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 13.4.
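As a rough sketch of how a matrix can be specified, the Python snippet below defines one shared response scale for several items; the statements and scale labels are hypothetical.

```python
# Minimal sketch: a matrix question as several items sharing one response scale.
# The statements and scale labels are hypothetical examples.

agreement_scale = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

matrix_question = {
    "instructions": "Please rate your agreement with the following statements.",
    "scale": agreement_scale,
    "items": [
        "My workload this semester is manageable.",
        "I feel connected to other students in my program.",
        "I know where to find academic support when I need it.",
    ],
}

print(matrix_question["instructions"])
for item in matrix_question["items"]:
    print(f"{item}  [{' | '.join(matrix_question['scale'])}]")
```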
Once you have grouped similar questions together, you'll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [4] In other words, don't bore respondents, but don't scare them away either. There's some disagreement over where on a survey to place demographic questions, such as those about a person's age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with some very sensitive topic, such as child sexual abuse or criminal convictions, you don't want to scare respondents away or shock them by beginning with your most intrusive questions.
Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.
Questionnaire length
You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.
Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their very limited time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes and there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times that your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants would reasonably have to complete a survey presented to them during this time. The more you know about your population (such as what weeks have less work and more free time), the better you can target questionnaire length.
The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [5] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [6] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.
When designing a questionnaire, a researcher should consider:
- Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
- Grouping together items that ask about the same thing.
- Moving any questions about sensitive items to the end of the questionnaire, so as not to scare respondents off.
- Moving any questions that engage the respondent to the beginning of the questionnaire, so as not to bore them.
- Timing the length of the questionnaire with a reasonable length of time you can ask of your participants.
- Dedicating time to visual design and ensuring the questionnaire looks professional.
Type out a final draft of your questionnaire in a word processor or online survey tool.
- Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.
- Take a look at the question drafts you have completed and decide on an order for your questions. Evaluate your draft questionnaire using the guidelines above, and revise as needed.
Pilot testing and revising questionnaires
A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good idea about the estimate to provide respondents when you administer your survey and whether you have some wiggle room to add additional items or need to cut a few items.
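If you record how long each pilot tester takes, a few lines of Python are enough to summarize the results and inform the time estimate you give respondents; the times below are hypothetical.

```python
# Minimal sketch: summarize hypothetical pilot-test completion times.

from statistics import median

pilot_minutes = [11.5, 14.2, 9.8, 16.0, 12.7]

print(f"Median completion time: {median(pilot_minutes):.1f} minutes")
print(f"Slowest pilot tester:   {max(pilot_minutes):.1f} minutes")
```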
Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.
Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common and it takes less time to review a few changes to questions and answers than it does to review the entire study, so give them the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.
Key Takeaways
- A questionnaire is comprised of self-report measures of variables in a research study.
- Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
- Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
- The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
- Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.
It’s a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.
- Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
- See if you can guess their research question and hypothesis based on the questionnaire alone.
It’s a myth that researchers work alone! Get together with a few of your fellow students and compare draft questionnaires.
- What are the strengths and limitations of your questionnaire as compared to those of your peers?
- Is there anything you would like to use from your peers’ questionnaires in your own?
- Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
- Krosnick, J.A. & Berent, M.K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941-964. ↵
- Peterson, R. A. (2000). Constructing effective questionnaires . Thousand Oaks, CA: Sage. ↵
- Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. ↵
- Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
- Hopper, J. (2010). How long should a survey be? Retrieved from http://www.verstaresearch.com/blog/how-long-should-a-survey-be ↵
According to the APA Dictionary of Psychology, an operational definition is "a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization."
Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.
Testing out your research materials in advance on people who are not included as participants in your study.
items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample
a question that asks more than one thing at a time, making it difficult to respond accurately
When a participant answers in a way that they believe is socially the most acceptable answer.
the answers researchers provide to participants to choose from when completing a questionnaire
questions in which the researcher provides all of the response options
Questions for which the researcher does not include response options, allowing for respondents to answer the question in their own words
respondents to a survey who choose neutral response options, even if they have an opinion
respondents to a survey who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion
An ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step-by-step, that you plan to run to answer your research question.
A process through which the researcher explains the research process, procedures, risks, and benefits to a potential participant, usually through a written document, which the participant then signs as evidence of their agreement to participate.
a type of survey question that lists a set of questions for which the response options are all the same in a grid layout
Doing Survey Research | A Step-by-Step Guide & Examples
Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.
Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:
- Determine who will participate in the survey
- Decide the type of survey (mail, online, or in-person)
- Design the survey questions and layout
- Distribute the survey
- Analyse the responses
- Write up the results
Surveys are a flexible method of data collection that can be used in many different types of research.
Table of contents
- What are surveys used for?
- Step 1: Define the population and sample
- Step 2: Decide on the type of survey
- Step 3: Design the survey questions
- Step 4: Distribute the survey and collect responses
- Step 5: Analyse the survey results
- Step 6: Write up the survey results
- Frequently asked questions about surveys
Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.
Common uses of survey research include:
- Social research: Investigating the experiences and characteristics of different social groups
- Market research: Finding out what customers think about products, services, and companies
- Health research: Collecting data from patients about symptoms and treatments
- Politics: Measuring public opinion about parties and policies
- Psychology: Researching personality traits, preferences, and behaviours
Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.
Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.
Populations
The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:
- The population of Brazil
- University students in the UK
- Second-generation immigrants in the Netherlands
- Customers of a specific company aged 18 to 24
- British transgender women over the age of 50
Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.
It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.
The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
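As a rough illustration of what such an online calculator does, the sketch below applies the standard Cochran formula with a finite population correction. The population sizes, the 95% confidence level (z = 1.96), and the 5% margin of error are illustrative assumptions, not recommendations from this guide.

```python
# A sketch of the sample-size arithmetic, assuming Cochran's formula with a
# finite population correction. All input values here are illustrative.
import math

def required_sample_size(population: int, margin_of_error: float = 0.05,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Completed responses needed for a simple random sample."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                  # finite population correction
    return math.ceil(n)

print(required_sample_size(population=20_000))  # about 377 responses
print(required_sample_size(population=500))     # about 218 responses
```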
There are two main types of survey:
- A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
- An interview, where the researcher asks a set of questions by phone or in person and records the responses
Which type you choose depends on the sample size and location, as well as the focus of the research.
Questionnaires
Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).
- You can easily access a large sample.
- You have some control over who is included in the sample (e.g., residents of a specific region).
- The response rate is often low.
Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.
- You can quickly access a large sample without constraints on time or location.
- The data is easy to process and analyse.
- The anonymity and accessibility of online surveys mean you have less control over who responds.
If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.
- You can screen respondents to make sure only people in the target population are included in the sample.
- You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
- The sample size will be smaller, so this method is less suitable for collecting data on broad populations.
Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.
- You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
- You can clarify questions and ask for follow-up information when necessary.
- The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.
Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.
Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:
- The type of questions
- The content of the questions
- The phrasing of the questions
- The ordering and layout of the survey
Open-ended vs closed-ended questions
There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.
Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:
- A binary answer (e.g., yes/no or agree/disagree )
- A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
- A list of options with a single answer possible (e.g., age categories)
- A list of options with multiple answers possible (e.g., leisure interests)
Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.
Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.
Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.
The content of the survey questions
To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.
When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.
Phrasing the survey questions
In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.
Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.
Ordering the survey questions
The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.
If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.
If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.
Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.
When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.
There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
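To make these steps concrete, here is a minimal sketch in Python (pandas) of cleaning responses, coding an open-ended question into rough themes, and running a simple descriptive analysis. The file name, column names, and theme keywords are hypothetical; the same steps could equally be carried out in SPSS, Stata, or R.

```python
# A sketch of processing survey data, assuming responses exported to a CSV file.
# The file name, column names, and theme keywords are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Cleanse: drop incomplete rows and out-of-range satisfaction ratings (1-5 scale)
responses = responses.dropna(subset=["age_group", "satisfaction"])
responses = responses[responses["satisfaction"].between(1, 5)]

# Code an open-ended question into rough themes with simple keyword matching
def code_theme(text) -> str:
    text = str(text).lower()
    if "price" in text or "cost" in text:
        return "price"
    if "staff" in text or "service" in text:
        return "service"
    return "other"

responses["comment_theme"] = responses["open_comment"].apply(code_theme)

# Simple descriptive analysis
print(responses.groupby("age_group")["satisfaction"].mean())
print(responses["comment_theme"].value_counts())
```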
Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.
In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.
Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.
A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.
To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.
Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order but don’t have an even distribution.
Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
The type of data determines what statistical tests you should use to analyse your data.
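As a small illustration of this distinction, the sketch below summarises individual Likert-type items with frequencies and medians (treating them as ordinal) and then sums them into an overall scale score, which is often treated as interval data. The four items and responses are invented for the example.

```python
# A sketch of summarising Likert-type items, assuming four hypothetical items
# coded 1 = strongly disagree ... 5 = strongly agree.
import pandas as pd

items = pd.DataFrame({
    "item1": [5, 4, 2, 3],
    "item2": [4, 4, 1, 3],
    "item3": [5, 3, 2, 2],
    "item4": [4, 5, 1, 3],
})

# Individual items are ordinal: report frequencies or medians, not means
print(items["item1"].value_counts().sort_index())
print(items.median())

# The summed scale score is often treated as interval data, so a mean and
# standard deviation may be reported for it
scale_score = items.sum(axis=1)
print(scale_score.mean(), scale_score.std())
```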
A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.
McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 7 October 2024, from https://www.scribbr.co.uk/research-methods/surveys/
How to write great survey questions (with examples)
Learning how to write survey questions is both art and science. The wording you choose can make the difference between accurate, useful data and just the opposite. Fortunately, we’ve got a raft of tips to help.
Figuring out how to make a good survey that yields actionable insights is all about sweating the details. And writing effective questionnaire questions is the first step.
Essential for success is understanding the different types of survey questions and how they work. Each format needs a slightly different approach to question-writing.
In this article, we’ll share how to write survey questionnaires and list some common errors to avoid so you can improve your surveys and the data they provide.
Survey question types
Did you know that Qualtrics provides 23 question types you can use in your surveys? Some are very popular and used frequently by a wide range of people, from students to market researchers, while others are more specialist and used to explore complex topics. Here’s an introduction to some basic survey question formats, and how to write them well.
Multiple choice
Familiar to many, multiple choice questions ask a respondent to pick from a range of options. You can set up the question so that only one selection is possible, or allow more than one to be ticked.
When writing a multiple choice question…
- Be clear about whether the survey taker should choose one (“pick only one”) or several (“select all that apply”).
- Think carefully about the options you provide, since these will shape your results data.
- The phrase “of the following” can be helpful for setting expectations. For example, if you ask “What is your favorite meal?” and provide only the options “hamburger and fries” and “spaghetti and meatballs”, there’s a good chance your respondent’s true favorite won’t be included. If you add “of the following”, the question makes more sense.
Rank order
Asking participants to rank things in order, whether it’s order of preference, frequency, or perceived value, is done using a rank order structure. There can be a variety of interfaces, including drag-and-drop, radio buttons, text boxes, and more.
When writing a rank order question…
- Explain how the interface works and what the respondent should do to indicate their choice. For example “drag and drop the items in this list to show your order of preference.”
- Be clear about which end of the scale is which. For example, “With the best at the top, rank these items from best to worst”
- Be as specific as you can about how the respondent should consider the options and how to rank them. For example, “thinking about the last 3 months’ viewing, rank these TV streaming services in order of quality, starting with the best”
Slider
Slider structures ask the respondent to move a pointer or button along a scale, usually a numerical one, to indicate their answers.
When writing a slider question…
- Consider whether the question format will be intuitive to your respondents, and whether you should add help text such as “click/tap and drag on the bar to select your answer”
- Qualtrics includes the option for an open field where your respondent can type their answer instead of using a slider. If you offer this, make sure to reference it in the survey question so the respondent understands its purpose.
Text entry
Also known as an open field question, this format allows survey-takers to answer in their own words by typing into the comments box.
When writing a text entry question…
- Use open-ended question structures like “How do you feel about…” “If you said x, why?” or “What makes a good x?”
- Open-ended questions take more effort to answer, so use these types of questions sparingly.
- Be as clear and specific as possible in how you frame the question. Give them as much context as you can to help make answering easier. For example, rather than “How is our customer service?”, write “Thinking about your experience with us today, in what areas could we do better?”
Matrix table
Matrix structures allow you to address several topics using the same rating system, for example a Likert scale (Very satisfied / satisfied / neither satisfied nor dissatisfied / dissatisfied / very dissatisfied).
When writing a matrix table question…
- Make sure the topics are clearly differentiated from each other, so that participants don’t get confused by similar questions placed side by side and answer the wrong one.
- Keep text brief and focused. A matrix includes a lot of information already, so make it easier for your survey-taker by using plain language and short, clear phrases in your matrix text.
- Add detail to the introductory static text if necessary to help keep the labels short. For example, if your introductory text says “In the Philadelphia store, how satisfied were you with the…” you can make the topic labels very brief, for example “staff friendliness” “signage” “price labeling” etc.
Now that you know your rating scales from your open fields, here are the 7 most common mistakes to avoid when you write questions. We’ve also added plenty of survey question examples to help illustrate the points.
Likert Scale Questions
Likert scales are commonly used in market research when dealing with single-topic surveys. They’re simple to answer and are among the more reliable formats for combatting survey bias. For each question or statement, subjects choose from a range of possible responses. The responses, for example, typically include:
- Strongly agree
- Agree
- Neither agree nor disagree
- Disagree
- Strongly disagree
7 survey question examples to avoid.
There are countless great examples of writing survey questions but how do you know if your types of survey questions will perform well? We've highlighted the 7 most common mistakes when attempting to get customer feedback with online surveys.
Survey question mistake #1: Failing to avoid leading words / questions
Subtle wording differences can produce great differences in results. For example, non-specific words and ideas can cause a certain level of confusing ambiguity in your survey. “Could,” “should,” and “might” all sound about the same, but may produce a 20% difference in agreement to a question.
In addition, strong words such as “force” and “prohibit” represent control or action and can bias your results.
Example: The government should force you to pay higher taxes.
No one likes to be forced, and no one likes higher taxes. This agreement scale question makes it sound doubly bad to raise taxes. When survey questions read more like normative statements than questions looking for objective feedback, any ability to measure that feedback becomes difficult.
Wording alternatives can be developed. How about simpler statements such as “The government should increase taxes” or “The government needs to increase taxes”?
Example: How would you rate the career of legendary outfielder Joe DiMaggio?
This survey question tells you Joe DiMaggio is a legendary outfielder. This type of wording can bias respondents.
How about replacing the word “legendary” with “baseball”, as in: How would you rate the career of baseball outfielder Joe DiMaggio? A rating scale question like this gets more accurate answers from the start.
Survey question mistake #2: Failing to give mutually exclusive choices
Multiple choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents.
Review your survey and identify ways respondents could get stuck with either too many or no single, correct answers to choose from.
Example: What is your age group?
- 0-10
- 10-20
- 20-30
- 30-40
What answer would you select if you were 10, 20, or 30? Survey questions with overlapping categories like these will frustrate respondents and invalidate your results.
Example: What type of vehicle do you own?
This question has the same problem. What if the respondent owns a truck, hybrid, convertible, cross-over, motorcycle, or no vehicle at all?
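One way to avoid overlapping brackets like those in the age-group example is to collect age as a number and assign categories afterwards. Below is a minimal sketch; the bracket boundaries and labels are illustrative assumptions.

```python
# A sketch of assigning mutually exclusive age brackets, assuming age is
# collected as a whole number. Bracket boundaries and labels are illustrative.
import pandas as pd

ages = pd.Series([10, 20, 30, 45, 67])

# Each bin includes its upper bound (right=True by default), so an age of 20
# can only ever fall into one bracket.
age_group = pd.cut(
    ages,
    bins=[0, 17, 24, 34, 54, 120],
    labels=["Under 18", "18-24", "25-34", "35-54", "55 or older"],
)
print(age_group.tolist())
```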
Survey question mistake #3: Not asking direct questions
Questions that are vague and do not communicate your intent can limit the usefulness of your results. Make sure respondents know what you’re asking.
Example: What suggestions do you have for improving Tom’s Tomato Juice?
This question may be intended to obtain suggestions about improving taste, but respondents will offer suggestions about texture, the type of can or bottle, about mixing juices, or even suggestions relating to using tomato juice as a mixer or in recipes.
Example: What do you like to do for fun?
Finding out that respondents like to play Scrabble isn’t what the researcher is looking for, but it may be the response received. It is unclear whether the researcher is asking about movies vs. other forms of paid entertainment. A respondent could take this question in many directions.
Survey question mistake #4: Forgetting to add a “prefer not to answer” option
Sometimes respondents may not want you to collect certain types of information or may not want to provide you with the types of information requested.
Questions about income, occupation, personal health, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent.
Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.
While current research does not support that PNA (Prefer Not to Answer) options increase data quality or response rates, many respondents appreciate this non-disclosure option.
Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely.
- What is your race?
- What is your age?
- Did you vote in the last election?
- What are your religious beliefs?
- What are your political beliefs?
- What is your annual household income?
These types of questions should be asked only when absolutely necessary. In addition, they should always include an option to not answer. (e.g. “Prefer Not to Answer”).
Survey question mistake #5: Failing to cover all possible answer choices
Do you have all of the options covered? If you are unsure, conduct a pretest version of your survey using “Other (please specify)” as an option.
If more than 10% of respondents (in a pretest or otherwise) select “other,” you are probably missing an answer. Review the “Other” text your test respondents have provided and add the most frequently mentioned new options to the list.
Example: You indicated that you eat at Joe's fast food once every 3 months. Why don't you eat at Joe's more often?
There isn't a location near my house
I don't like the taste of the food
Never heard of it
This question doesn’t include other options, such as healthiness of the food, price/value or some “other” reason. Over 10% of respondents would probably have a problem answering this question.
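A minimal sketch of the 10% pretest check described above is shown below, assuming the pretest answers are available as a simple list of strings; the answer labels are illustrative.

```python
# A sketch of the 10% "Other" check, assuming pretest answers stored as strings.
from collections import Counter

pretest_answers = [
    "There isn't a location near my house",
    "Other: too expensive",
    "I don't like the taste of the food",
    "Other: not very healthy",
    "Other: too expensive",
    "Never heard of it",
    "There isn't a location near my house",
    "Other: portions too small",
    "I don't like the taste of the food",
    "Never heard of it",
]

other_share = sum(a.startswith("Other") for a in pretest_answers) / len(pretest_answers)
print(f"'Other' share: {other_share:.0%}")

if other_share > 0.10:
    # Review the free-text answers and promote frequent ones to fixed options
    other_text = [a.split(":", 1)[1].strip() for a in pretest_answers if a.startswith("Other")]
    print(Counter(other_text).most_common())
```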
Survey question mistake #6: Not using unbalanced scales carefully
Unbalanced scales may be appropriate for some situations and promote bias in others.
For instance, a hospital might use an Excellent - Very Good - Good - Fair scale where “Fair” is the lowest customer satisfaction point because they believe “Fair” is absolutely unacceptable and requires correction.
The key is to correctly interpret your analysis of the scale. If “Fair” is the lowest point on a scale, then a result slightly better than fair is probably not a good one.
Additionally, scale points should represent equi-distant points on a scale. That is, they should have the same conceptual distance from one point to the next.
For example, researchers have shown the points to be nearly equi-distant on the strongly disagree–disagree–neutral–agree–strongly agree scale.
Set your bottom point as the worst possible situation and top point as the best possible, then evenly spread the labels for your scale points in-between.
Example: What is your opinion of Crazy Justin's auto-repair?
Pretty good
The Best Ever
This question puts the center of the scale at fantastic, and the lowest possible rating as “Pretty Good.” This question is not capable of collecting true opinions of respondents.
Survey question mistake #7: Not asking only one question at a time
There is often a temptation to ask multiple questions at once. This can cause problems for respondents and influence their responses.
Review each question and make sure it asks only one clear question.
Example: What is the fastest and most economical internet service for you?
This is really asking two questions. The fastest is often not the most economical.
Example: How likely are you to go out for dinner and a movie this weekend?
Even though “dinner and a movie” is a common term, this is two questions as well. It is best to separate the activities into different questions or give respondents options that cover each case:
- Dinner and a movie
- Dinner only
- Movie only
- Neither
5 more tips on how to write a survey
Here are 5 easy ways to help ensure your survey results are unbiased and actionable.
1. Use the Funnel Technique
Structure your questionnaire using the “funnel” technique. Start with broad, general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey before giving them a challenge. The most difficult questions are placed in the middle – those that take time to think about and those that are of less general interest. At the end, we again place general questions that are easier to answer and of broad interest and application. Typically, these last questions include demographic and other classification questions.
2. Use “Ringer” questions
In social settings, are you more introverted or more extroverted?
That was a ringer question and its purpose was to recapture your attention if you happened to lose focus earlier in this article.
Questionnaires often include “ringer” or “throw away” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey. While these questions will definitely spice up a boring survey, they require valuable space that could be devoted to the main topic of interest. Use this type of question sparingly.
3. Keep your questionnaire short
Questionnaires should be kept short and to the point. Most long surveys are not completed, and the ones that are completed are often answered hastily. A quick look at a survey containing page after page of boring questions produces a response of, “there is no way I’m going to complete this thing”. If a questionnaire is long, the person must either be very interested in the topic, an employee, or paid for their time. Web surveys have some advantages because the respondent often can't view all of the survey questions at once. However, if your survey's navigation sends them page after page of questions, your response rate will drop off dramatically.
How long is too long? The sweet spot is to keep the survey to less than five minutes. This translates into about 15 questions. The average respondent is able to complete about 3 multiple choice questions per minute. An open-ended text response question counts for about three multiple choice questions depending, of course, on the difficulty of the question. While only a rule of thumb, this formula will accurately predict the limits of your survey.
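Written out, the rule of thumb looks like this: count each open-ended question as roughly three multiple-choice questions and assume about three multiple-choice questions per minute. The sketch below is illustrative; the question counts are made up.

```python
# A sketch of the length rule of thumb: ~3 multiple-choice questions per minute,
# with each open-ended question counting as roughly 3 multiple-choice questions.
def estimated_minutes(multiple_choice: int, open_ended: int) -> float:
    equivalent_questions = multiple_choice + 3 * open_ended
    return equivalent_questions / 3

print(estimated_minutes(multiple_choice=12, open_ended=1))  # 5.0 minutes: at the limit
print(estimated_minutes(multiple_choice=20, open_ended=3))  # ~9.7 minutes: too long
```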
4. Watch your writing style
The best survey questions are always easy to read and understand. As a rule of thumb, the level of sophistication in your survey writing should be at the 9th to 11th grade level. Don’t use big words. Use simple sentences and simple choices for the answers. Simplicity is always best.
5. Use randomization
We know that being the first on the list in elections increases the chance of being elected. Similar bias occurs in all questionnaires when the same answer appears at the top of the list for each respondent. Randomization corrects this bias by randomly rotating the order of the multiple choice matrix questions for each respondent.
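Most survey platforms handle this rotation automatically, but the idea is simple enough to sketch: give each respondent a freshly shuffled copy of the answer options. The option labels below are placeholders.

```python
# A sketch of per-respondent answer-option rotation. Option labels are placeholders.
import random

OPTIONS = ["Brand A", "Brand B", "Brand C", "Brand D"]

def presented_order(options):
    """Return a fresh random ordering of the answer options for one respondent."""
    shuffled = options.copy()
    random.shuffle(shuffled)
    return shuffled

for respondent_id in range(3):
    print(respondent_id, presented_order(OPTIONS))
```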
While not totally inclusive, these seven common mistakes are frequent offenders when writing survey questions. And the five tips above should steer you in the right direction.
Focus on creating clear questions and having an understandable, appropriate, and complete set of answer choices. Great questions and great answer choices lead to great research success. To learn more about survey question design, download our eBook, The Qualtrics survey template guide or get started with a free survey account with our world-class survey software .
Surveys & Questionnaires
Surveys involve asking a series of questions to participants. They can be administered online, in person, or remotely (e.g. by post/mail). The data collected can be analysed quantitatively or qualitatively (or both). Researchers might carry out statistical surveys to make statistical inferences about the population being studied. Such inferences depend strongly on the survey questions used (Solomon, 2001), meaning that getting the wording right is crucial. For this reason, many test out surveys in pilot studies with smaller populations and use the results to refine their survey instrument.
Sampling for surveys can range between self-selection (e.g. where a link is shared with members of a target population in the hope they and others contribute data and share the survey) through to the use of specialised statistical techniques (“probability sampling”) that analyse results from a carefully selected sample to draw statistical conclusions about the wider population. Survey methodologies therefore cover a range of considerations including sampling, research instrument design, improving response rates, ensuring quality in data, and methods of analysis (Groves et al., 2011).
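As a minimal sketch of the simplest probability-sampling approach (simple random sampling from a sampling frame), the snippet below draws an equal-probability sample. The frame of hypothetical email addresses and the sample size are illustrative assumptions.

```python
# A sketch of simple random sampling from a sampling frame. The frame of
# hypothetical email addresses and the sample size are illustrative.
import random

sampling_frame = [f"student{i}@university.example" for i in range(1, 5001)]

random.seed(42)  # fixed seed only so the example is reproducible
sample = random.sample(sampling_frame, k=350)  # every member has an equal chance

print(len(sample), sample[:3])
```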
One common approach is to collect quantitative data alongside qualitative questions. This allows respondents to provide a more detailed description or justification for the answers they give. Collecting ordinal data (e.g. ranking of preferences through a Likert scale) can be a way to make qualitative data more amenable to quantitative analysis. But there is no one superior approach: the crucial thing is that the survey questions and their phrasing align correctly with the research question(s).
Surveys are widely used in education science and in the social sciences more generally. Surveys are highly efficient (both in terms of time and money) compared with other methods, and can be administered remotely. They can provide a series of data points on a subject which can be compared across the sample group(s). This provides a considerable degree of flexibility when it comes to analysing data, as several variables may be tested at once. Surveys also work well when used alongside other methods, perhaps to provide a baseline of data (such as demographics) as the first step in a research study. They are also commonly used in evaluations of teaching & learning (i.e. after an intervention, to assess its impact). However, there are some noteworthy disadvantages to using surveys. Respondents may not feel encouraged to provide accurate answers, or may not feel comfortable providing answers that present themselves in an unfavourable manner (particularly if the survey is not anonymous). “Closed” questions may have a lower validity rate than other question types as they might be interpreted differently. Data errors due to question non-response may exist, creating bias. Survey answer options should be selected carefully because they may be interpreted differently by respondents (Vehovar & Manfreda, 2008).
Surveys & Questionnaires: GO-GN Insights
Marjon Baas collected quantitative data through a questionnaire among teachers within an OER Community of Practice, to explore how the activities undertaken to encourage use of the community affected teachers’ behaviour in relation to OER.
“I used several theoretical models (Clements and Pawlowski, 2012; Cox and Trotter, 2017; Armellini and Nie, 2013) to conceptualise different aspects (that relate to) OER adoption. This enabled me as a researcher to design my specific research instruments.”
Judith Pete deliberately selected twelve Sub-Saharan African universities across Kenya, Ghana, and South Africa, with randomly sampled students and lecturers, to develop a representative view of OER. Separate questionnaires were used for students (n=2249) and lecturers (n=106).
“We used surveys to collect data across three continents. Online survey tools were very helpful in online data collection and, where that was not possible, local coordinators used physical copies of the survey and later entered the information into the database. This approach was cost-effective, versatile and quick and easy to implement. We were able to reach a wide range of respondents in a short time. Sometimes we wondered, though, whether all those who responded had enough time to fully process and understand the questions that they were being asked. We had to allocate a significant amount of time to curating the data afterwards.”
Samia Almousa adopted the Unified Theory of Acceptance and Use of Technology (UTAUT) survey questionnaire, along with additional constructs (relating to information quality and culture), as a lens through which her research data is analysed.
“In my research, I have employed a Sequential Explanatory Mixed Methods Design (online questionnaires and semi-structured interviews) to examine the academics’ perceptions of OERs integration into their teaching practices, as well as to explore the motivations that encourage them to use and reuse OERs, and share their teaching materials in the public domain. The online questionnaire was an efficient and fast way to reach a large number of academics. I used the online survey platform, which does not require entering data or coding as data is input by the participants and answers are saved automatically (Sills & Song, 2002). Using questionnaires as a data collection tool has some drawbacks. In my study, the questionnaire I developed was long, which made some participants choose their answers randomly. In addition, I have received many responses from academics in other universities although the questionnaire was sent to the sample university. Since I expected this to happen, I required the participants to write the name of their university in the personal information section of the questionnaire, then excluded the responses from outside the research sample. My advice for any researcher attempting to use questionnaires as a data collection tool is to ensure that their questionnaire is as short and clear as possible to help the researcher in analysing the findings and the participants in answering all questions accurately. Additionally, personal questions should be as few as possible to protect the identity and privacy of the participants, and to obtain the ethical approval quickly.”
Olawale Kazeem Iyikolakan adopted a descriptive survey of the correlational type. The author’s research design examines the relationship among the key research variables (technological self-efficacy, perception, and use of open educational resources) and aims to identify the most significant factors that influence the academic performance of LIS undergraduates, without asserting a causal connection.
“The descriptive research design is used as a gathering of information about prevailing conditions or situations for the purpose of description and interpretation (Aggarwal, 2008). My research design examines the relationship among the key research variables (technological self-efficacy, perception, and use of open educational resources) to identify the most significant factors that influence academic performance of Library & Information Science undergraduates without a causal connection. Ponto (2015) describes that descriptive survey research is a useful and legitimate approach to research that has clear benefits in helping to describe and explore variables and constructs of interest by using quantitative research strategies (e.g., using a survey with numerically rated items). The reason for the choice of descriptive survey research instead of ex-post-facto quasi-experimental design is that this type of research design is used to capture people’s perceptions, views, use, about a current issue, current state of play or movements such as perception and use of OER. This research design comes with several merits as it enables the researcher to obtain the needed primary data directly from the respondents. Other advantages include: (1) Using this method, the researcher has no control over the variable; (2) the researcher can only report what has happened or what is happening. One of the demerits of this type of research design is that research results may reflect a certain level of bias due to the absence of statistical tests.”
Useful references for Surveys & Questionnaires: Aggarwal (2008); Fowler (2014); Groves et al., 2011); Lefever, Dal & Matthíasdóttir (2007); Ponto (2015); Sills & Song (2002); Solomon (2001); Vehovar & Manfreda (2008); Vehovar, Manfreda, & Berzelak (2018)
Research Methods Handbook Copyright © 2020 by Rob Farrow; Francisco Iniesto; Martin Weller; and Rebecca Pitt is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.
Chapter 8: Data Collection Methods: Survey Research
8.6 Designing Effective Survey Questions
We have considered several general points about surveys, including some of their pros and cons, as well as when to use surveys, and how often and in what ways to administer them. In this section we will get more specific and take a look at how to pose understandable questions that will yield useable data and how to present those questions on your survey.
Asking Effective Survey Questions
The first thing you need to do in order to write effective survey questions is to identify what exactly it is that you wish to know. While that should go without saying, we cannot stress enough how easy it is to forget to include important questions when designing a survey. For example, suppose you want to understand how students at your school made the transition from high school to college. You wish to identify which students were comparatively more or less successful in this transition and which factors contributed to students’ success or lack thereof. To understand which factors shaped successful students’ transitions to college, you will need to include questions in your survey about all the possible factors that could contribute. Consulting the literature on the topic will certainly help, but you should also take the time to do some brainstorming on your own and to talk with others about what they think may be important in the transition to college. Perhaps time or space limitations will not allow you to include every single item you have come up with, so you will need to think about ranking your questions to be sure to include those that you view as most important.
Although we have stressed the importance of including questions on all topics you view as important to your overall research question, you do not want to take an everything-but-the-kitchen-sink approach by uncritically including every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your respondents to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you view as important.
Once you have identified all the topics about which you would like to ask questions, you will need to actually write those questions. Questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and succinct as possible. The best way to show your appreciation for your respondents’ time is to not waste it. Ensuring that your questions are clear and not overly wordy will go a long way toward showing your respondents the gratitude they deserve.
To properly value respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete your survey. This means two things: first, that respondents have knowledge about your survey topic, and second, that respondents have experience with the events, behaviours, or feelings you are asking them to report. In our example of the transition to college, heeding the criterion of relevance would mean that respondents must understand what exactly you mean by “transition to college” (if you are going to use that phrase in your survey) and have actually experienced the transition to college themselves.
When developing survey questions, a researcher must consider the following aspects:
Context effects: This can be a function of funneling or be inadvertent, but questions that are asked can prime (i.e., make more salient) certain views or thoughts that then impact the way respondents answer subsequent questions. For example, if we ask you a number of questions about harm reduction and the Insite Safe Injection Site, and then ask you whether you support the Safe Injection Site, you may be more likely to support the site than if we had asked you several questions about crime in the area of the site before asking you if you support the site.
Context appropriate wording: It is important that the wording you choose is appropriate for the people who are going to be answering your questions. You should not ask people questions they cannot understand due to their age or language barriers (including jargon). Use vocabulary appropriate for the people who are answering your survey.
Minimizing bias: Questions with loaded terms (e.g., adjectives like disgusting, dangerous, or wonderful; and terms like always or never) and non-neutral wording should be avoided. These questions ultimately lead people to the “correct” answer. The tone of the question will also impact how people answer. People answering the questions should not feel judged for their response or their opinion. If they do, they are less likely to answer the question honestly; instead, they will answer the question the way they think you want them to respond.
Ambiguity: Questions can be ambiguous in many ways. This is one area that can benefit from pilot testing (or pre-testing) your questions to determine which questions can be interpreted differently from your intended meaning. In particular, use of words like “often” or “sometimes” can result in different interpretations. However, even words that appear to be clear to the researcher can be misinterpreted by the respondents and make the question difficult for them to answer. Acronyms can also make questions difficult to answer if they are unknown to the respondents. As noted above, context appropriate wording for the audience responding to the questions should be considered; thus, acronyms are sometimes appropriate.
Meaningless responses: People can and do respond to questions about things about which they have no knowledge. As a researcher, you want responses from people who have some knowledge of the subject or ability to meaningfully answer the question.
Double-barreled questions: This type of question should be avoided at all costs – essentially this is a question where there is more than one question within it. For example: Do you enjoy biking and hiking in your free time? If a respondent enjoys biking but not hiking, how do they respond?
If you decide that you do wish to pose some questions about matters with which only a portion of respondents will have had experience, it may be appropriate to introduce a filter question into your survey. A filter question is designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample.
There are some ways of asking questions that are bound to confuse survey respondents. Researchers should take great care to avoid these kinds of questions. These include: questions that pose double negatives, those that use confusing or culturally specific terms, and those that ask more than one question but are posed as a single question. Any time respondents are forced to decipher questions that utilize two forms of negation, confusion is bound to ensue. In general, avoiding negative terms in your question wording will help to increase respondent understanding. You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using).
Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions, whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favourable light. Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college. We all know that cheating on exams is wrong, so it may be difficult to get people to admit to cheating on an exam in a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behaviour. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Babbie (2010) offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.
Finally, it is important to get feedback on your survey questions in a pre-test, from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better are the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.
In order to pose effective survey questions, researchers should do the following:
- Identify what it is they wish to know.
- Keep questions clear and succinct.
- Make questions relevant to respondents.
- Use filter questions when necessary.
- Avoid questions that are likely to confuse respondents, such as those that use double negatives or culturally specific terms, or pose more than one question in the form of a single question (double-barreled question).
- Imagine how they would feel responding to these questions themselves.
- Get feedback, especially from people who resemble those in the researcher’s sample.
Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Research Methods Guide: Survey Research
Survey Jokes
"There's never an option that reflects what I exactly want to say."
"Sorry, sir, but we don't have a category for that."
"I'm an honest person but when I take an online survey, I'm a big liar."
"93% of the people who took the survey thought it was 100% waste of time."
"Contrary to popular opinion, most people's opinions aren't so popular."
"According to the latest poll, 64% of the public don't pay any attention to polls..."
"Do you really care about my opinion or am I just a random sample?"
"Having conducted a survey of 32.4% of the 56.6% of tax experts about 43% of the budget we found an 87.6% probability that we haven't got the foggiest ideas of what it means."
Goals of Survey Research
Surveys are designed to:
- Identify respondents' characteristics, opinions, and preferences
- Describe demographic information (e.g., age, gender, school year, affiliation)
- Draw patterns from the population studied
- Explain trends in a phenomenon
Modes of Data Collection
There are several types of surveys, including:
- Face-to-Face
- Paper and Pencil
- Online (Web-based): e.g., SurveyMonkey , Qualtrics , Question Pro
FAQ: Designing Survey Questions
How can I develop effective survey questions?
Clarity, simplicity, length, and acceptability are key to creating effective survey questions. In crafting your survey, you should avoid complicated, long, and ambiguous questions. Try not to address too many issues in one question (avoid double-barreled questions). Try not to ask too many questions in one survey, and try not to ask questions that are too difficult.
What question formats are there?
There are two basic formats: 1) Forced (closed-ended), and 2) Open-ended. You can use a combination of both in a survey depending on your goal.
Can you give me some examples of closed- and open-ended questions?
Let's say you are interested in studying people's environmental concerns through their shopping habits:
- A closed-ended (forced-choice) question offering only "Yes" or "No" (Note: Survey respondents are forced to choose one of the two.)
- A closed-ended question offering "Very infrequently", "Somewhat infrequently", "Occasionally", "Somewhat frequently", "Very frequently" (Note: Survey respondents are forced to choose one out of multiple choices.)
- "In the last 30 days, how many times have you used a reusable shopping bag when doing grocery shopping?" (Note: Survey respondents are allowed to use either numeric values or text entries.)
- "Please describe why (or why not) you use a reusable shopping bag for grocery shopping." (Note: Survey respondents can freely write their answers if they want to).
Does ordering of questions influence survey results?
Yes, it does. The natural and logical flow of a survey is important for collecting good survey results. Start and end the survey with easy questions, and start with the most familiar ones. Keep in mind that a high response rate does not guarantee a high survey completion rate - in many online surveys, people do not always complete the survey.
Do you have any other suggestions for conducting a good survey research?
- Invitation: Creating a good invitation to participate in your survey is important. In your survey invitation (email) letter, try to include 1-2 sentences describing the purpose or goal of your survey.
- Mention length of survey
- Ensure that responses are confidential
- Ensure that participation is voluntary
- Provide contact information in the cases where participants have questions about survey
- Pretesting: Before administering a survey, make sure you test your survey in advance. Survey pretesting will help you determine the effectiveness of your survey.
Methods 101: Survey Question Wording
The second video in Pew Research Center’s Methods 101 series helps explain question wording – a concept at the center of sound public opinion survey research – and why it’s important. Writing clear and neutral survey questions is much more difficult than it might seem. We spend a lot of time thinking about the phrasing and ordering of our survey questions. Paying close attention to these seemingly minor factors makes a huge difference. It helps us avoid the trap of poorly worded or leading questions, which can skew survey results.
Research Methods Knowledge Base
Types of Survey Questions
Survey questions can be divided into two broad types: structured and unstructured. From an instrument design point of view, the structured questions pose the greater difficulties (see Decisions About the Response Format). From a content perspective, it may actually be more difficult to write good unstructured questions. Here, I’ll discuss the variety of structured questions you can consider for your survey (we’ll discuss unstructured questioning more under Interviews).
Dichotomous Questions
When a question has two possible responses, we consider it dichotomous. Surveys often use dichotomous questions that ask for a Yes/No, True/False or Agree/Disagree response. There are a variety of ways to lay these questions out on a questionnaire.
Questions Based on Level Of Measurement
We can also classify questions in terms of their level of measurement. For instance, we might measure occupation using a nominal question. Here, the number next to each response has no meaning except as a placeholder for that response. The choice of a “2” for a lawyer and a “1” for a truck driver is arbitrary – from the numbering system used we can’t infer that a lawyer is “twice” something that a truck driver is.
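As a small illustration of this point, nominal response codes are just labels: the only meaningful operation on them is counting. The sketch below is hypothetical (the occupations, codes, and responses are invented for the example):

```python
from collections import Counter

# Hypothetical nominal responses: the integer codes are arbitrary labels.
occupation_codes = {1: "truck driver", 2: "lawyer", 3: "teacher"}
responses = [1, 2, 2, 3, 1, 2]

# Counting category frequencies is meaningful; arithmetic on the codes is not.
counts = Counter(occupation_codes[code] for code in responses)
print(counts)  # Counter({'lawyer': 3, 'truck driver': 2, 'teacher': 1})
```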
We might ask respondents to rank order their preferences for presidential candidates using an ordinal question:
We want the respondent to put a 1, 2, 3 or 4 next to the candidates, where 1 is the respondent’s first choice. Note that this could get confusing: the respondent might simply check their favorite candidate, or assign higher numbers to the candidates they prefer more, instead of rank ordering them. We might therefore want to state the prompt more explicitly so the respondent knows we want a number from 1 to 4.
We can also construct survey questions that attempt to measure on an interval level. One of the most common of these types is the traditional 1-to-5 rating (or 1-to-7, or 1-to-9, etc.). This is sometimes referred to as a Likert response scale (see Likert Scaling). Here, we see how we might ask an opinion question on a 1-to-5 bipolar scale (it’s called bipolar because there is a neutral point and the two ends of the scale are at opposite positions of the opinion):
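Once responses to such a 1-to-5 item have been collected, they are commonly summarized as a frequency distribution and an average. The sketch below is a minimal, hypothetical illustration (the responses are invented), and treating a Likert response scale as interval rather than ordinal is itself a matter of debate; averaging is shown here only because it is common practice.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to a 1-to-5 bipolar item (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 2, 4, 4, 5, 3, 1, 4]

print(Counter(responses))          # frequency of each scale point
print(round(mean(responses), 2))   # 3.5 – average rating, treating the scale as interval
```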
Another interval question uses an approach called the semantic differential. Here, an object is assessed by the respondent on a set of bipolar adjective pairs (using a 5-point rating scale):
Finally, we can also get at interval measures by using what is called a cumulative or Guttman scale (see Guttman Scaling). Here, the respondent checks each item with which they agree. The items themselves are constructed so that they are cumulative – if you agree to one, you probably agree to all of the ones above it in the list:
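As a rough sketch of how cumulative responses are typically scored (the items and answers below are invented, loosely in the style of a social-distance scale), the respondent’s score is simply the number of items endorsed; under a perfect cumulative pattern, that count also tells you which items they agreed with.

```python
# Hypothetical Guttman-style items, ordered from easiest to hardest to endorse.
items = [
    "I am willing to live in the same country as X",
    "I am willing to live in the same town as X",
    "I am willing to live next door to X",
    "I am willing to have X as a close friend",
]

# One respondent's checkmarks (True = agreed), matching the item order above.
answers = [True, True, True, False]

# Under a perfect cumulative pattern, the score is the number of endorsed items.
score = sum(answers)
print(score)  # 3 -> agreed with the first three items
```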
Filter or Contingency Questions
Sometimes you have to ask the respondent one question in order to determine if they are qualified or experienced enough to answer a subsequent one. This requires using a filter or contingency question. For instance, you may want to ask one question if the respondent has ever smoked marijuana and a different question if they have not. In this case, you would have to construct a filter question to determine whether they’ve ever smoked marijuana:
Filter questions can get very complex. Sometimes, you have to have multiple filter questions in order to direct your respondents to the correct subsequent questions. There are a few conventions you should keep in mind when using filters:
- Try to avoid having more than three levels (two jumps) for any question. Too many jumps will confuse the respondent and may discourage them from continuing with the survey.
- If there are only two levels, use a graphic (e.g. an arrow and box) to direct the jump. The example above shows how you can make effective use of an arrow and box to help direct the respondent to the correct subsequent question.
- If possible, jump to a new page. If you can’t fit the response to a filter on a single page, it’s probably best to say something like “If YES, please turn to page 4” rather than “If YES, please go to Question 38”, because the respondent will generally have an easier time finding a page than a specific question.
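To make the branching behind a filter question explicit, here is a minimal, hypothetical sketch of contingency logic as it might be implemented in an electronic questionnaire (the question wording and routing are invented for illustration):

```python
def administer_filter(ask):
    """Route the respondent with a single filter question and one jump (two levels)."""
    ever_smoked = ask("Have you ever smoked marijuana? (yes/no) ").strip().lower() == "yes"
    if ever_smoked:
        # Contingent question, asked only of respondents who qualify.
        return {"ever_smoked": True,
                "last_year_use": ask("About how many times in the last year? ")}
    # Everyone else skips ahead, mirroring the "If YES, go to..." paper convention.
    return {"ever_smoked": False}

if __name__ == "__main__":
    print(administer_filter(input))
```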
Survey Research: Types, Examples & Methods
Surveys are one of the most effective methods of conducting research. They help you gather relevant data from a large audience and arrive at valid, objective conclusions.
Just like other research methods, survey research has to be conducted the right way to be effective. In this article, we’ll dive into the nitty-gritty of survey research and show you how to get the most out of it.
What is Survey Research?
Survey research is simply a systematic investigation conducted via a survey. In other words, it is a type of research carried out by administering surveys to respondents.
Surveys already serve as a great method of opinion sampling and finding out what people think about different contexts and situations. Applying this to research means you can gather first-hand information from persons affected by specific contexts.
Survey research proves useful in numerous primary research scenarios. Consider a restaurant that wants to gather feedback from its customers on its new signature dish. A good way to do this is to conduct survey research on a defined customer demographic.
By doing this, the restaurant can gather primary data from the customers (respondents) about what they think and feel about the new dish across multiple facets. This means it has more valid and objective information to work with.
Why Conduct Survey Research?
One of the strongest arguments for survey research is that it helps you gather authentic data sets for a systematic investigation. Survey research is a gateway to collecting specific information from defined respondents, first-hand.
Surveys combine different question types, which makes it easy for you to collect a wide range of information from respondents. When you come across a questionnaire for survey research, you’re likely to see a neat blend of close-ended and open-ended questions, together with other survey response scale questions.
Apart from what we’ve discussed so far, here are some other reasons why survey research is important:
- It gives you insights into respondents’ behaviors and preferences, which is valuable in any systematic investigation.
- Many times, survey research is structured in an interactive manner which makes it easier for respondents to communicate their thoughts and experiences.
- It allows you to gather important data that proves useful for product improvement; especially in market research.
Characteristics of Survey Research
- Usage: Survey research is mostly deployed in the field of social science, especially to gather information about human behavior in different social contexts.
- Systematic: Like other research methods, survey research is systematic. This means that it is usually conducted in line with empirical methods and follows specific processes.
- Replicable: In survey research, applying the same methods often translates to achieving similar results.
- Types: Survey research can be conducted using forms (offline and online) or via structured, semi-structured, and unstructured interviews.
- Data: The data gathered from survey research is mostly quantitative, although it can be qualitative.
- Impartial Sampling: The data sample in survey research is selected at random, which helps to minimize avoidable sampling biases.
- Ecological Validity: Survey research often makes use of data samples obtained from real-world occurrences.
Types of Survey Research
Survey research can be subdivided into different types based on its objectives, data source, and methodology.
Types of Survey Research Based on Objective
- Exploratory Survey Research
Exploratory survey research is aimed at finding out more about the research context. Here, the survey research pays attention to discovering new ideas and insights about the research subject(s) or contexts.
Exploratory survey research is usually made up of open-ended questions that allow respondents to fully communicate their thoughts and varying perspectives on the subject matter. In many cases, systematic investigation kicks off with an exploratory research survey.
- Predictive Survey Research
This type of research is also referred to as causal survey research because it pays attention to the causative relationship between the variables in the survey research. In other words, predictive survey research pays attention to existing patterns to explain the relationship between two variables.
It can also be referred to as conclusive research because it allows you to identify causal variables and resultant variables; that is cause and effect. Predictive variables allow you to determine the nature of the relationship between the causal variables and the effect to be predicted.
- Descriptive Survey Research
Unlike predictive research, descriptive survey research is largely observational. It is ideal for quantitative research because it helps you to gather numeric data.
The questions listed in descriptive survey research help you to uncover new insights into the actions, thoughts, and feelings of survey respondents. With this data, you can determine the extent to which different conditions hold among these subjects.
Types of Survey Research Based on Data Source
- Secondary Data
Survey research can be designed to collect and process secondary data. Secondary data is data that has been collected from primary sources in the past and is readily available for use; in other words, it is data that already exists.
Since secondary data is gathered from third-party sources, it is mostly generic, unlike primary data that is specific to the research context. Common sources of secondary data in survey research include books, data collected through other surveys, online data, data from government archives, and libraries.
- Primary Data
This is the type of research data that is collected directly; that is, data collected from first-hand sources. Primary data is usually tailored to a specific research context so that it reflects the aims and objectives of the systematic investigation.
One of the strongest points of primary data over its secondary counterpart is validity. Because it is collected directly from first-hand sources, primary data typically results in objective research findings.
You can collect primary data via interviews, surveys and questionnaires, and observation.
Types of Survey Research Based on Methodology
- Quantitative Research
Quantitative research is a common research method used to gather numerical data in a systematic investigation. It is often deployed in research contexts that require statistical information to arrive at valid results, such as the natural and social sciences.
For instance, an organization looking to find out how many people are using its product in a particular location can administer survey research to collect useful quantitative data. Other quantitative research methods include polls, face-to-face interviews, and systematic observation.
- Qualitative Research
This is a method of systematic investigation that is used to collect non-numerical data from research participants. In other words, it is a research method that allows you to gather open-ended information from your target audience.
Typically, organizations deploy qualitative research methods when they need to gather descriptive data from their customers; for example, when they need to collect customer feedback in product evaluation. Qualitative research methods include one-on-one interviews, observation, case studies, and focus groups.
Survey Research Scales
- Nominal Scale
This is a type of survey research scale that uses numbers to label the different answer options in a survey. On a nominal scale, the numbers have no value in themselves; they simply serve as labels for qualitative variables in the survey.
In cases where a nominal scale is used for identification, there is typically a one-to-one relationship between the numeric value and the variable it represents. On the other hand, when the scale is used for classification, each number serves as a label or a tag.
Examples of Nominal Scale in Survey Research
1. How would you describe your complexion?
2. Have you used this product?
- Ordinal Scale
This is a type of variable measurement scale that arranges answer options in a specific ranking order without necessarily indicating the degree of variation between these options. Ordinal data is qualitative and can be named, ranked, or grouped.
On an ordinal scale, the precise distances between the variables are unknown; the scale identifies, describes, and shows the rank of the different variables. With an ordinal scale, it is easier for researchers to measure the degree of agreement and/or disagreement with different variables.
With ordinal scales, you can measure non-numerical attributes such as the degree of happiness, agreement, or opposition of respondents in specific contexts. Using an ordinal scale makes it easy for you to compare variables and process survey responses accordingly.
Examples of Ordinal Scale in Survey Research
1. How often do you use this product?
- Prefer not to say
2. How much do you agree with our new policies?
- Totally agree
- Somewhat agree
- Totally disagree
- Interval Scale
This is a type of survey scale that is used to measure variables existing at equal intervals along a common scale. In some ways, it combines the attributes of nominal and ordinal scales, since it is used where there is order and a meaningful difference between two variables.
With an interval scale, you can quantify the difference in value between two variables in survey research. In addition to this, you can carry out other mathematical processes like calculating the mean and median of research variables.
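For instance, here is a minimal sketch (with invented ratings) of the arithmetic that an interval-level response scale supports:

```python
from statistics import mean, median

# Hypothetical responses to a 1-to-5 agreement item, treated as interval data.
ratings = [5, 4, 4, 3, 5, 2, 4, 3]

print(mean(ratings))    # 3.75 – differences between scale points are treated as equal
print(median(ratings))  # 4.0
```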
Examples of Interval Scale in Survey Research
1. Our customer support team was very effective.
- Completely agree
- Neither agree nor disagree
- Somewhat disagree
- Completely disagree
2. I enjoyed using this product.
Another example of an interval scale can be seen in the Net Promoter Score.
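For readers unfamiliar with it, the Net Promoter Score is derived from a 0-10 "How likely are you to recommend us?" item: respondents scoring 9-10 count as promoters, 7-8 as passives, and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch with invented responses:

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 recommendation item."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey responses.
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9, 8, 5]))  # 10.0
```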
- Ratio Scale
Just like the interval scale, the ratio scale is quantitative, and it is used when you need to compare intervals or differences in survey research. It is the highest level of measurement and combines properties of the other survey scales.
One of the unique features of the ratio scale is that it has a true zero as well as equal intervals between the points on the scale. The zero indicates an absence of the variable being measured. Common examples of ratio scales include distance (length), area, and population measurements.
Examples of Ratio Scale in Survey Research
1. How old are you?
- Below 18 years
- 41 and above
2. How many times do you shop in a week?
- Less than twice
- Three times
- More than four times
Uses of Survey Research
- Health Surveys
Survey research is used by health practitioners to gather useful data from patients in different medical and safety contexts. It helps you to gather primary and secondary data about medical conditions and risk factors of multiple diseases and infections.
In addition to this, administering health surveys regularly helps you to monitor the overall health status of your population; whether in the workplace, school, or community. This kind of data can be used to help prevent outbreaks and minimize medical emergencies in these contexts.
- Polls
Survey research is also useful when conducting polls, whether online or offline. A poll is a data collection tool that helps you to gather public opinion about a particular subject from a well-defined research sample.
By administering survey research, you can gather valid data from a well-defined research sample, and utilize research findings for decision making. For example, during elections, individuals can be asked to choose their preferred leader via questionnaires administered as part of survey research.
- Customer Satisfaction
Customer satisfaction is one of the cores of every organization as it is directly concerned with how well your product or service meets the needs of your clients. Survey research is an effective way to measure customer satisfaction at different intervals.
As a restaurant, for example, you can send out online surveys to customers immediately after they patronize your business. In these surveys, encourage them to provide feedback on their experience and information on how your service delivery can be improved.
- Census
Survey research makes data collection and analysis easy during a census. With an online survey tool like Formplus, you can seamlessly gather data during a census without moving from one spot. Formplus has multiple sharing options that help you collect information without stress.
Survey Research Methods
Survey research can be done using different online and offline methods. Let’s examine a few of them here.
- Telephone Surveys
This is a means of conducting survey research via phone calls. In a telephone survey, the researcher places a call to the survey respondents and gathers information from them by asking questions about the research context under consideration.
A telephone survey simulates the face-to-face survey experience to some extent, since it involves talking with respondents to gather and process valid data. However, the major challenges with this method are that it is expensive and time-consuming.
- Online Surveys
An online survey is administered over the internet using a data collection tool like Formplus. Online surveys work better than paper forms and other offline survey methods because you can easily gather and process data from a large sample size with them.
- Face-to-Face Interviews
Face-to-face interviews for survey research can be structured, semi-structured, or unstructured depending on the research context and the type of data you want to collect. If you want to gather qualitative data , then unstructured and semi-structured interviews are the way to go.
On the other hand, if you want to collect quantifiable information from your research sample, conducting a structured interview is the best way to go. Face-to-face interviews can also be time-consuming and cost-intensive. Let’s mention here that face-to-face surveys are one of the most widely used methods of survey data collection.
How to Conduct Research Surveys on Formplus
With Formplus, you can create forms for survey research without any hassles. Follow this step-by-step guide to create and administer online surveys for research via Formplus.
1. Sign up at www.formpl.us to create your Formplus account. If you already have a Formplus account, click here to log in.
5. Use the form customization options to change the appearance of your survey. You can add your organization’s logo to the survey, change the form font and layout, and insert preferred background images.
Advantages of Survey Research
- It is inexpensive – with survey research, you can avoid the cost of in-person interviews. It’s also easy to receive data, as you can share your surveys online and get responses from a large demographic.
- It is the fastest way to get a large amount of first-hand data
- Surveys allow you to compare the results you get through charts and graphs
- It is versatile as it can be used for any research topic
- Surveys are perfect for anonymous respondents in the research
Disadvantages of Survey Research
- Some questions may not get answers
- People may understand survey questions differently
- It may not be the best option for respondents with visual or hearing impairments, or for demographics with low literacy levels
- People can provide dishonest answers in survey research
Conclusion
In this article, we’ve discussed survey research extensively; touching on different important aspects of this concept. As a researcher, organization, individual, or student, it is important to understand how survey research works to utilize it effectively and get the most from this method of systematic investigation.
As we’ve already stated, conducting survey research online is one of the most effective methods of data collection as it allows you to gather valid data from a large group of respondents. If you’re looking to kick off your survey research, you can start by signing up for a Formplus account here.
J Korean Med Sci. 2023;38(48). PMC10713437
Designing, Conducting, and Reporting Survey Studies: A Primer for Researchers
Olena Zimba
1 Department of Clinical Rheumatology and Immunology, University Hospital in Krakow, Krakow, Poland.
2 National Institute of Geriatrics, Rheumatology and Rehabilitation, Warsaw, Poland.
3 Department of Internal Medicine N2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.
Armen Yuri Gasparyan
4 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, UK.
Survey studies have become instrumental in contributing to the evidence accumulation in rapidly developing medical disciplines such as medical education, public health, and nursing. The global medical community has seen an upsurge of surveys covering the experience and perceptions of health specialists, patients, and public representatives in the peri-pandemic coronavirus disease 2019 period. Currently, surveys can play a central role in increasing research activities in non-mainstream science countries where limited research funding and other barriers hinder science growth. Planning surveys starts with overviewing related reviews and other publications which may help to design questionnaires with comprehensive coverage of all related points. The validity and reliability of questionnaires rely on input from experts and potential responders who may suggest pertinent revisions to prepare forms with attractive designs, easily understandable questions, and correctly ordered points that appeal to target respondents. Currently available numerous online platforms such as Google Forms and Survey Monkey enable moderating online surveys and collecting responses from a large number of responders. Online surveys benefit from disseminating questionnaires via social media and other online platforms which facilitate the survey internationalization and participation of large groups of responders. Survey reporting can be arranged in line with related recommendations and reporting standards all of which have their strengths and limitations. The current article overviews available recommendations and presents pointers on designing, conducting, and reporting surveys.
INTRODUCTION
Surveys are increasingly popular research studies that are aimed at collecting and analyzing opinions of diverse subject groups at certain periods. Initially and predominantly employed for applied social science research, 1 surveys have maintained their social dimension and transformed into indispensable tools for analyzing knowledge, perceptions, prevalence of clinical conditions, and practices in the medical sciences. 2 In rapidly developing disciplines with social dimensions such as medical education, public health, and nursing, online surveys have become essential for monitoring and auditing healthcare and education services 3 , 4 and generating new hypotheses and research questions. 5 In non-mainstream science countries with uninterrupted Internet access, online surveys have also been praised as useful studies for increasing research activities. 6
In 2016, the Medical Subject Headings (MeSH) vocabulary of the US National Library of Medicine introduced "surveys and questionnaires" as a structured keyword, defining survey studies as "collections of data obtained from voluntary subjects" ( https://www.ncbi.nlm.nih.gov/mesh/?term=surveys+and+questionnaires ). Such studies are instrumental in the absence of evidence from randomized controlled trials, systematic reviews, and cohort studies. Tagging survey reports with this MeSH term is advisable for increasing the retrieval of relevant documents while searching through Medline, Scopus, and other global databases.
Surveys are relatively easy to conduct by distributing web-based and non-web-based questionnaires to large groups of potential responders. The ease of conduct primarily depends on the way of approaching potential respondents. Face-to-face interviews, regular postmails, e-mails, phone calls, and social media posts can be employed to reach numerous potential respondents. Digitization and social media popularization have improved the distribution of questionnaires, expanded respondents' engagement, facilitated swift data processing, and globalization of survey studies. 7
SURVEY REPORTING GUIDANCE
Despite the ease of survey studies and their importance for maintaining research activities across academic disciplines, their methodological quality, reproducibility, and implications vary widely. The deficiencies in designing and reporting are the main reason for the inefficiency of some surveys. For instance, systematic analyses of survey methodologies in nephrology, transfusion medicine, and radiology have indicated that less than one-third of related reports provide valid and reliable data. 8 , 9 , 10 Additionally, no discussions of respondents' representativeness, reasons for nonresponse, and generalizability of the results have been pinpointed as drawbacks of some survey reports. The revealed deficiencies have justified the need for survey designing and data processing in line with reporting recommendations, including those listed on the EQUATOR Network website ( https://www.equator-network.org/ ).
Arguably, survey studies lack discipline-specific and globally-acceptable reporting guidance. The diversity of surveyed subjects and populations is perhaps the main confounder. Although most questionnaires contain socio-demographic questions, there are no reporting guidelines specifically tailored to comprehensively surveying specialists across different academic disciplines, patients, and public representatives.
The EQUATOR Network platform currently lists some widely promoted documents with statements on conducting and reporting web-based and non-web-based surveys ( Table 1 ). 11 , 12 , 13 , 14 The oldest published recommendation guides on postal, face-to-face, and telephone interviews. 1 One of its critical points highlights the need to formulate a clear and explicit question/objective to run a focused survey and to design questionnaires with respondent-friendly layout and content. 1 The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is the most-used document for reporting online surveys. 11 The CHERRIES checklist included points on ensuring the reliability of online surveys and avoiding manipulations with multiple entries by the same users. 11 A specific set of recommendations, listed by the EQUATOR Network, is available for specialists who plan web-based and non-web-based surveys of knowledge, attitude, and practice in clinical medicine. 12 These recommendations help design valid questionnaires, survey representative subjects with clinical knowledge, and complete transparent reporting of the obtained results. 12
References | Guideline titles and acronyms | Descriptions | Limitations | EQUATOR Network listing |
---|---|---|---|---|
Kelley et al., 2003 | Good practice in the conduct and reporting of survey research | The checklist and recommendations focus on designing questionnaires and ensuring the reliability of non-web-based surveys only. | The checklist and recommendations are not based on the Delphi method. | + |
Eysenbach, 2004 | Checklist for Reporting Results of Internet E-Surveys (CHERRIES) | The CHERRIES checklist focuses on web-based surveys. It ensures the reliability and representativeness of online responses and prevents duplicate/multiple entries by the same users. It is the top-cited e-survey checklist. | This checklist is not based on an expert panel consensus (Delphi method). It does not cover all parts of e-survey reports. | + |
Burns et al., 2008 | A guide for the design and conduct of self-administered surveys of clinicians | This guide includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitude, and practice. | The statements are based on a literature review, but not the Delphi method. | + |
Sharma et al., 2021 | Consensus-based Checklist for Reporting of Survey Studies (CROSS) | This is a checklist with 19 sections covering all parts of web- and non-web-based survey reports. It is based on the Delphi method, with 3 survey rounds conducted between January 2018 and December 2019 and 24 experts responding to the 1st round. | Although 24 experts with numerous related publications were initially enrolled, 6 of them were lost to follow-up. | + |
Gaur et al., 2020 | Reporting survey based studies - a primer for authors | These recommendations cover points on planning and reporting surveys in the COVID-19 pandemic. Various online platforms, including social media, for distributing questionnaires and conducting surveys are presented. | Although these recommendations are based on a comprehensive literature review, statements are not discussed with a panel of experts and lack Delphi consensus agreements. | - |
COVID-19 = coronavirus disease 2019.
From January 2018 to December 2019, three rounds of surveying experts with interest in surveys and questionnaires allowed reaching consensus on a set of points for reporting web-based and non-web-based surveys. 13 The Consensus-Based Checklist for Reporting of Survey Studies included a rating of 19 items of survey reports, from titles to acknowledgments. 13 Finally, rapid recommendations on online surveys amid the coronavirus disease 2019 (COVID-19) pandemic were published to guide the authors on how to choose social media and other online platforms for disseminating questionnaires and targeting representative groups of respondents. 14
Adhering to a combination of these recommendations is advisable to minimize the limitations of each document and increase the transparency of survey reports. For cross-sectional analyses of large sample sizes, additionally consulting the STROBE standard of the EQUATOR Network may further improve the accuracy of reporting respondents' inclusion and exclusion criteria. In fact, there are examples of online survey reports adhering to both CHERRIES and STROBE recommendations. 15 , 16
ETHICS CONSIDERATIONS
Although health research authorities in some countries lack mandates for full ethics review of survey studies, obtaining formal review protocols or ethics waivers is advisable for most surveys involving respondents from more than one country, and following country-based regulations and ethical norms of research is therefore mandatory. 14 , 17
Full ethics review or exemption procedures are important steps for planning and conducting ethically sound surveys. Given the non-interventional origin and absence of immediate health risks for participants, ethics committees may approve survey protocols without a full ethics review. 18 A full ethics review is however required when the informational and psychological harms of surveys increase the risk. 18 Informational harms may result from unauthorized access to respondents' personal data and stigmatization of respondents with leaked information about social diseases. Psychological harms may include anxiety, depression, and exacerbation of underlying psychiatric diseases.
Survey questionnaires submitted for evaluation should indicate how informed consent is obtained from respondents. 13 Additionally, information about confidentiality, anonymity, questionnaire delivery modes, compensations, and mechanisms preventing unauthorized access to questionnaires should be provided. 13 , 14 Ethical considerations and validation are especially important in studies involving vulnerable and marginalized subjects with diminished autonomy and poor social status due to dementia, substance abuse, inappropriate sexual behavior, and certain infections. 18 , 19 , 20 Precautions should be taken to avoid confidentiality breaches and bot activities when surveying via insecure online platforms. 21
Monetary compensation helps attract respondents to fill out lengthy questionnaires. However, such incentives may encourage surveyees whose primary interest is the compensation to game the system. 22 Ethics review protocols may include points on recording online responders' IP addresses and blocking duplicate submissions from the same Internet locations. 22 IP addresses are viewed as personal information in the EU, but not in the US. Notably, IP identification may deter some potential responders in the EU. 21
PATIENT KNOWLEDGE AND PERCEPTION SURVEYS
The design of patient knowledge and perception surveys is insufficiently defined and poorly explored. Although such surveys are aimed at consistently covering research questions on clinical presentation, prevention, and treatment, more emphasis is now placed on psychometric aspects of designing related questionnaires. 23 , 24 , 25 Targeting responsive patient groups to collect reliable answers is yet another challenge that can be addressed by distributing questionnaires to patients with good knowledge of their diseases, particularly those registering with university-affiliated clinics and representing patient associations. 26 , 27 , 28
The structure of questionnaires may differ for surveys of patient groups with various age-dependent health issues. Care should be taken when children are targeted since they often report a variety of modifiable conditions such as anxiety and depression, musculoskeletal problems, and pain, affecting their quality of life. 29 Likewise, gender and age differences should be considered in questionnaires addressing the quality of life in association with mental health and social status. 30 Questionnaires for older adults may benefit from including questions about social support and assistance in the context of caring for aging diseases. 31 Finally, addressing the needs of digital technologies and home-care applications may help to ensure the completeness of questionnaires for older adults with sedentary lifestyles and mobility disabilities. 32 , 33
SOCIAL MEDIA FOR QUESTIONNAIRE DISTRIBUTION
The widespread use of social media has made it easier to distribute questionnaires to a large number of potential responders. Employing popular platforms such as Twitter and Facebook has become particularly useful for conducting nationwide surveys on awareness and concerns about global health and pandemic issues. 34 , 35 When various social media platforms are simultaneously employed, participants' sociodemographic factors such as gender, age, and level of education may confound the study results. 36 Knowing targeted groups' preferred online networking and communication sites may better direct the questionnaire distribution. 37 , 38 , 39
Preliminary evidence suggests that distributing survey links via social-media accounts of individual users and organized e-groups with interest in specific health issues may increase their engagement and correctness of responses. 40 , 41
Since surveys employing social media are publicly accessible, related questionnaires should be professionally edited so that they are easy for target populations to answer, avoid sensitive and disturbing points, and ensure privacy and confidentiality. 42 , 43 Although counting e-post views is feasible, response rates of social-media-distributed questionnaires are practically impossible to record. The latter is an inherent limitation of such surveys.
SURVEY SAMPLING
Establishing connections with target populations and diversifying questionnaire dissemination may increase the rigor of current surveys which are abundantly administered. 44 Sample sizes depend on various factors, including the chosen topic, aim, and sampling strategy (random or non-random). 12 Some topics such as COVID-19 and global health may easily attract the attention of large respondent groups motivated to answer a variety of questionnaire questions. In the beginning of the pandemic, most surveys employed non-random (non-probability) sampling strategies which resulted in analyses of numerous responses without response rate calculations. These qualitative research studies were mainly aimed to analyze opinions of specialists and patients exposed to COVID-19 to develop rapid guidelines and initiate clinical trials.
Outside the pandemic, and beyond hot topics, there is a growing trend of low response rates and inadequate representation of target populations. 45 Such a trend makes it difficult to design and conduct random (probability) surveys. Subsequently, hypotheses of current online surveys often omit points on randomization and sample size calculation, ending up with qualitative analyses and pilot studies. In fact, convenience (non-random or non-probability) sampling can be particularly suitable for previously unexplored and emerging topics when overviewing literature cannot help estimate optimal samples and entirely new questionnaires should be designed and tested. The limitations of convenience sampling minimize the generalizability of the conclusions since the sample representativeness is uncertain. 45
Researchers often employ 'snowball' sampling techniques with initial surveyees forwarding the questionnaires to other interested respondents, thereby maximizing the sample size. Another common technique for obtaining more responses relies on generating regular social media reminders and resending e-mails to interested individuals and groups. Such tactics can increase the study duration but cannot exclude the participation bias and non-response.
Purposive or targeted sampling is perhaps the most precise technique when the target population size is known and respondents are ready to fill out the questionnaires correctly, allowing an exact estimate of the response rate, often close to 100%. 46
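As a minimal, hypothetical sketch of the bookkeeping behind these sampling choices, the snippet below draws a simple random (probability) sample from an invented sampling frame and computes a response rate for the invited group (all names and numbers are made up):

```python
import random

# Hypothetical sampling frame of 5,000 registered clinic patients.
population = [f"patient_{i}" for i in range(5000)]

# Probability sampling: every member has a known, equal chance of selection.
random_sample = random.sample(population, k=400)

# A response rate is only meaningful when the invited group is well defined.
invited, completed = len(random_sample), 312  # invented completion count
print(f"Response rate: {completed / invited:.0%}")  # 78%
```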
DESIGNING QUESTIONNAIRES
Correctness, confidentiality, privacy, and anonymity are critical points of inquiry in questionnaires. 47 Correctly worded and convincingly presented survey invitations with consenting options and reassurances of secure data processing may increase response rates and ensure the validity of responses. 47 Online surveys are believed to be more advantageous than offline inquiries for ensuring anonymity and privacy, particularly for targeting socially marginalized and stigmatized subjects. Online study design is indeed optimal for collecting more responses in surveys of sex- and gender-related and otherwise sensitive topics.
Performing comprehensive literature reviews, consultations with subject experts, and Delphi exercises may all help to specify survey objectives, identify questionnaire domains, and formulate pertinent questions. Literature searches are required for in-depth topic coverage and identification of previously published relevant surveys. By analyzing previous questionnaire characteristics, modifications can be made to designing new self-administered surveys. The justification of new studies should correctly acknowledge similar published reports to avoid redundancies.
The initial part of a questionnaire usually includes a short introduction/preamble/cover letter that specifies the objectives, target respondents, potential benefits and risks, and moderators' contact details for further inquiries. This part may motivate potential respondents to consent and answer questions. The specifics, volume, and format of other parts depend on revisions made in response to pretesting and pilot testing. 48 Pretesting usually involves co-authors, other contributors, and colleagues with an interest in the subject, while pilot testing usually involves 5-10 target respondents who are well familiar with the subject and can swiftly complete the questionnaires. The guidance obtained during pretesting and pilot testing allows editing, shortening, or expanding questionnaire sections. Although guidance on questionnaire length and question numbers is scarce, some experts empirically consider 5 domains with 5 questions each as optimal. 12 Lengthy questionnaires may be biased due to respondents' fatigue and inability to answer numerous and complicated questions. 46
Questionnaire revisions are aimed at ensuring the validity and consistency of questions, implying the appeal to relevant responders and accurate covering of all essential points. 45 Valid questionnaires enable reliable and reproducible survey studies that end up with the same responses to variably worded and located questions. 45
Various combinations of open-ended and close-ended questions are advisable to comprehensively cover all pertinent points and enable easy and quick completion of questionnaires. Open-ended questions are usually included in small numbers since they require more time to answer. 46 Also, the interpretation and analysis of responses to open-ended questions hardly contribute to generating robust qualitative data. 49 Close-ended questions with single and multiple-choice answers constitute the main part of a questionnaire, with single answers being easier to analyze and report. Questions with single answers can be presented as Likert-type items with 3 or more response options (e.g., yes/no/do not know).
Avoiding too simplistic (yes/no) questions and replacing them with Likert-scale items may increase the robustness of questionnaire analyses. 50 Additionally, constructing easily understandable questions, excluding merged items with two or more points, and moving sophisticated questions to the beginning of a questionnaire may add to the quality and feasibility of the study. 50
Survey studies are increasingly conducted by health professionals to swiftly explore opinions on a wide range of topics by diverse groups of specialists, patients, and public representatives. Arguably, quality surveys with generalizable results can be instrumental for guiding health practitioners in times of crises such as the COVID-19 pandemic when clinical trials, systematic reviews, and other evidence-based reports are scarcely available or absent. Online surveys can be particularly valuable for collecting and analyzing specialist, patient, and other subjects' responses in non-mainstream science countries where top evidence-based studies are scarce commodities and research funding is limited. Accumulated expertise in drafting quality questionnaires and conducting robust surveys is valuable for producing new data and generating new hypotheses and research questions.
The main advantages of surveys are related to the ease of conducting such studies with limited or no research funding. Digitization and social media advances have further contributed to the ease of surveying and to the growing global interest in surveys among health professionals. Some of the disadvantages of current surveys are related to imperfections of the digital platforms used for disseminating questionnaires and analyzing responses.
Although some survey reporting standards and recommendations are available, none of these comprehensively cover all items of questionnaires and steps in surveying. None of the survey reporting standards is based on summarizing guidance of a large number of contributors involved in related research projects. As such, presenting the current guidance with a list of items for survey reports ( Table 2 ) may help better design and publish related articles.
| No. | Items | Notes |
|---|---|---|
| 1 | Title | • Reflect on the survey subject, target respondents (e.g., patients, specialists, public representatives), obtained results, and study design (online, non-web-based, cross-sectional, longitudinal). |
| 2 | Abstract | • Provide a structured abstract with an introduction, aims, results, and conclusion. |
| 3 | Keywords | • Add the term "surveys and questionnaires" along with subject keywords to increase retrieval of the survey report. |
| 4 | Introduction | • Analyze available evidence, relevant reviews, and surveys to justify the need for the current study and questionnaire sections. |
| 5 | Aim | • Present specific and innovative aims. |
| 6 | Methods | • Highlight study design (e.g., web-based, non-web-based, cross-sectional, longitudinal). |
| | | • Specify the survey datelines and characterize time periods (data collection during a crisis [pandemic, wartime] or certain global movements, campaigns, or interventions). |
| | | • Describe the surveyed respondents’ characteristics. |
| | | • Characterize the questionnaire domains and the number of questions in each domain. |
| | | • Provide details of preserving confidentiality and anonymity. |
| | | • Describe pretesting and pilot testing (experts and respondents involved), the number of revision rounds, and the average time for filling out the questionnaire. |
| | | • Report content and face validity (quality, completeness, and feasibility of the questionnaire and its appeal to relevant respondents). |
| | | • Add details of an employed survey platform for web-based surveys (e.g., SurveyMonkey, Google Forms, etc.). |
| | | • Report modes of questionnaire distribution (e.g., via certain social media channels, emails, face-to-face interviews, and postal mail). |
| | | • Clarify when and how many times survey reminders were circulated. |
| 7 | Adherence to research reporting standards | • Refer to recommendations or their combinations consulted for reporting. |
| 8 | Ethics section | • Provide ethics committee approval/waiver date, protocol number, and name of the ethics committee. |
| | | • Refer to documents of national health research authorities that regulate the ethics review waiver/exemption. |
| | | • Justify the ethics review exemption in view of the survey's non-interventional origin and absence of informational and psychological risks/harms. |
| | | • Provide details of monetary or other incentives, written informed consents, confidentiality and anonymity, and mechanisms to avoid multiple entries by the same respondents. |
| 9 | Statistical analyses | • Report descriptive statistics, how categorical data were compared (chi-square or Fisher's exact tests; a minimal sketch follows this table), whether parametric and non-parametric tests and regression analyses were employed, level of significance, and statistical package used. |
| 10 | Results | • Report response rates in absolute numbers and percentages if the target population was established by methods other than convenience sampling. |
| | | • Reflect on missing data. |
| | | • Provide respondents' details to characterize their representativeness and exclude/minimize nonresponse influence. |
| | | • Insert eye-catching and color graphs and informative tables pointing to the most remarkable results, without recapitulating the same data in the text. |
| 11 | Discussion | • Clarify what is new. |
| | | • Analyze limitations by reflecting on low response rate, small sample size, non-response, missing data, a long timeline of collecting responses, language of the questionnaire other than English, and generalizability of the survey results. |
| 12 | Author contributions and acknowledgements | • Identify the authors who drafted the questionnaire and survey report. |
| | | • List non-author/technical contributions for questionnaire dissemination, promotion, and data collection. |
| 13 | Disclosure of interests | • Disclose potential conflicts which may affect the validity and reliability of the survey. |
| 14 | Funding | • Report funding sources, provision of software, and open-access funding, if available. |
| 15 | Open data sharing | • Add a note about the availability of data for post-publication analyses. |
| 16 | Appendix | • Submit an English version of the questionnaire. |
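To illustrate item 9 of the checklist, here is a minimal sketch of comparing categorical survey responses with a chi-square test and Fisher's exact test (the counts are invented, and the scipy library is assumed to be available):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = respondent group, columns = yes/no answer counts.
observed = [[45, 15],   # e.g. specialists: 45 "yes", 15 "no"
            [30, 30]]   # e.g. trainees:    30 "yes", 30 "no"

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square p-value: {p:.3f}")

# Fisher's exact test is preferred when expected cell counts are small.
odds_ratio, p_exact = fisher_exact(observed)
print(f"Fisher's exact p-value: {p_exact:.3f}")
```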
Disclosure: The authors have no potential conflicts of interest to disclose.
Author Contributions:
- Conceptualization: Zimba O.
- Formal analysis: Zimba O, Gasparyan AY.
- Writing - original draft: Zimba O.
- Writing - review & editing: Zimba O, Gasparyan AY.
Research Methods | Definitions, Types, Examples
Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.
First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:
- Qualitative vs. quantitative: Will your data take the form of words or numbers?
- Primary vs. secondary: Will you collect original data yourself, or will you use data that has already been collected by someone else?
- Descriptive vs. experimental: Will you take measurements of something as it is, or will you perform an experiment?
Second, decide how you will analyze the data.
- For quantitative data, you can use statistical analysis methods to test relationships between variables (a minimal sketch follows this list).
- For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.
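As a minimal illustration of testing a relationship between two quantitative variables, the sketch below computes a Pearson correlation (the paired values are invented, and scipy is assumed to be available):

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements: hours of study vs. exam score for 8 respondents.
hours = [2, 4, 5, 7, 8, 10, 11, 12]
score = [51, 58, 60, 65, 69, 74, 78, 83]

r, p = pearsonr(hours, score)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # strong positive, statistically significant correlation
```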
Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.
Qualitative vs. quantitative data
Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.
For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.
If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.
You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.
Primary vs. secondary research
Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).
If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.
Descriptive vs. experimental data
In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.
In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.
To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
Research method | Primary or secondary? | Qualitative or quantitative? | When to use
---|---|---|---
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.
Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.
Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
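As a small, hypothetical illustration of that dual analysis, the sketch below first codes free-text answers into themes (a qualitative step, with an invented coding scheme) and then counts their frequencies (a quantitative step):

```python
from collections import Counter

# Hypothetical free-text answers to "What did you like about the service?"
answers = ["fast delivery", "friendly staff", "quick shipping", "helpful staff", "fast delivery"]

# Qualitative step: code each answer into a broader theme (assignments are illustrative).
themes = {"fast delivery": "speed", "quick shipping": "speed",
          "friendly staff": "staff", "helpful staff": "staff"}
coded = [themes[a] for a in answers]

# Quantitative step: study the frequencies of the coded responses.
print(Counter(coded))  # Counter({'speed': 3, 'staff': 2})
```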
Qualitative analysis methods
Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:
- From open-ended surveys and interviews, literature reviews, case studies, ethnographies, and other sources that use text rather than numbers.
- Using non-probability sampling methods.
Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.
Quantitative analysis methods
Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).
You can use quantitative analysis to interpret data that was collected either:
- During an experiment.
- Using probability sampling methods.
Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
Research method | Qualitative or quantitative? | When to use
---|---|---
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
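For instance, drawing the 100-student sample mentioned above by simple random sampling could look like the following Python sketch (the roster of student IDs is invented for illustration):

```python
# Minimal sketch: simple random sampling from a population.
import random

# Hypothetical sampling frame: IDs for every student at the university
population = [f"student_{i}" for i in range(1, 2001)]

random.seed(42)                             # fixed seed so the draw can be reproduced
sample = random.sample(population, k=100)   # every student has an equal chance of selection

print(len(sample), sample[:5])
```

Other probability sampling designs (stratified, cluster, systematic) change how the draw is made, but the goal is the same: a sample whose characteristics can be generalized to the population.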
The research methods you use depend on the type of data you need to answer your research question.
- If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
- If you want to analyze a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
- If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.
Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.
In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
Inductive and Deductive Method: A Comparison
Parvathi Vijayamohan
Last Updated: 5 October 2024
Table Of Contents
- What is the inductive method?
- What is the deductive method?
- Inductive vs deductive method: Differences
- Inductive vs deductive method: Similarities
- Combining inductive and deductive methods
Have you ever wondered about the difference between the inductive and deductive method? These two methods of logical thinking are like chalk and cheese—yet they still have some things in common.
Let's dive in and explore them in detail!
What is the Inductive Method?
Inductive reasoning is all about making observations and drawing conclusions from them. It's like putting together a puzzle—you start with specific clues and use them to figure out the bigger picture.
For example: Let's say you notice that every time it rains, the grass gets wet. Based on this observation, you might conclude that rain causes the grass to get wet. This is inductive reasoning; you're making a generalization from specific instances.
The key strengths of the inductive method are:
- It helps develop new ideas and theories.
- It establishes probabilities. Although the conclusion may not be 100% certain, it's likely to be true.
- It's frequently used in everyday decision-making and scientific investigations.
However, inductive reasoning also has some limitations:
- The conclusion may be inaccurate, even if the logic seems sound.
- It can lead to overgeneralization if you don't have enough evidence.
- The conclusion is only probable, not definite.
Related Read: How is the inductive method used in surveys? Read What’s Inductive Research? A Simple Guide.
What is the Deductive Method?
Deductive reasoning works in the opposite way to inductive reasoning. Instead of starting with specific observations, you begin with general principles or premises that you assume to be true.
For example, let's say you know that all cars have engines, and you also know that your vehicle is a car. Using deductive reasoning, you can conclude that your vehicle must have an engine.
The main advantages of the deductive method are:
- If the premises are true, the conclusion must also be true.
- It's a reliable way to test hypotheses and theories.
But deductive reasoning also has some drawbacks:
- It's narrowly focused on conclusions that follow from the premises. This means you might miss patterns that more open-ended reasoning would reveal.
- It doesn't generate new knowledge; it only applies existing rules and theories.
- It may not always be practical for everyday decision-making.
Inductive vs Deductive Method: Differences
The inductive method starts with specific observations and builds up to broader generalizations. You’re basically spotting patterns or trends in the data. This leads you to conclusions that are likely to be true but may change if new information comes in.
This approach is great for things like exploratory research or problem-solving. But it’s not universally reliable since you're basing your conclusions on limited evidence.
Inductive Method Examples in Real Life
Ice cream and headaches:
You notice that every time you eat ice cream, you get a headache. Based on this observation, you might conclude that ice cream causes your headaches. This is an inductive reasoning process.
Deciding when to leave for the supermarket:
You’ve noticed that the supermarket is usually busiest on weekends, while it's much quieter early in the morning on weekdays. Based on these observations, you conclude that if you want to avoid long lines, it’s best to go shopping early in the morning on a weekday.
On the flip side, the deductive method starts with a general rule or theory and works its way down to a specific conclusion. It’s all about applying logic to known facts or principles. So if the premises are true, the conclusion will be too.
That makes it more reliable, especially in areas like math, logic, or philosophy, which have firm rules. You’re more likely to end up with a guaranteed answer—assuming the initial premises hold up.
Deductive Method Examples in Real Life
Dogs and fur:
You know that all mammals have fur, and you also know that dogs are mammals. Using deductive reasoning, you can conclude that dogs must have fur.
Deciding to lock the door based on security concerns:
You know the general rule that unlocked doors increase the risk of break-ins. So you conclude that locking your door each time you leave the house will improve your home’s security.
Inductive vs Deductive Method: Similarities
Despite their differences, inductive and deductive reasoning do share some common features:
- Both use premises and conclusions.
- Both aim to determine the truth.
- Both can help draw generalizations and support scientific reasoning.
So, the inductive method starts with specific observations and works towards a general conclusion. The deductive method starts with general premises and works towards a specific conclusion.
In the end, they're both forms of logical thinking that help us make sense of the world around us.
Combining the Inductive and Deductive Method
We often use a combination of inductive and deductive methods to solve problems and make decisions. For example:
- Start with a general theory or hypothesis (deductive reasoning).
- Make observations to test the theory (inductive reasoning).
- Use the results of the observations to refine or change the theory (deductive reasoning).
- Repeat the process until you have a well-supported conclusion (inductive and deductive reasoning).
From deciding what to wear to choosing where to eat, this back-and-forth plays a key role in how we navigate our lives every day.
Wrapping Up
The inductive and deductive methods are two different approaches to logical thinking. But they're both valuable tools for understanding the world around us.
By understanding the difference between these two methods, and how they can be used together, you'll be equipped to tackle complex problems and make informed decisions.
So next time you're faced with a challenging situation, take a moment to consider whether an inductive or deductive approach (or a combination of both) might be the best way to tackle it.
Content marketer at SurveySparrow.
Parvathi is a sociologist turned marketer. After 6 years as a copywriter, she pivoted to B2B, diving into growth marketing for SaaS. Now she uses content and conversion optimization to fuel growth - focusing on CX, reputation management and feedback methodology for businesses.