Chapter 8 Quantitative Methods
We turn now from the introduction, the purpose, and the questions and hypotheses to the method section of a
proposal. This chapter presents essential steps in designing quantitative methods for a research proposal or
study, with specific focus on survey and experimental designs. These designs reflect postpositivist
philosophical assumptions, as discussed in Chapter 1. For example, determinism
suggests that examining the relationships between and among variables is central to answering questions and
hypotheses through surveys and experiments. In one case, a researcher might be interested in evaluating
whether playing violent video games is associated with higher rates of playground aggression in kids, which
is a correlational hypothesis that could be evaluated in a survey design. In another case, a researcher might be
interested in evaluating whether violent video game playing causes aggressive behavior, which is a causal
hypothesis that is best evaluated by a true experiment. In each case, these quantitative approaches focus on
carefully measuring (or experimentally manipulating) a parsimonious set of variables to answer theory-guided research questions and hypotheses. In this chapter, the focus is on the essential components of a method section in proposals for a survey or experimental study.
8.1 Defining Surveys and Experiments
A survey design provides a quantitative description of trends, attitudes, and opinions of a population, or tests
for associations among variables of a population, by studying a sample of that population. Survey designs
help researchers answer three types of questions: (a) descriptive questions (e.g., What percentage of
practicing nurses support the provision of hospital abortion services?); (b) questions about the relationships
between variables (e.g., Is there a positive association between endorsement of hospital abortion services and
support for implementing hospice care among nurses?); or, in cases where a survey design is repeated over time in a longitudinal study, (c) questions about predictive relationships between variables over time (e.g.,
Does Time 1 endorsement of support for hospital abortion services predict greater Time 2 burnout in
nurses?).
An experimental design systematically manipulates one or more variables in order to evaluate how this
manipulation impacts an outcome (or outcomes) of interest. Importantly, an experiment isolates the effects of
this manipulation by holding all other variables constant. When one group receives a treatment and the other group does not (the treatment being the manipulated variable of interest), the experimenter can isolate whether it is the treatment, and not other factors, that influences the outcome. For example, a sample of nurses could be randomly
assigned to a 3-week expressive writing program (where they write about their deepest thoughts and feelings)
or a matched 3-week control writing program (writing about the facts of their daily morning routine) to
evaluate whether this expressive writing manipulation reduces job burnout in the months following the
program (i.e., the writing condition is the manipulated variable of interest, and job burnout is the outcome of
interest). Whether a quantitative study employs a survey or experimental design, both approaches share a
common goal of helping the researcher make inferences about relationships among variables, and how the
sample results may generalize to a broader population of interest (e.g., all nurses in the community).
8.2 Components of a Survey Study Method Plan
The design of a survey method plan follows a standard format. Numerous examples of this format appear in
scholarly journals, and these examples provide useful models. The following sections detail typical
components. In preparing to design these components into a proposal, consider the questions on the checklist
shown in Table 8.1 as a general guide.
The Survey Design
The first parts of the survey method plan section can introduce readers to the basic purpose and rationale for
survey research. Begin the section by describing the rationale for the design. Specifically:
Identify the purpose of survey research. The primary purpose is to answer a question (or questions)
about variables of interest to you. A sample purpose statement could read: "The primary purpose of this study is to empirically evaluate whether the number of overtime hours worked predicts subsequent burnout symptoms in a sample of emergency room nurses."
Indicate why a survey method is the preferred type of approach for this study. In this rationale, it can
be beneficial to acknowledge the advantages of survey designs, such as the economy of the design,
rapid turnaround in data collection, and constraints that preclude you from pursuing other designs
(e.g., "An experimental design was not adopted to look at the relationship between overtime hours worked and burnout symptoms because it would be prohibitively difficult, and potentially unethical, to randomly assign nurses to work different amounts of overtime hours.").
Indicate whether the survey will be cross-sectional, with the data collected at one point in time, or whether it will be longitudinal, with data collected over time.
Specify the form of data collection. Fowler (2014) identified the following types: mail, telephone,
the Internet, personal interviews, or group administration (see also Fink, 2016; Krueger & Casey,
2014). Using an Internet survey and administering it online has been discussed extensively in the
literature (Nesbary, 2000; Sue & Ritter, 2012). Regardless of the form of data collection, provide a
rationale for the procedure, using arguments based on its strengths and weaknesses, costs, data
availability, and convenience.
The Population and Sample
In the method section, follow the type of design with characteristics of the population and the sampling
procedure. Methodologists have written excellent discussions about the underlying logic of sampling theory
(e.g., Babbie, 2015; Fowler, 2014). Here are essential aspects of the population and sample to describe in a
research plan:
The population. Identify the population in the study. Also state the size of this population, if size can
be determined, and the means of identifying individuals in the population. Questions of access arise
here, and the researcher might refer to availability of sampling frames (mail or published lists) of potential respondents in the population.
Sampling design. Identify whether the sampling design for this population is single stage or
multistage (called clustering). Cluster sampling is ideal when it is impossible or impractical to
compile a list of the elements composing the population (Babbie, 2015). A single-stage sampling
procedure is one in which the researcher has access to names in the population and can sample the
people (or other elements) directly. In a multistage or clustering procedure, the researcher first
identifies clusters (groups or organizations), obtains names of individuals within those clusters, and
then samples within them.
Type of sampling. Identify and discuss the selection process for participants in your sample. Ideally
you aim to draw a random sample, in which each individual in the population has an equal
probability of being selected (a systematic or probabilistic sample). But in many cases it may be
quite difficult (or impossible) to get a random sample of participants. Alternatively, a systematic sample can have precision equivalent to random sampling (Fowler, 2014). In this approach, you choose a random start on a list and select every Xth person on the list, where X is determined by dividing the number of people on the list by the number to be selected (e.g., 1 out of every 80 people). Finally, less desirable, but often used, is a
nonprobability sample (or convenience sample), in which respondents are chosen based on their
convenience and availability.
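To illustrate the mechanics of systematic sampling, here is a minimal Python sketch (an illustration we have added, not part of the original text; the roster of names is hypothetical):

```python
import random

def systematic_sample(population, n):
    """Systematic sampling: random start, then every kth person on the list."""
    k = len(population) // n        # sampling interval (e.g., every 80th person)
    start = random.randrange(k)     # random start within the first interval
    return population[start::k][:n]

# Hypothetical example: select 100 people from a list of 8,000 (k = 80)
roster = [f"person_{i}" for i in range(8000)]
sample = systematic_sample(roster, 100)
```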
Stratification. Identify whether the study will involve stratification of the population before selecting
the sample. This requires that characteristics of the population members be known so that the
population can be stratified first before selecting the sample (Fowler, 2014). Stratification means that
specific characteristics of individuals (e.g., gender: females and males) are represented in the
sample and the sample reflects the true proportion in the population of individuals with certain
characteristics. When randomly selecting people from a population, these characteristics may or may
not be present in the sample in the same proportions as in the population; stratification ensures their
representation. Also identify the characteristics used in stratifying the population (e.g., gender,
income levels, education). Within each stratum, identify whether the sample contains individuals
with the characteristic in the same proportion as the characteristic appears in the entire population.
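As a sketch of proportionate stratified sampling, the following Python fragment (hypothetical data and helper name; not from the original text) samples within each stratum in proportion to its share of the population:

```python
import random
from collections import defaultdict

def stratified_sample(frame, n):
    """Proportionate stratified sampling from (name, stratum) pairs."""
    strata = defaultdict(list)
    for name, stratum in frame:
        strata[stratum].append(name)
    sample = []
    for members in strata.values():
        quota = round(n * len(members) / len(frame))  # proportionate allocation
        sample.extend(random.sample(members, quota))
    return sample

# Hypothetical frame: 600 women and 400 men; a sample of 100 yields 60 and 40
frame = [(f"p{i}", "female") for i in range(600)] + \
        [(f"p{i}", "male") for i in range(600, 1000)]
print(len(stratified_sample(frame, 100)))
```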
Sample size determination. Indicate the number of people in the sample and the procedures used to
compute this number. Sample size determination is at its core a tradeoff: A larger sample will
provide more accuracy in the inferences made, but recruiting more participants is time consuming
and costly. In survey research, investigators sometimes choose a sample size based on selecting a
fraction of the population (say, 10%) or selecting a sample size that is typical based on past studies.
These approaches are not optimal; instead sample size determination should be based on your
analysis plans (Fowler, 2014).
Power analysis. If your analysis plan consists of detecting a significant association between variables
of interest, a power analysis can help you estimate a target sample size. Many free online and
commercially available power analysis calculators are available (e.g., G*Power; Faul, Erdfelder,
Lang, & Buchner, 2007; Faul, Erdfelder, Buchner, & Lang 2009). The input values for a formal
power analysis will depend on the questions you aim to address in your survey design study (for a
helpful resource, see Kraemer & Blasey, 2016). As one example, if you aim to conduct a cross-sectional study measuring the correlation between the number of overtime hours worked and burnout
symptoms in a sample of emergency room nurses, you can estimate the sample size required to
determine whether your correlation significantly differs from zero (e.g., one possible hypothesis is
that there will be a significant positive association between number of hours worked and emotional
exhaustion burnout symptoms). This power analysis requires just three pieces of information:
1. An estimate of the size of correlation (r). A common approach for generating this estimate
is to find similar studies that have reported the size of the correlation between hours worked
and burnout symptoms. This simple task can often be difficult, either because there are no
published studies looking at this association or because suitable published studies do not
report a correlation coefficient. One tip: In cases where a published report measures
variables of interest to you, one option is to contact the study authors asking them to kindly
provide the correlation analysis result from their dataset, for your power analysis.
2. A two-tailed alpha value (α). This value is called the Type I error rate and refers to the risk we want to take in saying we have a real non-zero correlation when in fact this effect is not real (and determined by chance), that is, a false positive effect. A commonly accepted alpha value is .05, which refers to a 5% probability (5/100) that we are comfortable making a Type I error, such that 5% of the time we will say that there's a significant (non-zero) relationship between number of hours worked and burnout symptoms when in fact this effect occurred by chance and is not real.
3. A beta value (β). This value is called the Type II error rate and refers to the risk we want to take in saying we do not have a significant effect when in fact there is a significant association, that is, a false negative effect. Researchers commonly try to balance the risks of making Type I versus Type II errors, with a commonly accepted beta value being .20. Power analysis calculators will commonly ask for estimated power, which refers to 1 − beta (1 − .20 = .80).
You can then plug these numbers into a power analysis calculator to determine the sample size
needed. If you assume that the estimated association is r = .25, with a two-tailed alpha value of .05
and a beta value of .20, the power analysis calculation indicates that you need at least 123
participants in the study you aim to conduct.
To get some practice, try conducting this sample size determination power analysis. We used the
G*Power software program (Faul et al., 2007; Faul et al., 2009), with the following input
parameters:
Test family: Exact
Statistical test: Correlation: Bivariate normal model
Type of power analysis: A priori: Compute required sample size
Tails: Two
Correlation ρ H1: .25
α err prob: .05
Power (1 − β err prob): .8
Correlation ρ H0: 0
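If you do not have G*Power at hand, essentially the same sample size can be approximated in Python with the standard Fisher z formula. This is a sketch (the function name is ours), and the approximation lands within a participant or two of G*Power's exact result of 123:

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r (two-tailed), via Fisher z."""
    z_crit = norm.ppf(1 - alpha / 2)   # two-tailed critical value for alpha
    z_pow = norm.ppf(power)            # power = 1 - beta
    c = math.atanh(r)                  # Fisher z transform of the target r
    return math.ceil(((z_crit + z_pow) / c) ** 2 + 3)

print(n_for_correlation(0.25))  # ~124; G*Power's exact test reports 123
```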
This power analysis for sample size determination should be done during study planning prior to
enrolling any participants. Many scientific journals now require researchers to report a power
analysis for sample size determination in the Method section.
Instrumentation
As part of rigorous data collection, the proposal developer also provides detailed information about the actual
survey instruments to be used in the study. Consider the following:
Name the survey instruments used to collect data. Discuss whether you used an instrument designed
for this research, a modified instrument, or an instrument developed by someone else. For example,
if you aim to measure perceptions of stress over the last month, you could use the 10-item Perceived
Stress Scale (PSS) (Cohen, Kamarck, & Mermelstein, 1983) as your stress perceptions instrument in
your survey design. Many survey instruments, including the PSS, can be acquired and used for free
as long as you cite the original source of the instrument. But in some cases, researchers have made
the use of their instruments proprietary, requiring a fee for use. Instruments are increasingly being
delivered through a multitude of online survey products now available (e.g., Qualtrics, Survey
Monkey). Although these products can be costly, they also can be quite helpful for accelerating and
improving the survey research process. For example, researchers can create their own surveys
quickly using custom templates and post them on websites or e-mail them to participants to
complete. These software programs facilitate data collection into organized spreadsheets for data
analysis, reducing data entry errors and accelerating hypothesis testing.
Validity of scores using the instrument. To use an existing instrument, describe the established
validity of scores obtained from past use of the instrument. This means reporting efforts by authors
to establish validity in quantitative research: whether you can draw meaningful and useful
inferences from scores on the instruments. The three traditional forms of validity to look for are (a)
content validity (Do the items measure the content they were intended to measure?), (b) predictive or
concurrent validity (Do scores predict a criterion measure? Do results correlate with other results?),
and (c) construct validity (Do items measure hypothetical constructs or concepts?). In more recent
studies, construct validity has become the overriding objective in validity, and it has focused on
whether the scores serve a useful purpose and have positive consequences when they are used in
practice (Humbley & Zumbo, 1996). Establishing the validity of the scores in a survey helps
researchers to identify whether an instrument might be a good one to use in survey research. This
form of validity is different from identifying the threats to validity in experimental research, as
discussed later in this chapter.
Reliability of scores on the instrument. Also mention whether scores resulting from past use of the
instrument demonstrate acceptable reliability. Reliability in this context refers to the consistency or
repeatability of an instrument. The most important form of reliability for multi-item instruments is the instrument's internal consistency, which is the degree to which sets of items on an instrument behave in the same way. This is important because your instrument scale items should be assessing the same underlying construct, so these items should have suitable intercorrelations. A scale's internal consistency is quantified by a Cronbach's alpha (α) value that ranges between 0 and 1, with optimal values ranging between .7 and .9. For example, the 10-item PSS has excellent internal consistency across many published reports, with the original source publication reporting internal consistency values of α = .84 to .86 in three studies (Cohen, Kamarck, & Mermelstein, 1983). It can
also be helpful to evaluate a second form of instrument reliability, its test-retest reliability. This form
of reliability concerns whether the scale is reasonably stable over time with repeated administrations.
When you modify an instrument or combine instruments in a study, the original validity and
reliability may not hold for the new instrument, and it becomes important to establish validity and
reliability during data analysis.
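Internal consistency can be computed directly from raw item scores. The following Python sketch (our own illustrative helper, not part of the original text) implements the standard Cronbach's alpha formula for a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```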
Sample items. Include sample items from the instrument so that readers can see the actual items
used. In an appendix to the proposal, attach sample items or the entire instrument (or instruments)
used.
Content of instrument. Indicate the major content sections in the instrument, such as the cover letter
(Dillman, 2007, provides a useful list of items to include in cover letters), the items (e.g.,
demographics, attitudinal items, behavioral items, factual items), and the closing instructions. Also
mention the type of scales used to measure the items on the instrument, such as continuous scales
(e.g., strongly agree to strongly disagree) and categorical scales (e.g., yes/no, rank from highest to
lowest importance).
Pilot testing. Discuss plans for pilot testing or field-testing the survey and provide a rationale for
these plans. This testing is important to establish the content validity of scores on an instrument; to
provide an initial evaluation of the internal consistency of the items; and to improve questions,
format, and instructions. Pilot testing all study materials also provides an opportunity to assess how
long the study will take (and to identify potential concerns with participant fatigue). Indicate the
number of people who will test the instrument and the plans to incorporate their comments into final
instrument revisions.
Administering the survey. For a mailed survey, identify steps for administering the survey and for
following up to ensure a high response rate. Salant and Dillman (1994) suggested a four-phase
administration process (see Dillman, 2007, for a similar three-phase process). The first mail-out is a
short advance-notice letter to all members of the sample, and the second mail-out is the actual mail
survey, distributed about 1 week after the advance-notice letter. The third mail-out consists of a
postcard follow-up sent to all members of the sample 4 to 8 days after the initial questionnaire. The
fourth mail-out, sent to all nonrespondents, consists of a personalized cover letter with a handwritten
signature, the questionnaire, and a preaddressed return envelope with postage. Researchers send this
fourth mail-out 3 weeks after the second mail-out. Thus, in total, the researcher concludes the
administration period 4 weeks after its start, providing the returns meet project objectives.
Variables in the Study
Although readers of a proposal learn about the variables in purpose statements and research
questions/hypotheses sections, it is useful in the method section to relate the variables to the specific
questions or hypotheses on the instrument. One technique is to relate the variables, the research questions or
hypotheses, and sample items on the survey instrument so that a reader can easily determine how the data
collection connects to the variables and questions/hypotheses. Plan to include a table and a discussion that
cross-reference the variables, the questions or hypotheses, and specific survey items. This procedure is
especially helpful in dissertations in which investigators test large-scale models or multiple hypotheses. Table 8.2 illustrates such a table using hypothetical data.
Data Analysis
In the proposal, present information about the computer programs used and the steps involved in analyzing
the data. Websites contain detailed information about the various statistical analysis computer programs
available. Some of the more frequently used programs are the following:
IBM SPSS Statistics 24 for Windows and Mac (www.spss.com). The SPSS
Grad Pack is an affordable, professional analysis program for students based on the professional
version of the program, available from IBM.
JMP (www.jmp.com). This is a popular software program available from
SAS.
Minitab Statistical Software 17 (minitab.com). This is an interactive software statistical package available from Minitab Inc.
SYSTAT 13 (systatsoftware.com). This is a comprehensive interactive statistical package available
from Systat Software, Inc.
SAS/STAT (sas.com). This is a statistical program with tools as an integral component of the SAS
system of products available from SAS Institute, Inc.
Stata, release 14 (stata.com). This is a data analysis and statistics program available from StataCorp.
Online programs useful in simulating statistical concepts for statistical instruction can also be used, such as
the Rice Virtual Lab in Statistics found at http://onlinestatbook.com/rvls.html, or SAS Simulation Studio for JMP (www.jmp.com), which harnesses the power of simulation to model and analyze critical operational systems in such areas as health care, manufacturing, and transportation. The graphical user interface in SAS Simulation Studio for JMP requires no programming and provides a full set of tools for building, executing, and analyzing results of simulation models (Creswell & Guetterman, in press).
We recommend the following research tip: present data analysis plans as a series of steps so that a reader can see how one step leads to another:
Step 1. Report information about the number of participants in the sample who did and did not return the
survey. A table with numbers and percentages describing respondents and nonrespondents is a useful tool to
present this information.
Step 2. Discuss the method by which response bias will be determined. Response bias is the effect of
nonresponses on survey estimates (Fowler, 2014). Bias means that if nonrespondents had responded, their
responses would have substantially changed the overall results. Mention the procedures used to check for
response bias, such as wave analysis or a respondent/nonrespondent analysis. In wave analysis, the researcher
examines returns on select items week by week to determine if average responses change (Leslie, 1972).
Based on the assumption that those who return surveys in the final weeks of the response period are nearly all
nonrespondents, if the responses begin to change, a potential exists for response bias. An alternative check
for response bias is to contact a few nonrespondents by phone and determine if their responses differ
substantially from respondents. This constitutes a respondent-nonrespondent check for response bias.
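A wave analysis is straightforward to automate if the dataset records when each survey was returned. This Python sketch assumes a hypothetical CSV file with return_week and burnout_score columns (both names are ours, for illustration):

```python
import pandas as pd

df = pd.read_csv("survey_returns.csv")  # hypothetical file and column names
# Mean response on a key item for each week the survey was returned;
# drifting means in the final weeks suggest potential response bias.
print(df.groupby("return_week")["burnout_score"].mean())
```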
Step 3. Discuss a plan to provide a descriptive analysis of data for all independent and dependent variables
in the study. This analysis should indicate the means, standard deviations, and range of scores for these
variables. Identify whether there is missing data (e.g., some participants may not provide responses to some
items or whole scales), and develop plans to report how much missing data is present and whether a strategy
will be implemented to replace missing data (for a review, see Schafer & Graham, 2002).
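A minimal descriptive-analysis sketch, again assuming the hypothetical dataset and column names introduced above:

```python
import pandas as pd

df = pd.read_csv("survey_returns.csv")      # hypothetical file and column names
variables = ["overtime_hours", "burnout_score"]
print(df[variables].describe())             # means, standard deviations, ranges
print(df[variables].isna().mean() * 100)    # percent missing data per variable
```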
Step 4. If the proposal contains an instrument with multi-item scales or a plan to develop scales, first evaluate whether it will be necessary to reverse-score items, and then how total scale scores will be calculated. Also mention reliability checks for the internal consistency of the scales (i.e., the Cronbach alpha statistic).
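For example, on a 1-to-5 Likert scale a reverse-keyed item is rescored as 6 minus the raw score. A sketch with hypothetical item names (the cronbach_alpha helper sketched earlier can then verify internal consistency):

```python
import pandas as pd

df = pd.read_csv("survey_returns.csv")       # hypothetical file and item names
scale_items = ["item1", "item2", "item3", "item4"]
reverse_keyed = ["item2", "item4"]           # items worded in the opposite direction
df[reverse_keyed] = 6 - df[reverse_keyed]    # reverse-score on a 1-5 scale
df["scale_total"] = df[scale_items].sum(axis=1)
```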
Step 5. Identify the statistics and the statistical computer program for testing the major inferential research
questions or hypotheses in the proposed study. The inferential questions or hypotheses relate variables or
compare groups in terms of variables so that inferences can be drawn from the sample to a population.
Provide a rationale for the choice of statistical test and mention the assumptions associated with the statistic.
As shown in Table 8.3, base this choice on the nature of the research question (e.g., relating variables or
comparing groups as the most popular), the number of independent and dependent variables, and the
variables used as covariates (e.g., see Rudestam & Newton, 2014). Further, consider whether the variables
will be measured on an instrument as a continuous score (e.g., age from 18 to 36) or as a categorical score
(e.g., women = 1, men = 2). Finally, consider whether the scores from the sample might be normally
distributed in a bell-shaped curve if plotted out on a graph or non-normally distributed. There are additional
ways to determine if the scores are normally distributed (see Creswell, 2012). These factors, in combination,
enable a researcher to determine what statistical test will be suited for answering the research question or
hypothesis. In Table 8.3, we show how the factors, in combination, lead to the selection of a number of
common statistical tests. For additional types of statistical tests, readers are referred to statistics methods
books, such as Gravetter and Wallnau (2012).
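As one concrete instance of Step 5, a correlational hypothesis such as the overtime-and-burnout example could be tested as follows (hypothetical file and column names; scipy's pearsonr returns both the correlation and its p value):

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_returns.csv")      # hypothetical file and column names
complete = df.dropna(subset=["overtime_hours", "burnout_score"])
r, p = pearsonr(complete["overtime_hours"], complete["burnout_score"])
print(f"r = {r:.2f}, p = {p:.3f}")
```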
Step 6. A final step in the data analysis is to present the results in tables or figures and interpret the results
from the statistical test, discussed in the next section.
Interpreting Results and Writing a Discussion Section
An interpretation in quantitative research means that the researcher draws conclusions from the results for
the research questions, hypotheses, and the larger meaning of the results. This interpretation involves several
steps:
Report how the results addressed the research question or hypothesis. The Publication Manual of the
American Psychological Association (American Psychological Association [APA], 2010) suggests that the most complete meaning of the results comes from reporting extensive description, statistical
significance testing, confidence intervals, and effect sizes. Thus, it is important to clarify the
meaning of these last three reports of the results. The statistical significance testing reports an
assessment as to whether the observed scores reflect a pattern other than chance. A statistical test is
considered to be significant if the results are unlikely by chance to have occurred, and the null
hypothesis of no effect can be rejected. The researcher sets a rejection level of "no effect," such as p = 0.001, and then assesses whether the test statistic falls into this level of rejection. Typically, results will be summarized as "the analysis of variance revealed a statistically significant difference between men and women in terms of attitudes toward banning smoking in restaurants, F(2, 6) = 8.55, p = 0.001."
Two forms of practical evidence of the results should also be reported: (a) the effect size and (b) the
confidence interval. A confidence interval is a range of values (an interval) that describes a level of
uncertainty around an estimated observed score. A confidence interval shows how good an estimated
score might be. A confidence interval of 95%, for example, indicates that 95 out of 100 times the
observed score will fall in the range of values. An effect size identifies the strength of the
conclusions about group differences or the relationships among variables in quantitative studies. It is
a descriptive statistic that is not dependent on whether the relationship in the data represents the true
population. The calculation of effect size varies for different statistical tests: it can be used to explain
the variance between two or more variables or the differences among means for groups. It shows the
practical significance of the results apart from inferences being applied to the population.
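For a correlation, both reports are simple to produce: r itself serves as the effect size, and a confidence interval can be derived through the Fisher z transform. A sketch (our own illustrative helper, using the example values from earlier in the chapter):

```python
import math
from scipy.stats import norm

def correlation_ci(r, n, conf=0.95):
    """Confidence interval for a correlation via the Fisher z transform."""
    z = math.atanh(r)                      # transform r to Fisher z
    se = 1 / math.sqrt(n - 3)              # standard error of z
    z_crit = norm.ppf(1 - (1 - conf) / 2)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# With r = .25 and n = 123, the 95% CI is roughly (.08, .41)
print(correlation_ci(0.25, 123))
```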
The final step is to draft a discussion section where you discuss the implications of the results in
terms of how they are consistent with, refute, or extend previous related studies in the scientific
literature. How do your research findings address gaps in our knowledge base on the topic? It is also
important to acknowledge the implications of the findings for practice and for future research in the
area. It may also involve discussing theoretical and practical consequences of the results. It is also
helpful to briefly acknowledge potential limitations of the study, and potential alternative
explanations for the study findings.
Example 8.1 is a survey method plan section that illustrates many of the steps just mentioned.
This excerpt (used with permission) comes from a journal article reporting a study of factors affecting student
attrition in one small liberal arts college (Bean & Creswell, 1980, pp. 321–322).
Example 8.1 A Survey Method Plan
Methodology
The site of this study was a small (enrollment 1,000), religious, coeducational, liberal arts
college in a Midwestern city with a population of 175,000 people. [Authors identified the
research site and population.]
The dropout rate the previous year was 25%. Dropout rates tend to be highest among
freshmen and sophomores, so an attempt was made to reach as many freshmen and
sophomores as possible by distribution of the questionnaire through classes. Research on
attrition indicates that males and females drop out of college for different reasons (Bean,
1978, in press; Spady, 1971). Therefore, only women were analyzed in this study.
During April 1979, 169 women returned questionnaires. A homogeneous sample of 135
women who were 25 years old or younger, unmarried, full-time U.S. citizens, and Caucasian
was selected for this analysis to exclude some possible confounding variables (Kerlinger,
1973).
Of these women, 71 were freshmen, 55 were sophomores, and 9 were juniors. Of the students,
95% were between the ages of 18 and 21. This sample is biased toward higher-ability students
as indicated by scores on the ACT test. [Authors presented descriptive information about the
sample.]
Data were collected by means of a questionnaire containing 116 items. The majority of these
were Likert-like items based on a scale from "a very small extent" to "a very great extent."
Other questions asked for factual information, such as ACT scores, high school grades, and
parents' educational level. All information used in this analysis was derived from
questionnaire data. This questionnaire had been developed and tested at three other
institutions before its use at this college. [Authors discussed the instrument.]
Concurrent and convergent validity (Campbell & Fiske, 1959) of these measures was
established through factor analysis, and was found to be at an adequate level. Reliability of
the factors was established through the coefficient alpha. The constructs were represented by
25 measures (multiple items combined on the basis of factor analysis to make indices) and 27 measures were single-item indicators. [Validity and reliability were addressed.]
Multiple regression and path analysis (Heise, 1969; Kerlinger & Pedhazur, 1973) were used
to analyze the data. In the causal model . . . , intent to leave was regressed on all variables
which preceded it in the causal sequence. Intervening variables significantly related to intent
to leave were then regressed on organizational variables, personal variables, environmental
variables, and background variables. [Data analysis steps were presented.]
8.3 Components of an Experimental Study Method Plan
An experimental method plan follows a standard form: (a) participants and design, (b) procedure, and (c)
measures. These three sequential sections generally are sufficient (often in studies with a few measures, the
procedure and measures sections are combined into a single procedure section). In this section of the chapter,
we review these components as well as information regarding key features of experimental design and
corresponding statistical analyses. As with the section on survey design, the intent here is to highlight key
topics to be addressed in an experimental method plan. An overall guide to these topics is found by
answering the questions on the checklist shown in Table 8.4.
Participants
Readers need to know about the selection, assignment, and number of participants who will take part in the
experiment. Consider the following suggestions when writing the method section plan for an experiment:
Describe the procedures for recruiting participants to be in the study, and any selection processes
used. Often investigators aim to recruit a study sample that shares certain characteristics by formally
stating specific inclusion and exclusion study criteria when designing their study (e.g., inclusion criterion: participants must be English speaking; exclusion criterion: participants must not be children under the age of 18). Recruitment approaches are wide-ranging and can include random
digit dialing of households in a community, posting study recruitment flyers or e-mails to targeted
communities, or newspaper advertisements. Describe the recruitment approaches that will be used
and the study compensation provided for participating.
One of the principal features distinguishing an experiment from a survey study design is the use of
random assignment. Random assignment is a technique for placing participants into study conditions
of a manipulated variable of interest. When individuals are randomly assigned to groups, the
procedure is called a true experiment. If random assignment is used, discuss how and when the
study will randomly assign individuals to treatment groups, which in experimental studies are
referred to as levels of an independent variable. This means that of the pool of participants,
Individual 1 goes to Group 1, Individual 2 to Group 2, and so forth so that there is no systematic bias
in assigning the individuals. This procedure eliminates the possibility of systematic differences
among characteristics of the participants that could affect the outcomes so that any differences in
outcomes can be attributed to the study's manipulated variable (or variables) of interest (Keppel &
Wickens, 2003). Often experimental studies may be interested in both randomly assigning
participants to levels of a manipulated variable of interest (e.g., a new treatment approach for
teaching fractions to children versus the traditional approach) while also measuring a second
predictor variable of interest that cannot utilize random assignment (e.g., measuring whether the
treatment benefits are larger among female compared to male children; it is impossible to randomly
assign children to be male or female). Designs in which a researcher has only partial (or no) control over randomly assigning participants to levels of a manipulated variable of interest are called quasi-experiments.
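In practice, random assignment can be as simple as shuffling the participant pool and splitting it, as in this Python sketch (hypothetical participant IDs and group sizes, for illustration only):

```python
import random

participants = [f"nurse_{i:03d}" for i in range(1, 129)]  # hypothetical IDs
random.shuffle(participants)               # removes any systematic ordering
treatment = participants[:64]              # e.g., expressive writing condition
control = participants[64:]                # e.g., control writing condition
```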
Conduct and report a power analysis for sample size determination (for a helpful resource, see
Kraemer & Blasey, 2016). The procedures for a sample size power analysis mimic those for a survey
design, although the focus shifts to estimating the number of participants needed in each condition of
the experiment to detect significant group differences. In this case, the input parameters shift to
include an estimate of the effect size referencing the estimated differences between the groups of
your manipulated variable(s) of interest and the number of groups in your experiment. Readers are
encouraged to review the power analysis section earlier in the survey design portion of this chapter
and then consider the following example:
Previously we introduced a cross-sectional survey design assessing the relationship between
number of overtime hours worked and burnout symptoms among nurses. We might decide
to conduct an experiment to test a related question: Do nurses working full time have higher
burnout symptoms compared to nurses working part time? In this case, we might conduct an
experiment in which nurses are randomly assigned to work either full time (group 1) or part
time (group 2) for 2 months, at which time we could measure burnout symptoms. We could
conduct a power analysis to evaluate the sample size needed to detect a significant
difference in burnout symptoms between these two groups. Previous literature might
indicate an effect size difference between these two groups at d = .5, and as with our survey
study design, we can assume a two-tailed alpha = .05 and beta = .20. We ran the calculation
again using the G*Power software program (Faul et al., 2007; Faul et al., 2009) to estimate
the sample size needed to detect a significant difference between groups:
Test family: t tests
Statistical test: Means: difference between two independent means (two groups)
Type of power analysis: A priori: Compute required sample size
Tails: Two
Effect size d: .5
α err prob: .05
Power (1 − β err prob): .8
Allocation ratio N2/N1: 1
With these input parameters, the power analysis indicates a total sample size of 128
participants (64 in each group) is needed in order to detect a significant difference between
groups in burnout symptoms.
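The same calculation can be reproduced in Python with the statsmodels power module; this sketch mirrors the G*Power inputs above and returns the per-group sample size:

```python
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,            # estimated d between the two groups
    alpha=0.05,                 # two-tailed Type I error rate
    power=0.80,                 # 1 - beta
    ratio=1.0,                  # equal allocation (N2/N1 = 1)
    alternative="two-sided",
)
print(math.ceil(n_per_group))   # 64 per group, 128 in total
```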
At the end of the participants section, it is helpful to provide a formal experimental design statement
that specifies the independent variables and their corresponding levels. For example, a formal design
statement might read, "The experiment consisted of a one-way two-groups design comparing burnout symptoms between full-time and part-time nurses."
Variables
The variables need to be specified in the formal design statement and described (in detail) in the procedure
section of the experimental method plan. Here are some suggestions for developing ideas about variables in a
proposal:
Clearly identify the independent variables in the experiment (recall the discussion of variables in
Chapter 3) and how they will be manipulated in the study. One common approach is to conduct a 2 × 2 between-subjects factorial design in which two independent variables
are manipulated in a single experiment. If this is the case, it is important to clarify how and when
each independent variable is manipulated.
Include a manipulation check measure that evaluates whether your study successfully manipulated
the independent variable(s) of interest. A manipulation check measure is defined as a measure of
the intended manipulated variable of interest. For example, if a study aims to manipulate self-esteem
by offering positive test feedback (high self-esteem condition) or negative test feedback (low self-esteem condition) using a performance task, it would be helpful to quantitatively evaluate whether
there are indeed self-esteem differences between these two conditions with a manipulation check
measure. After this self-esteem study manipulation, a researcher may include a brief measure of state
self-esteem as a manipulation check measure prior to administering the primary outcome measures
of interest.
Identify the dependent variable or variables (i.e., the outcomes) in the experiment. The dependent
variable is the response or the criterion variable presumed to be caused by or influenced by the
independent treatment conditions. One consideration in the experimental method plan is whether
there are multiple ways to measure outcome(s) of interest. For example, if the primary outcome is
aggression, it may be possible to collect multiple measures of aggression in your experiment (e.g., a
behavioral measure of aggression in response to a provocation, self-reported perceptions of
aggression).
Identify other variables to be measured in the study. Three categories of variables are worth
mentioning. First, include measures of participant demographic characteristics (e.g., age, gender,
ethnicity). Second, measure variables that may contribute noise to the study design. For example,
self-esteem levels may fluctuate during the day (and relate to the study outcome variables of interest)
and so it may be beneficial to measure and record time of day in the study (and then use it as a
covariate in study statistical analyses). Third, measure variables that may be potential confounding
variables. For example, a critic of the self-esteem manipulation may say that the positive/negative
performance feedback study manipulation also unintentionally manipulated rumination, and that this rumination better explains the study results on the outcomes of interest. By
measuring rumination as a potential confounding variable of interest, the researcher can
quantitatively evaluate this critics claim.
Instrumentation and Materials
Just like in a survey method plan, a sound experimental study plan calls for a thorough discussion about the
instruments used: their development, their items, their scales, and reports of reliability and validity of scores
on past uses. However, an experimental study plan also describes in detail the approach for manipulating the
independent variables of interest:
Thoroughly discuss the materials used for the manipulated variable(s) of interest. One group, for
example, may participate in a special computer-assisted learning plan used by a teacher in a
classroom. This plan might involve handouts, lessons, and special written instructions to help
students in this experimental group learn how to study a subject using computers. A pilot test of
these materials may also be discussed, as well as any training required to administer the materials in
a standardized way.
Often the researcher does not want participants to know what variables are being manipulated or the
condition they have been assigned to (and sometimes what the primary outcome measures of interest
are). It is important, then, to draft a cover story that will be used to explain the study and procedures
to participants during the experiment. If any deception is used in the study, it is important to draft a
suitable debriefing approach and to get all procedures and materials approved by your institution's IRB (see Chapter 4).
Experimental Procedures
The specific experimental design procedures also need to be identified. This discussion involves indicating
the overall experiment type, citing reasons for the design, and advancing a visual model to help the reader
understand the procedures.
Identify the type of experimental design to be used in the proposed study. The types available in
experiments are pre-experimental designs, quasi-experiments, and true experiments. With pre-experimental designs, the researcher studies a single group and implements an intervention during
the experiment. This design does not have a control group to compare with the experimental group.
In quasi-experiments, the investigator uses control and experimental groups, but the design may
have partial or total lack of random assignment to groups. In a true experiment, the investigator
randomly assigns the participants to treatment groups. A single-subject design or N of 1 design
involves observing the behavior of a single individual (or a small number of individuals) over time.
Identify what is being compared in the experiment. In many experiments, those of a type called
between-subject designs, the investigator compares two or more groups (Keppel & Wickens, 2003;
Rosenthal & Rosnow, 1991). For example, a factorial design experiment, a variation on the between-group design, involves using two or more treatment variables to examine the independent and
simultaneous effects of these treatment variables on an outcome (Vogt & Johnson, 2015). This
widely used experimental design explores the effects of each treatment separately and also the
effects of variables used in combination, thereby providing a rich and revealing multidimensional
view. In other experiments, the researcher studies only one group in what is called a within-group
design. For example, in a repeated measures design, participants are assigned to different treatments
at different times during the experiment. Another example of a within-group design would be a study
of the behavior of a single individual over time in which the experimenter provides and withholds a
treatment at different times in the experiment to determine its impact. Finally, studies that include
both a between-subjects and a within-subjects variable are called mixed designs.
Provide a diagram or a figure to illustrate the specific research design to be used. A standard notation
system needs to be used in this figure. As a research tip, we recommend using the classic notation
system provided by Campbell and Stanley (1963, p. 6):
X represents an exposure of a group to an experimental variable or event, the effects of
which are to be measured.
O represents an observation or measurement recorded on an instrument.
Xs and Os in a given row are applied to the same specific persons. Xs and Os in the same
column, or placed vertically relative to each other, are simultaneous.
The left-to-right dimension indicates the temporal order of procedures in the experiment (sometimes indicated with an arrow).
The symbol R indicates random assignment.
Separation of parallel rows by a horizontal line indicates that comparison groups are not
equal (or equated) by random assignment. No horizontal line between the groups displays
random assignment of individuals to treatment groups.
In Examples 8.2 through 8.5, this notation is used to illustrate pre-experimental, quasi-experimental, true experimental, and single-subject designs.
Example 8.2 Pre-experimental Designs
One-Shot Case Study
This design involves an exposure of a group to a treatment followed by a measure.
Group A X_____________________O
One-Group Pretest-Posttest Design
This design includes a pretest measure followed by a treatment and a posttest for a single group.
Group A O1____________X____________O2
Static Group Comparison or Posttest-Only With Nonequivalent
Groups
Experimenters use this design after implementing a treatment. After the treatment, the researcher selects
a comparison group and provides a posttest to both the experimental group(s) and the comparison
group(s).
Group A X______________________O
Group B _______________________O
Alternative Treatment Posttest-Only With Nonequivalent Groups
Design
This design uses the same procedure as the Static Group Comparison, with the exception that the
nonequivalent comparison group received a different treatment.
Group A X1_____________________O
Group B X2_____________________O
Example 8.3 Quasi-experimental Designs
Nonequivalent (Pretest and Posttest)
In this design, a popular approach to quasi-experiments, the experimental Group A and the control
Group B are selected without random assignment. Both groups take a pretest and posttest. Only the
experimental group receives the treatment.
Group A O____________X____________O
_________________________________
Group B O_________________________O
Single-Group Interrupted Time-Series Design
In this design, the researcher records measures for a single group both before and after a treatment.
Group A O-O-O-O-X-O-O-O-O
Control-Group Interrupted Time-Series Design
This design is a modification of the Single-Group Interrupted Time-Series design in which two groups
of participants, not randomly assigned, are observed over time. A treatment is administered to only one
of the groups (i.e., Group A).
Group A O-O-O-O-X-O-O-O-O
__________________________________
Group B O-O-O-O-O-O-O-O-O
Example 8.4 True Experimental Designs
PretestPosttest Control-Group Design
A traditional, classical design, this procedure involves random assignment of participants to two groups.
Both groups are administered both a pretest and a posttest, but the treatment is provided only to
experimental Group A.
Group A R______O______X______O
Group B R______O_____________O
Posttest-Only Control-Group Design
This design controls for any confounding effects of a pretest and is a popular experimental design. The
participants are randomly assigned to groups, a treatment is given only to the experimental group, and
both groups are measured on the posttest.
Group A R______X______O
Group B R_____________O
Solomon Four-Group Design
A special case of a 2 2 factorial design, this procedure involves the random assignment of participants
to four groups. Pretests and treatments are varied for the four groups. All groups receive a posttest.
Group A R______O______X______O
Group B R______O_____________O
Group C R_____________X______O
Group D R____________________O
Example 8.5 Single-Subject Designs
A-B-A Single-Subject Design
This design involves multiple observations of a single individual. The target behavior of a single
individual is established over time and is referred to as a baseline behavior. The baseline behavior is
assessed, the treatment provided, and then the treatment is withdrawn.
Baseline A Treatment B Baseline A
O-O-O-O-O-X-X-X-X-X-O-O-O-O-O-O
Threats to Validity
There are several threats to validity that will raise questions about an experimenter's ability to conclude that the manipulated variable(s) of interest, and not some other factor, affect an outcome. Experimental researchers need to identify potential threats to the internal validity of their experiments and design the experiments so that these threats are unlikely to arise or are minimized. There are two types of threats to validity: (a) internal threats
and (b) external threats.
Internal validity threats are experimental procedures, treatments, or experiences of the participants
that threaten the researcher's ability to draw correct inferences from the data about the population in an experiment. Table 8.5 displays these threats, provides a description of each one of them, and
suggests potential responses by the researcher so that the threat may not occur. There are those
involving participants (i.e., history, maturation, regression, selection, and mortality), those related to
the use of an experimental treatment that the researcher manipulates (i.e., diffusion, compensatory
and resentful demoralization, and compensatory rivalry), and those involving procedures used in the
experiment (i.e., testing and instruments).
Source: Adapted from Creswell (2012).
Potential threats to external validity also must be identified and designs created to minimize these
threats. External validity threats arise when experimenters draw incorrect inferences from the
sample data to other persons, other settings, and past or future situations. As shown in Table 8.6,
these threats arise because of the characteristics of individuals selected for the sample, the
uniqueness of the setting, and the timing of the experiment. For example, threats to external validity
arise when the researcher generalizes beyond the groups in the experiment to other racial or social
groups not under study, to settings not examined, or to past or future situations. Steps for addressing
these potential issues are also presented in Table 8.6.
Other threats that might be mentioned in the method section are the threats to statistical conclusion
validity that arise when experimenters draw inaccurate inferences from the data because of
inadequate statistical power or the violation of statistical assumptions. Threats to construct validity
occur when investigators use inadequate definitions and measures of variables.
Practical research tips for proposal writers to address validity issues are as follows:
Identify the potential threats to validity that may arise in your study. A separate section of the proposal
may be devoted to discussing these threats.
Define the exact type of threat and what potential issue it presents to your study.
Discuss how you plan to address the threat in the design of your experiment.
Cite references to books that discuss the issue of threats to validity, such as Cook and Campbell
(1979); Shadish, Cook, and Campbell (2001); and Tuckman (1999).
The Procedure
A researcher needs to describe in detail the sequential step-by-step procedure for conducting the experiment.
A reader should be able to clearly understand the cover story, the design being used, the manipulated
variable(s) and outcome variable(s), and the timeline of activities. It is also important to describe steps taken
to minimize noise and bias in the experimental procedures (e.g., "To reduce the risk of experimenter bias, the
experimenter was blind to the participant's study condition until all outcome measures were assessed.").
Discuss a step-by-step approach for the procedure in the experiment. For example, Borg and Gall
(2006) outlined steps typically used in the procedure for a pretest-posttest control-group design with
matched participants in the experimental and control groups (a code sketch of the matching and
assignment steps follows the list):
1. Administer measures of the dependent variable or a variable closely correlated with the
dependent variable to the research participants.
2. Assign participants to matched pairs on the basis of their scores on the measures described
in Step 1.
3. Randomly assign one member of each pair to the experimental group and the other member
to the control group.
4. Expose the experimental group to the experimental treatment and administer no treatment or
an alternative treatment to the control group.
5. Administer measures of the dependent variables to the experimental and control groups.
6. Compare the performance of the experimental and control groups on the posttest(s) using
tests of statistical significance.
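To make Steps 2 and 3 concrete, here is a minimal sketch in Python of matching participants on pretest scores and then randomly assigning within each pair. The participant records, field names, and scores are hypothetical, not part of Borg and Gall's procedure.

```python
import random

def assign_matched_pairs(participants, seed=None):
    """Rank participants by pretest score, form adjacent matched pairs,
    and randomly assign one member of each pair to each group."""
    rng = random.Random(seed)
    ranked = sorted(participants, key=lambda p: p["pretest"])
    experimental, control = [], []
    # Walk the ranked list two at a time: adjacent scores form a matched
    # pair. With an odd count, the last participant is left unassigned.
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # a coin flip decides who receives the treatment
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

# Illustrative use with invented pretest scores:
sample = [{"id": i, "pretest": s} for i, s in enumerate([12, 7, 15, 9, 11, 14])]
exp_group, ctrl_group = assign_matched_pairs(sample, seed=42)
```

Seeding the generator makes the assignment reproducible, which is useful for an audit trail in the procedure section.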
Data Analysis
Tell the reader about the types of statistical analyses that will be implemented on the dataset.
Report the descriptive statistics. Some descriptive statistics that are commonly reported include
frequencies (e.g., how many male and female participants were in the study?) and means and standard
deviations (e.g., what is the mean age of the sample? what are the group means and corresponding
standard deviation values for the primary outcome measures?).
Indicate the inferential statistical tests used to examine the hypotheses in the study. For experimental
designs with categorical information (groups) on the independent variable and continuous
information on the dependent variable, researchers use t tests, univariate analysis of variance
(ANOVA), analysis of covariance (ANCOVA), or multivariate analysis of variance (MANOVA, for
multiple dependent measures). (Several of these tests are mentioned in Table 8.3,
which was presented earlier.) In factorial designs where more than one independent variable is
manipulated, you can test for main effects (of each independent variable) and interactions between
independent variables. Also, indicate the practical significance by reporting effect sizes and
confidence intervals.
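As one way to make this reporting concrete, the following is a minimal sketch, assuming SciPy is available, of an independent-samples t test with a Cohen's d effect size for a two-group posttest comparison; the scores are invented for illustration.

```python
from statistics import mean, stdev

from scipy import stats  # assumes SciPy is installed

# Invented posttest scores for two randomly assigned groups.
experimental = [24, 27, 31, 22, 29, 30, 26, 28]
control = [21, 25, 23, 20, 26, 22, 24, 19]

# Independent-samples t test for the group difference.
t_stat, p_value = stats.ttest_ind(experimental, control)

# Cohen's d with a pooled standard deviation, as an effect-size estimate.
n1, n2 = len(experimental), len(control)
pooled_sd = (((n1 - 1) * stdev(experimental) ** 2
              + (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (mean(experimental) - mean(control)) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```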
For single-subject research designs, use line graphs of the baseline and treatment observations, with
the abscissa (horizontal axis) showing units of time and the ordinate (vertical axis) showing the
target behavior. Researchers plot each data point separately on the graph and connect the data points
with lines (e.g., see Neuman & McCormick, 1995). Occasionally, tests of statistical significance, such
as t tests, are used to compare the pooled mean of the baseline and the treatment phases, although
such procedures may violate the assumption of independent measures (Borg & Gall, 2006).
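A minimal sketch of such a line graph, assuming matplotlib is available, using invented A-B-A observations; each phase is plotted separately so that no line is drawn across a phase change, consistent with the plotting conventions Neuman and McCormick (1995) describe.

```python
import matplotlib.pyplot as plt  # assumes matplotlib is installed

# Invented A-B-A observations: baseline, treatment, return to baseline.
baseline_1 = [8, 9, 7, 8, 9]     # sessions 1-5  (O O O O O)
treatment = [5, 4, 4, 3, 3]      # sessions 6-10 (X X X X X)
baseline_2 = [6, 7, 8, 8, 7, 8]  # sessions 11-16 (O O O O O O)

fig, ax = plt.subplots()
# Plot each phase separately so no line crosses a phase boundary.
ax.plot(range(1, 6), baseline_1, "ko-")
ax.plot(range(6, 11), treatment, "ko-")
ax.plot(range(11, 17), baseline_2, "ko-")
# Dashed vertical lines mark the two phase changes.
ax.axvline(5.5, linestyle="--", color="gray")
ax.axvline(10.5, linestyle="--", color="gray")
ax.set_xlabel("Observation session (time)")  # abscissa
ax.set_ylabel("Target behavior")             # ordinate
ax.set_title("A-B-A single-subject design")
plt.show()
```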
Interpreting Results and Writing a Discussion Section
The final step in an experiment is to interpret the findings in light of the hypotheses or research questions and
to draft a discussion section. In this interpretation, address whether the hypotheses or questions were
supported or whether they were refuted. Consider whether the independent variable manipulation was
effective (a manipulation check measure can be helpful in this regard). Suggest why the results were
significant, or why they were not, linking the new evidence with past literature (Chapter 2), the theory
used in the study (Chapter 3), or persuasive logic that might explain the results. Address whether the
results might have been influenced by
unique strengths of the approach, or weaknesses (e.g., threats to internal validity), and indicate how the
results might be generalized to certain people, settings, and times. Finally, indicate the implications of the
results, including implications for future research on the topic.
Example 8.6 is a description of an experimental method plan adapted from a value affirmation
stress study published by Creswell and colleagues (Creswell et al., 2005).
Example 8.6 An Experimental Method Plan
This study tested the hypothesis that thinking about one's important personal values in a self-affirmation
activity could buffer subsequent stress responses to a laboratory stress challenge task. The specific study
hypothesis was that the self-affirmation group, relative to the control group, would have lower salivary
cortisol stress hormone responses to a stressful performance task. Here we highlight a plan for
organizing the methodological approach for conducting this study. For a full description of the study
methods and findings, see the published paper (Creswell et al., 2005).
Method
Participants
A convenience sample of eighty-five undergraduates will be recruited from a large public
university on the West Coast and compensated with course credit or $30. This sample size is
justified based on a power analysis conducted prior to data collection with the software
program G*Power (Faul et al., 2007; Faul et al., 2009), based on [specific input parameters
described here for the power analysis]. Participants will be eligible to participate if they meet
the following study criteria [list study inclusion and exclusion criteria here]. All study
procedures have been approved by the University of California, Los Angeles Institutional
Review Board, and participants will provide written informed consent prior to participating in
study-related activities.
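The plan names G*Power for the power analysis. As an alternative sketch, the same kind of a priori computation can be done with the statsmodels library; the effect size, alpha, and power values below are illustrative assumptions, not the input parameters from the published study.

```python
from statsmodels.stats.power import TTestIndPower  # assumes statsmodels is installed

# A priori power analysis for a two-group comparison. All input values
# here are assumptions chosen for illustration only.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.62,  # assumed Cohen's d
                                   alpha=0.05,        # Type I error rate
                                   power=0.80)        # desired power
print(f"Required sample size per group: {n_per_group:.1f}")
```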
The study is a 2 × 4 mixed design, with value affirmation condition as a two-level between-subjects
variable (condition: value affirmation or control) and time as a four-level within-subjects
variable (time: baseline, 20 minutes post-stress, 30 minutes post-stress, and 45
minutes post-stress). The primary outcome measure is the stress hormone cortisol, as
measured by saliva samples.
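The plan does not specify the analysis software. One way to test the effects in this 2 × 4 mixed design is a mixed-design ANOVA; below is a sketch using the pingouin library with invented long-format data. The participant IDs, cortisol values, and four-person sample are hypothetical, and a real analysis would include all participants.

```python
import pandas as pd
import pingouin as pg  # assumes pandas and pingouin are installed

# Invented long-format data: one row per participant per time point.
times = ["baseline", "20min", "30min", "45min"]
df = pd.DataFrame({
    "id": [1] * 4 + [2] * 4 + [3] * 4 + [4] * 4,
    "condition": ["affirmation"] * 8 + ["control"] * 8,
    "time": times * 4,
    "cortisol": [0.31, 0.40, 0.38, 0.33,   # participant 1 (affirmation)
                 0.28, 0.37, 0.35, 0.30,   # participant 2 (affirmation)
                 0.30, 0.55, 0.50, 0.42,   # participant 3 (control)
                 0.33, 0.58, 0.52, 0.45],  # participant 4 (control)
})

# 2 (condition, between-subjects) x 4 (time, within-subjects) mixed ANOVA:
# main effects of condition and time, plus the condition-by-time
# interaction that the affirmation hypothesis predicts.
results = pg.mixed_anova(data=df, dv="cortisol", within="time",
                         subject="id", between="condition")
print(results.round(3))
```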
Procedure
To control for the circadian rhythm of cortisol, all laboratory sessions will be scheduled
between the hours of 2:30 pm and 7:30 pm. Participants will be run through the laboratory
procedures one at a time. The cover story consists of telling participants that the study is
interested in studying physiological responses to laboratory performance tasks.
Upon arrival, all participants will complete an initial values questionnaire in which they will rank-order
five personal values. After a 10-minute acclimation period, participants will provide a
baseline saliva sample for the assessment of salivary cortisol levels. Participants will then
receive instructions on the study tasks and then will be randomly assigned by the
experimenter (using a random number generator) to either a value affirmation or control
condition, where they will be asked to [description of the value affirmation independent
variable manipulation here, along with the subsequent manipulation check measure]. All
participants will then complete the laboratory stress challenge task [description of the stress
challenge task procedures for producing a stress response here]. After the stress task,
participants will complete multiple post-stress task questionnaire measures [describe them
here], and then provide saliva samples at 20, 30, and 45 minutes post-stress task onset. After
providing the last saliva sample, participants will be debriefed, compensated, and dismissed.
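For the assignment step, the "random number generator" mentioned above can be as simple as a seeded pseudorandom draw. This hypothetical helper, not part of the published plan, shows one way to do it in Python.

```python
import random

# Hypothetical helper for condition assignment; the seed is shown only
# to make the assignment sequence reproducible for an audit trail.
rng = random.Random(20050101)

def assign_condition():
    """Return 'value affirmation' or 'control' with equal probability."""
    return "value affirmation" if rng.random() < 0.5 else "control"

condition = assign_condition()  # called once per arriving participant
```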
Summary
This chapter identified essential components for organizing a methodological approach and plan for
conducting either a survey or an experimental study. The outline of steps for a survey study began with
a discussion about the purpose, the identification of the population and sample, the survey instruments
to be used, the relationship between the variables, the research questions, specific items on the survey,
and steps to be taken in the analysis and the interpretation of the data from the survey. In the design of
an experiment, the researcher identifies participants in the study, the variables (the manipulated
variable(s) of interest and the outcome variables), and the instruments used. The design also includes
the specific type of experiment, such as a pre-experimental, quasi-experimental, true experiment, or
single-subject design. Then the researcher draws a figure to illustrate the design, using appropriate
notation. This is followed by comments about potential threats to internal and external validity (and
possibly statistical and construct validity) that relate to the experiment, the statistical analyses used to
test the hypotheses or research questions, and the interpretation of the results.
Writing Exercises
1. Design a plan for the procedures to be used in a survey study. Review the checklist in Table
8.1 after you write the section to determine if all components have been addressed.
2. Design a plan for procedures for an experimental study. Refer to Table 8.4 after you complete
your plan to determine if all questions have been addressed adequately.
Additional Readings
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. In
N. L. Gage (Ed.), Handbook of research on teaching (pp. 1–76). Chicago: Rand McNally.
This chapter in the Gage Handbook is the classical statement about experimental designs. Campbell and
Stanley designed a notation system for experiments that is still used today; they also advanced the types
of experimental designs, beginning with factors that jeopardize internal and external validity, the pre-experimental design types, true experiments, quasi-experimental designs, and correlational and ex post
facto designs. The chapter presents an excellent summary of types of designs, their threats to validity,
and statistical procedures to test the designs. This is an essential chapter for students beginning their
study of experimental studies.
Fowler, F. J. (2014). Survey research methods (5th ed.). Thousand Oaks, CA: Sage.
Floyd Fowler provides a useful text about the decisions that go into the design of a survey research
project. He addresses the use of alternative sampling procedures, ways of reducing nonresponse rates, data
collection, the design of good questions, sound interviewing techniques, the preparation of surveys
for analysis, and ethical issues in survey designs.
Keppel, G., & Wickens, T. D. (2003). Design and analysis: A researcher's handbook (4th ed.).
Englewood Cliffs, NJ: Prentice Hall.
Geoffrey Keppel and Thomas Wickens provide a detailed, thorough treatment of the design of
experiments from the principles of design to the statistical analysis of experimental data. Overall, this
book is for the mid-level to advanced statistics student who seeks to understand the design and
statistical analysis of experiments. The introductory chapter presents an informative overview of the
components of experimental designs.
Kraemer, H. C., & Blasey, C. (2016). How many subjects? Statistical power analysis in research.
Thousand Oaks, CA: Sage.
This book provides guidance on how to conduct power analyses for estimating sample size. This serves
as an excellent resource for both basic and more complex estimation procedures.
Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park,
CA: Sage.
Mark Lipsey has authored a major book on the topics of experimental designs and statistical power of
those designs. Its basic premise is that an experiment needs to have sufficient sensitivity to detect those
effects it purports to investigate. The book explores statistical power and includes a table to help
researchers identify the appropriate size of groups in an experiment.
Neuman, S. B., & McCormick, S. (Eds.). (1995). Single-subject experimental research: Applications for
literacy. Newark, DE: International Reading Association.
Susan Neuman and Sandra McCormick have edited a useful, practical guide to the design of single-subject research. They present many examples of different types of designs, such as reversal designs
and multiple-baseline designs, and they enumerate the statistical procedures that might be involved in
analyzing the single-subject data. One chapter, for example, illustrates the conventions for displaying
data on line graphs. Although this book cites many applications in literacy, it has broad application in
the social and human sciences.
Thompson, B. (2006). Foundations of behavioral statistics: An insight-based approach. New York: The
Guilford Press.
Bruce Thompson has organized a highly readable book about using statistics. He reviews the basics
about descriptive statistics (location, dispersion, shape), about relationships among variables and
statistical significance, about the practical significance of results, and about more advanced statistics
such as regression, ANOVA, the general linear model, and logistic regression. Throughout the book, he
brings in practical examples to illustrate his points.
https://edge.sagepub.com/creswellrd5e
Students and instructors, please visit the companion website for videos featuring John W. Creswell, full-text SAGE journal articles, quizzes and activities, plus additional tools for research design.
Chapter 9 Qualitative Methods
Qualitative methods demonstrate a different approach to scholarly inquiry than methods of quantitative
research. Although the processes are similar, qualitative methods rely on text and image data, have unique
steps in data analysis, and draw on diverse designs. Writing a method section for a proposal or study for
qualitative research partly requires educating readers as to the intent of qualitative research, mentioning
specific designs, carefully reflecting on the role the researcher plays in the study, drawing from an
ever-expanding list of types of data sources, using specific protocols for recording data, analyzing the
information through multiple steps of analysis, and mentioning approaches for documenting the
methodological integrity, or accuracy (or validity), of the data collected. This chapter addresses these
important components of writing a good qualitative method section into a proposal or study. Table 9.1
presents a checklist for reviewing the qualitative methods section of your project to determine whether
you have addressed important topics.
The qualitative method section of a proposal requires attention to topics similar to those in a quantitative (or
mixed methods) project. These involve telling the reader about the design being used in the study and, in this
case, the use of qualitative research and its basic intent. It also involves discussing the sample for the study
and the overall data collection and recording procedures. It further expands on the data analysis steps and the
methods used for presenting the data, interpreting it, validating it, and indicating the potential outcomes of
the study. In contrast to other designs, the qualitative approach includes comments by the researcher about
their role and their self-reflection (or reflexivity, as it is called), and the specific type of qualitative strategy
being used. Further, because the writing structure of a qualitative project may vary considerably from study
to study, the method section should also include comments about the nature of the final written product.