CITATION: Cook, C. R., Low, S., Buntain-Ricklefs, J., Whitaker, K., Pullmann, M. D., & Lally, J. (2018, May 24). Evaluation of Second Step on Early Elementary Students' Academic Outcomes: A Randomized Controlled Trial. School Psychology Quarterly. Advance online publication. http://dx.doi.org/10.1037/spq0000233
Evaluation of Second Step on Early Elementary Students' Academic Outcomes: A Randomized Controlled Trial
Clayton R. Cook University of Minnesota
Sabina Low Arizona State University
Joanne Buntain-Ricklefs, Kelly Whitaker, Michael D. Pullmann, and Jaclyn Lally University of Washington
Research has consistently linked social emotional learning to important educational and life outcomes. Early elementary represents an opportune developmental period to proactively support children to acquire social emotional skills that enable academic success. Using data from a large-scale randomized controlled trial, the purpose of this study was to investigate the impact of the 4th edition of Second Step on early elementary students' academic-related outcomes. Participants were kindergarten to 2nd grade students in 61 schools (310 teachers; 7,419 students) across six school districts in Washington State and Arizona. Multilevel models (Time × Condition) indicated the program had no main effect on academic outcomes. However, moderator analyses revealed that quality of implementation (specifically, a measure of student engagement) and dosage were associated with significant, albeit small, effects on reading and classroom behavior outcomes. Findings from this study suggest that Second Step, when implemented with high engagement and higher dosage, can have a small but potentially meaningful collateral impact on early academic-related outcomes.
Impact and Implications This study examined the collateral impact of a widely used social emotional learning (SEL) program (Second Step) on early elementary children's academic outcomes. Findings emphasized the importance of specific dimensions of fidelity that may be associated with outcomes, as well as the need for additional research focused on developing a better understanding of the degree to which SEL programming impacts children's early academic performance.
Keywords: social emotional learning, social skills, fidelity of implementation, Second Step
In the school context, social, emotional, and behavioral problems present significant, immediate challenges to teaching and learning. Although recent federal and state mandates have prompted increased accountability of academic instruction and student outcomes (e.g., Common Core Standards, teacher evaluation), many school environments continue to be disrupted by student emotional and behavioral difficulties (e.g., bullying, aggression, social withdrawal, defiance; Walker, Ramsey, & Gresham, 2004). Indeed, educators continually
rank emotional and behavioral problems among their top classroom concerns (Buscemi, Bennett, Thomas, & Deluca, 1996; Bushaw & Lopez, 2010; Langdon & Vesper, 2000). Students who exhibit these problems not only miss out on valuable instructional time themselves, but their behaviors can also interfere with teachers' delivery of instructional content and inhibit classmates' learning (Hinshaw, 1992; Walker et al., 2004).
Although schools are primarily charged with providing instruction to facilitate the academic achievement of students, there is growing recognition among those involved in education that student social and emotional well-being is instrumental to academic success (Schonert-Reichl, 2017). Owing to this, there is consensus among many researchers, policymakers, and practitioners that social emotional learning (SEL) programs should be adopted and integrated with academic practices to promote school success (Brackett & Rivers, 2014). Advocates of SEL have pushed for greater balance between academic learning and social emotional education to develop self-sufficient individuals who are adequately prepared for work and life (National Research Council, 2012). Consistent with this push, thousands of schools nationwide are adopting and implementing SEL programs to promote both academic and social emotional outcomes of children (Collaborative for Academic and Social Emotional Learning, 2014).
Editor's Note. The editor for this manuscript was Shane R. Jimerson.
Clayton R. Cook, Department of Educational Psychology, University of Minnesota; Sabina Low, T. Denny Sanford School of Social and Family Dynamics, Arizona State University; Joanne Buntain-Ricklefs, Kelly Whitaker, and Michael D. Pullmann, Psychiatry and Behavioral Health Department, University of Washington; Jaclyn Lally, College of Education, University of Washington.
Correspondence concerning this article should be addressed to Clayton R. Cook, Department of Educational Psychology, University of Minnesota, Twin Cities, MN. E-mail: [email protected]
In general, SEL is a curricular approach to school-based universal prevention that consists of teaching students core social emotional competencies related to identifying and regulating their emotions, setting and working to achieve positive goals, demonstrating empathy and understanding the perspectives of others, cultivating and sustaining positive relationships, making socially responsible decisions, and handling interpersonal conflicts constructively (Zins, Bloodworth, Weissberg, & Walberg, 2007). A recent meta-analysis of 213 studies examining the impact of SEL indicated that SEL curricula are not only associated with significant improvements in students' social emotional skills but also with an average 11-percentile-point increase in academic achievement (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011). Furthermore, research has shown that early social behavior strongly predicts academic achievement up to five years later, even after controlling for early academic achievement (Caprara, Barbaranelli, Pastorelli, Bandura, & Zimbardo, 2000; Malecki & Elliot, 2002). Although evidence supports the adoption and use of an SEL curriculum, not all programs are equally effective, and each curriculum must stand on its own empirical support.
Second Step Elementary
Given the recognized benefits of teaching students social emotional skills, several SEL programs have been developed and adopted in the school setting. One of the most widely implemented SEL programs for elementary students is Second Step, developed by Committee for Children ([CfC] 2016), a nonprofit organization based in Seattle. Second Step is a skills-focused SEL curriculum that emphasizes directly teaching students skills that strengthen their ability to learn, demonstrate empathy and compassion for others, manage their negative emotions, and solve interpersonal problems. The Second Step logic model (see Figure 1) suggests that students who are provided direct SEL instruction will acquire social emotional skills, have opportunities to practice those skills, and receive reinforcement for exhibiting those skills. The Second Step theory of change also suggests that students are likely to experience a range of improved intermediate outcomes,
which would result in a cascade of positive distal outcomes (CfC, 2016).
Previous experimental studies have found mixed support for earlier versions of the Second Step program producing positive child outcomes, consistent with other smaller or less rigorous studies that have also yielded mixed findings. For example, Grossman et al. (1997) conducted a randomized controlled trial and found that physical aggression decreased among students in the Second Step classrooms compared with students in the control classrooms, and these positive findings were maintained at a 6-month follow-up. Other studies have demonstrated that students receiving Second Step lessons showed increased social skills at posttest when compared with children in control classrooms (Holsen, Iversen, & Smith, 2009; Holsen, Smith, & Frey, 2008). Despite these positive findings, a recent school randomized trial (n = 12 schools) by Nebbergall (2009; 3rd edition of Second Step) found no positive or negative effects of Second Step on school achievement or positive behaviors. In that study, the control schools were found, on average, to be implementing a fairly high level of SEL programming, albeit less formalized than in the intervention schools, making it challenging to differentiate programming between intervention and control schools. These results nonetheless add to the mixed evidence regarding the efficacy of Second Step.
The most recent investigation of Second Step involved a large-scale randomized controlled trial (61 schools) examining the impact of the new 4th edition of the program on social-behavioral outcomes over a 1-year period (Low, Cook, Smolkowski, & Buntain-Ricklefs, 2015). Hierarchical models revealed the program had positive main effects on teacher-reported social and behavioral indices, with effect sizes in the small range. The majority of significant findings were moderated effects, with eight out of 11 outcome variables indicating the intervention produced significant improvements in social emotional competence and behavior for children who started the school year with skill deficits relative to their peers.
SEL and Early Elementary Academic Outcomes
Despite the empirical support for Second Step and SEL programming more broadly, less is known about the impact of
Figure 1. Logic model for the Second Step program.
SEL programming on children's academic outcomes during specific developmental periods, particularly the early elementary school years (Rhoades, Warren, Domitrovich, & Greenberg, 2011). SEL programming for young children in the early elementary grades is particularly important considering the research showing that students who get off to a good start academically are significantly more likely to be successful in the later grades and beyond (Snow, Burns, & Griffin, 1999). SEL provides the basis for teaching young children self-regulatory skills that serve as enablers of lifelong learning (Heckman & Masterov, 2004). Moreover, the prevention of social and behavioral difficulties is more effective than later efforts to remediate more intensive problems (Greenberg et al., 2003). Early elementary represents an opportune developmental period to proactively support children to begin developing the social emotional competence to prevent social and behavioral problems that interfere with learning, as well as to enable them to profit from their early learning experiences.
Further, most SEL research supports the primary effects of SEL programs (i.e., positive impact on social and emotional skills and decreased problem behaviors). Although research has supported the impact of SEL programming in general on students' academic outcomes (Durlak et al., 2011), no research has established a relationship between Second Step and academic achievement. Notwithstanding the prior research linking SEL programming to improved outcomes, there remain gaps in the literature, and further validation of theory is needed. First, few large-scale studies have examined whether SEL programming results in academic outcomes for early elementary students. Second, little research on this developmental period has explicitly examined aspects of fidelity of implementation as they relate to SEL effects on academic outcomes. Third, few research studies examining SEL programs have utilized multilevel models that take into account school-, classroom-, and individual-level effects.
Purpose of the Current Study
Using data from the rigorous, large-scale randomized controlled trial of the fourth edition of Second Step discussed earlier, the primary purpose of this study was to evaluate the impact of Second Step on early elementary students' academic outcomes. A secondary aim was to examine classroom-level moderators of treatment effectiveness in order to better understand the conditions under which programs like Second Step may produce effects on academic outcomes (Flay et al., 2005). Specifically, we examined the influence of different aspects of fidelity of implementation, which have been shown to influence program outcomes (Proctor & Brownson, 2012), on academic outcomes. Student outcome data included direct observations of students' on-task and disruptive behaviors in the classroom and curriculum-based measurement probes of academic performance (i.e., reading and mathematics).
Related to the secondary aim, numerous studies have underscored the importance of assessing fidelity of implementation during efficacy and effectiveness studies, based on findings indicating that dimensions of fidelity (adherence, dosage, and competency/engagement) account for differential outcomes (Durlak & DuPre, 2008). Different dimensions of fidelity were incorporated into this investigation, with adherence (i.e., implementing core components as planned) and dosage (i.e., number of lessons) capturing traditional dimensions of fidelity and engagement (i.e.,
students' level of engagement) reflecting the quality or competency with which Second Step was delivered (Perepletchikova & Kazdin, 2005). Although it is well understood that fidelity affects the outcomes obtained in prevention programs (Durlak & DuPre, 2008), significantly less is known about the factors that influence quality implementation of school-based programs delivered by teachers (Owens et al., 2014). Thus, this study examined different dimensions of fidelity (adherence, engagement, dosage) and their relationship to changes in student academic-related outcomes.
Hypotheses. The specific hypotheses that guided this study were informed by prior literature on SEL programs regarding their collateral impact on short-term academic outcomes (Durlak et al., 2011). Drawing upon Figure 1, it was hypothesized that early elementary students who participated in Second Step would demonstrate relatively small improvements in (a) reading and math and (b) classroom behavior (on-task behavior and disruptive behavior). This hypothesis was based on prior research demonstrating smaller effects of universal prevention programs (Neil & Christensen, 2009), particularly studies examining impact on more distal outcomes.
We hypothesized that fidelity of implementation would moderate the impact of Second Step on academic outcomes. Specifically, we hypothesized that stronger fidelity in delivering Second Step, as measured by a composite of different dimensions of fidelity (e.g., dosage, adherence, engagement), would be related to better academic-related outcomes. Moreover, we postulated that we would find a specific moderated effect for lesson engagement as a proxy of teacher competency in delivering Second Step, considering the literature linking practitioner competency to fidelity of implementation (Perepletchikova & Kazdin, 2005; Sanetti & Kratochwill, 2009).
Method
Participants
This study included students in kindergarten through second grade enrolled in five school districts across the Puget Sound area of Washington and in one district in Mesa, Arizona. School districts ranged from rural to urban settings and were recruited in spring 2012 after approval from the institutional review boards (IRBs). School districts, teachers, students, and parents of the students consented to participate in accordance with IRB procedures.
Recruitment and retention. The Washington site was able to secure and maintain the participation of 41 schools across five school systems. On the basis of power analyses to secure the participation of a sufficient number of classrooms and students, an average of six randomly selected classrooms per school participated in data collection, though all classrooms in the intervention schools were provided the intervention. A total of 224 teachers agreed to participate, and passive parental permission was obtained for 4,891 students; only 1.4% of parents declined. The Arizona site was able to secure and maintain participation from 20 schools from the Mesa School District. An average of five classrooms per school (minimum = 3; maximum = 7) participated in data collection, with a total of 97 teachers. Passive parental permission was obtained for 2,879 students; only 1% of parents declined. Across both the Washington and Arizona sites, a total of five schools (8% of all
recruited schools) and 15 teachers (5% of all teachers) opted out of participation in this study.
Student- and teacher-level demographics and descriptive information are displayed in Tables 1 and 2, with statistical tests (t tests and crosstabulations with chi-square tests) comparing teachers in the Second Step condition with teachers in the control condition. The total child sample was 7,419, with 3,727 students in the Second Step condition and 3,692 students in the control condition. There were more students in the Second Step condition who were in kindergarten, and fewer who were in first grade. With regard to socioeconomic status, 50% and 78% of participating students in Washington and Arizona, respectively, received free or reduced-price lunch. Although there were some significant differences in the racial and ethnic breakdown of the students across the two sites (see Table 1), the total sample was relatively representative of the ethnic (nationally 64% non-White) and socioeconomic (nationally 48% of students receive free or reduced-price lunch) distribution of school-age children in the United States (U.S. Census Bureau, 2011).
The total teacher sample (see Table 2) was 310, with 159 Second Step teachers and 151 control teachers. Teacher demographics
were comparable across conditions, with some significant differences in race/ethnicity.
Procedures and Design
Overview. The study used a large-scale, matched, randomized controlled design with 61 elementary schools randomly assigned within their district to either the early start (treatment; n = 31) or delayed start (control; n = 30) condition (see Low, Cook, Smolkowski, & Buntain-Ricklefs, 2015, for the CONSORT diagram). The delayed start condition did not receive Second Step during the time period of this study. Schools within Washington and Arizona were matched on free and reduced-price lunch and percentage of non-White students for design purposes (Murray, 1998). There were no significant differences between the treatment and control groups on baseline outcome measures (see Results section). The present study includes data from the fall (T1) and spring (T2) assessments gathered in Year 1. The overall study represents an evaluation of the impact of implementing two consecutive years of Second Step.
Table 1
Child-Level Sample Descriptive Information at Fall Quarter

Variable | Control, n (%) | Second Step, n (%) | Total sample, n (%)
Total students | 3,692 | 3,727 | 7,419
Grade
  Kindergarten | 1,482 (40.1) | 1,653 (44.4) | 3,135 (42.3)
  Grade 1 | 1,991 (53.9) | 1,863 (50.0) | 3,854 (51.9)
  Grade 2 | 219 (5.9) | 211 (5.7) | 430 (5.8)
Sex
  Male | 1,772 (48.0) | 1,788 (48.0) | 3,560 (48.0)
  Female | 1,657 (44.9) | 1,704 (45.7) | 3,361 (45.3)
  Missing | 263 (7.1) | 235 (6.3) | 498 (6.7)
Race
  Asian | 368 (10.0) | 333 (8.9) | 701 (9.4)
  Native Hawaiian or other Asian/Pacific Islander | 25 (.7) | 46 (1.2) | 71 (1.0)
  Black or African American | 232 (6.3) | 212 (5.7) | 444 (6.0)
  American Indian or Alaska Native | 86 (2.3) | 123 (3.3) | 209 (2.8)
  Caucasian/White, non-Hispanic | 1,137 (30.8) | 1,542 (41.4) | 2,679 (36.1)
  More than one race | 196 (5.3) | 183 (4.9) | 379 (5.1)
  Hispanic | 908 (24.6) | 761 (20.4) | 1,669 (22.5)
  Missing | 740 (20.8) | 527 (14.1) | 1,267 (17.1)
Student special education status
  Not in special education | 2,418 (65.5) | 2,524 (67.7) | 4,942 (66.6)
  Special education | 321 (8.7) | 309 (8.3) | 630 (8.5)
  Missing | 953 (25.8) | 894 (24.0) | 1,847 (24.9)
Student English language learner (ELL) status
  Not an ELL | 2,075 (56.2) | 2,221 (59.6) | 4,296 (57.9)
  ELL student | 829 (22.5) | 716 (19.2) | 1,545 (20.8)
  Missing | 788 (21.3) | 790 (21.2) | 1,578 (21.3)
Age, M (SD) | 6.2 (.7) | 6.2 (.8) | 6.2 (.8)
Number of school days missed, M (SD) | 9.0 (7.7) | 9.2 (7.7) | 9.2 (7.8)
Fall percentage of intervals on-task behavior, M (SD) | 83.3 (20.1) | 81.9 (20.6) | 82.6 (20.4)
Fall percentage of intervals disruptive behavior, M (SD) | 8.8 (14.3) | 9.5 (15.4) | 9.1 (14.9)
Fall oral reading fluency (words read correct per minute), M (SD) | 24.2 (33.4) | 22.8 (33.8) | 23.63 (33.6)
Fall math percent correct, M (SD) | 28.2 (27.2) | 26.9 (27.7) | 27.5 (27.5)
Spring percentage of intervals on-task behavior, M (SD) | 80.1 (22.5) | 79.7 (22.6) | 79.9 (22.5)
Spring percentage of intervals disruptive behavior, M (SD) | 9.6 (16.5) | 8.6 (14.7) | 9.1 (15.6)
Spring oral reading fluency (words read correct per minute), M (SD) | 48.5 (42.3) | 48.8 (45.2) | 48.7 (43.7)
Spring math percent correct, M (SD) | 55.9 (32.2) | 54.3 (33.4) | 55.1 (32.8)

Note. For the following variables, t test or chi-square p < .05: grade, race, student English language learner status, fall percentage of intervals on-task behavior, spring percentage of intervals disruptive behavior.
Training participation. Training on the Second Step curriculum (1 hr) and Proactive Classroom Management (PCM; 3 hr) was provided to participating early start schools. The Second Step training was consistent with the standard support provided by Committee for Children and was intended to increase motivation to implement the program, allow teachers to become familiar with the content, and provide specific examples of how to deliver the program with fidelity. All early start schools participated in the training, and all kindergarten, first, and second grade teachers involved in data collection participated in the webinar, as determined by attendance sheets collected by school personnel.
The PCM trainings are not standard practice in Second Step implementation but were a response to district needs at the time of recruitment. A very brief overview of classroom strategies was presented to meet the needs of schools without providing a sufficiently strong dosage that one would anticipate having a strong impact on classroom behaviors (consistent with "train and hope"; Stokes & Baer, 1977). Specifically, PCM strategies were delivered either via DVD or in person, depending on the preference of the school, and focused on practices that would help support, reinforce, and facilitate students' active engagement in learning, including Second Step lessons, such as positive greetings at the
door, attention signals, a 5-to-1 ratio of positive to negative interactions, and behavior-specific praise (Simonsen, Fairbanks, Briesch, Myers, & Sugai, 2008). As part of the training, a focus was placed on connecting the use of the PCM strategies to promoting students' use of the social emotional skills being taught to them. For a fuller description of the PCM training and the methods used to support the implementation of Second Step, see Low et al. (2015).
Compensation. Participating schools, teachers, and school liaisons were given a financial stipend for their involvement in the study. Early start schools were provided the curricula at no cost, and delayed start schools were scheduled to receive the free curricula at the end of data collection.
Measures
Data were collected at three time points during the academic year: fall, winter, and spring. However, for the purposes of this study, only two data collection periods (fall and spring) were included in the analyses because the winter data collection involved gathering only a limited subset of data (e.g., direct observation of classroom behavior). Data collection was not blind, as
Table 2
Fall Quarter Teacher-Level Sample Descriptive Information

Variable | Control, n (%) | Early start, n (%) | Total sample, n (%)
Total teachers | 151 | 159 | 310
Site
  ASU | 48 (31.8) | 48 (30.2) | 96 (31.0)
  UW | 103 (68.2) | 111 (69.8) | 214 (69.0)
Sex
  Male | 9 (6.0) | 3 (1.9) | 12 (3.9)
  Female | 142 (94.0) | 156 (98.1) | 298 (96.1)
Hispanic or Latino/a
  No | 142 (94.0) | 149 (94.3) | 291 (93.9)
  Yes | 9 (6.0) | 9 (5.7) | 18 (5.8)
  Missing | 0 | 1 (.7) | 1 (.3)
Race
  Asian | 6 (4.0) | 3 (1.9) | 9 (2.9)
  Native Hawaiian or other Asian/Pacific Islander | 0 | 3 (1.9) | 3 (1.0)
  Black or African American | 0 | 2 (1.3) | 2 (.6)
  American Indian or Alaska Native | 1 (.7) | 1 (.6) | 2 (.6)
  Caucasian/White | 128 (84.8) | 143 (92.3) | 271 (87.4)
  More than one race | 10 (6.6) | 3 (1.9) | 13 (4.2)
  Other | 6 (4.0) | 0 | 6 (1.9)
  Missing | 0 | 4 (2.6) | 4 (1.3)
Highest degree received
  Bachelor's degree | 48 (33.8) | 64 (42.1) | 115 (37.1)
  Master's degree | 87 (61.3) | 85 (55.9) | 185 (59.7)
  Professional degree | 6 (4.2) | 3 (2.0) | 9 (2.9)
  Doctorate degree | 1 (.7) | 0 | 1 (.3)
Grade(s) taught
  Kindergarten | 61 (40.4) | 70 (44.0) | 131 (42.3)
  Kindergarten/first grade split | 4 (2.6) | 1 (.6) | 5 (1.6)
  First grade | 75 (49.7) | 79 (49.7) | 154 (49.7)
  First grade/second grade split | 4 (2.8) | 2 (1.3) | 6 (1.9)
  Second grade | 7 (4.6) | 7 (4.4) | 14 (4.5)
Age, M (SD) | 42.9 (11.9) | 44.3 (12.8) | 43.67 (12.38)
  Missing | 2 | 5 | 7
Number of years teaching, M (SD) | 14.4 (9.4) | 15.9 (10.5) | 15.19 (10.04)

Note. For race, t test or chi-square p < .05. ASU = Arizona State University; UW = University of Washington.
trained graduate students were aware of the condition of each of the participating schools.
School demographic and archival data. We collected school-level data from publicly available online sources (e.g., the National Center for Education Statistics website, school district websites) on the type of school (e.g., public vs. private), number of students, racial/ethnic composition of students, and percentage of students receiving free or reduced-price lunch. These data were used as covariates in the multilevel models.
Behavioral observation. To record class-wide and individual student behavior, a behavioral observation system was developed based on the Behavioral Observation of Students in Schools (BOSS; Shapiro & Kratochwill, 2000). The BOSS has been shown to produce acceptable interobserver agreement (IOA) and concurrent and predictive validity with other measures (Volpe, DiPerna, Hintze, & Shapiro, 2005). The three behavioral coding categories consisted of on-task behavior, off-task behavior, and disruptive behavior. The present manuscript focused on the two aspects of classroom behavior that are most closely tied to social emotional competence: on-task behavior, defined as behaviors consistent with the current learning task or instructional directive (e.g., listening to instruction, talking to peers about the academic topic, reading, writing, raising a hand), and disruptive behavior, defined as behaviors not pertinent to the assigned activity/task that negatively impact the learning environment (e.g., blurting, leaving one's seat, distracting peers, and making noises with objects). Off-task behavior was not included because it represents the inverse of on-task behavior.
Observations were conducted in all classrooms (early and delayed start) across both sites by trained graduate students during core academic instruction time in the fall, winter, and spring, but only fall and spring data were used in this study. Each student was observed for 2 min total, divided into 10-s intervals. To obtain class-wide estimates of on-task behavior, observers were instructed to begin with an identified student in the front or back of the classroom and systematically move to the next student to the left after each interval. After the observers made their way through all students in the class, they repeated the same process until the observation time elapsed. A minimum of 12 intervals of data per student and roughly 300 total intervals per class per data collection period were obtained. Identifiable information was used as part of the observation to link data back to individual students. This observation system allowed for the calculation of class-wide and individual student estimates.
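To make that aggregation concrete, the sketch below (Python) shows how per-student and class-wide percentages of intervals on-task could be computed from interval-by-interval records like those described above; the column names and interval codes are hypothetical, not the study's actual data.

```python
import pandas as pd

# Hypothetical interval-level records: one row per 10-s observation interval,
# coded 1 if the observed student was on-task during that interval, else 0.
intervals = pd.DataFrame({
    "classroom": ["A"] * 6 + ["B"] * 6,
    "student": ["s1", "s1", "s2", "s2", "s3", "s3",
                "s4", "s4", "s5", "s5", "s6", "s6"],
    "on_task": [1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1],
})

# Individual estimate: percentage of a student's observed intervals on-task.
student_pct = intervals.groupby(["classroom", "student"])["on_task"].mean() * 100

# Class-wide estimate: percentage of all observed intervals on-task in a class.
class_pct = intervals.groupby("classroom")["on_task"].mean() * 100

print(student_pct)
print(class_pct)
```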
Prior to conducting the observations, graduate students were trained on the observation system. Before beginning baseline data collection, each observer was required to reach at least 90% agreement during practice trials with an identified observer who served as the anchor. IOA data, consisting of two observers conducting the observation at the same time on the same students, were collected on roughly 20% of the observation sessions. IOA was calculated using the point-by-point method, which consists of calculating agreement for each and every interval. This method has been shown to be a more accurate estimate of the agreement between raters for direct observation systems with interval recording formats (Shapiro & Kratochwill, 2000). The results revealed that IOA averaged 88% (minimum = 72%, maximum = 100%), which was associated with a kappa value of .71 and is considered an acceptable level of interrater reliability (Bailey & Burch, 2002; Viera & Garrett, 2005).
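The sketch below (Python; observer codes are invented for illustration) shows one way the point-by-point agreement and chance-corrected kappa described above could be computed for a pair of observers coding the same intervals.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Interval-by-interval codes from two observers watching the same student
# (1 = on-task, 0 = not on-task); values are made up for illustration.
observer_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
observer_b = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1])

# Point-by-point IOA: proportion of intervals on which both observers agree.
ioa = np.mean(observer_a == observer_b) * 100

# Cohen's kappa corrects the raw agreement for chance agreement.
kappa = cohen_kappa_score(observer_a, observer_b)

print(f"IOA = {ioa:.1f}%, kappa = {kappa:.2f}")
```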
Curriculum-based measurement (CBM). To assess potential growth in academic performance as a result of Second Step, commercially available CBM probes from Aimsweb were administered: (a) oral reading fluency (words read correct per minute) and (b) math calculation (number of digits correct in a minute and percent correct). R-CBM, or oral reading fluency, probes represent a standardized, general outcome measure of reading performance that is highly sensitive to students' response to instruction (Fuchs & Fuchs, 1999). M-CBM, or math computation, probes have been shown to be a reliable and valid general outcome measure of overall mathematics computation (Thurber, Shinn, & Smolkowski, 2002). Students received grade-equivalent probes for both of these measures.
The R-CBM is a 1-min timed reading of a short passage, which measures oral reading fluency (ORF). The reading CBM was administered one-to-one with the student. The trained graduate research assistants administered the R-CBM using a stopwatch, a clipboard, a wipe cloth, a grade-leveled passage in a plastic sleeve, a dry erase pen, and a laptop or a recording sheet to record the student's words read correctly (WRC) per minute. The M-CBM assessed math computational skills (M-COMP). It was an 8-min, group-administered test. Graduate students used a timer and provided copies of the M-COMP testing sheet for the entire class. The graduate research assistants followed the standardized administration directions for the R-CBM and M-COMP from the Aimsweb website.
Fidelity of implementation. Teachers were asked to complete weekly implementation logs to record adherence to the program, as well as adaptations and student engagement. Adherence had two components: adherence to the key lesson components (five items; yes/no) and adaptations/modifications (four items on a 4-point Likert scale; e.g., "To what extent did you leave out parts of the lesson?"). Engagement had two components: ratings of the degree of student engagement (three items on a 4-point Likert scale; e.g., "To what extent were students following along with the lesson?") and the estimated percentage of students who were engaged in the lesson (0% to 100%). For purposes of comprehension, it is important to note that the fidelity engagement variable was used to capture students' participation in and responsiveness to the lessons, whereas the on-task behavior dependent variable was used to capture the degree to which students were behaviorally engaged during core instructional time. Teachers were also asked to keep a log of how many lessons they completed by the end of the year and reported this information to a school liaison as an indicator of dosage. We modeled the measure after recommendations from Sanetti and Kratochwill (2011) for developing a reliable and valid measure of fidelity of implementation, which included operationalizing the core components and providing for repeated, ongoing assessment of fidelity. Previous research has demonstrated that the fidelity measure is associated with significant differential effects, lending support for the validity of the self-report measure (Low et al., 2015, 2016).
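Because the fidelity subscales are later standardized and averaged into a composite (see the Analyses section), a minimal sketch of that step is shown below; the teacher identifiers and subscale values are invented for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical teacher-level fidelity subscale scores aggregated from the
# weekly implementation logs (values invented for illustration).
fidelity = pd.DataFrame({
    "teacher": ["t1", "t2", "t3", "t4"],
    "adherence": [1.9, 1.6, 1.8, 1.4],
    "engagement": [6.1, 5.2, 6.4, 5.0],
    "generalization": [9.3, 7.8, 8.5, 10.1],
})

subscales = ["adherence", "engagement", "generalization"]

# Each subscale has a different possible range, so standardize each one ...
z = (fidelity[subscales] - fidelity[subscales].mean()) / fidelity[subscales].std(ddof=0)

# ... then average the z-scored subscales into a single fidelity composite.
fidelity["fidelity_composite"] = z.mean(axis=1)
print(fidelity)
```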
Analyses
Multilevel models (MLM) were run as three-level linear models (school, teacher, and student) with robust standard errors and full
information maximum likelihood, predicting each of the dependent variables with separate models, using the HLM7 program (Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2011). Because schools were the level of randomization, condition was entered as a level-3 (school-level) variable, and predicted scores and standard errors are adjusted for nesting at the school and teacher levels. We used four major dependent variables: reading (R-CBM; words correct per minute); math (M-CBM; percent correct); academic time engaged (BOSS-OT, or on-task behavior); and disruptive behavior (BOSS-DB). BOSS off-task behavior was not included because it was highly negatively correlated with BOSS-OT (r = -.9). Because grade level was highly related to all dependent variables (DVs), we transformed all DVs into z scores by grade and semester. The fall score for each DV was entered as a covariate in each model to control for baseline differences. The four mixed models testing main effects were as follows:
$$
\begin{aligned}
Z\_DV.2_{ijk} = {} & \gamma_{000} + \gamma_{001}\,\mathrm{CONDITION}_{k} + \gamma_{100}\,Z\_DV.1_{ijk} + \gamma_{101}\,Z\_DV.1_{ijk} \times \mathrm{CONDITION}_{k} \\
& + r_{0jk} + r_{1jk}\,Z\_DV.1_{ijk} + u_{00k} + u_{10k}\,Z\_DV.1_{ijk} + e_{ijk}
\end{aligned}
$$
where Z_DV = grade-adjusted z score for each of the DVs (reading words per minute, math percentage correct, BOSS Academic Time Engaged, BOSS Disruptive Behavior); CONDITION = dummy code for early start versus comparison, with early start = 1; .1 and .2 = fall and spring scores, respectively; γ000 = intercept; eijk = Level 1 variance/error; r0jk = Level 2 variance for the intercept; r1jk = Level 2 variance for the spring score; and u0.k = random variance. It should be noted that the Z_DV.1ijk × CONDITIONk term was included as a covariate to control for baseline scores.
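For readers working outside HLM7, the sketch below is a rough Python analogue of this setup under stated assumptions: the data file, column names, and the simplification to random intercepts only (the published models also allow the fall-score slope to vary randomly) are assumptions, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data with columns: school, teacher, grade,
# condition (1 = early start, 0 = control), dv_fall, dv_spring.
df = pd.read_csv("students.csv")

# Standardize each DV within grade, mirroring the grade-adjusted z scores.
for col in ["dv_fall", "dv_spring"]:
    df["z_" + col] = df.groupby("grade")[col].transform(
        lambda s: (s - s.mean()) / s.std(ddof=0)
    )

# Simplified three-level model: random intercepts for schools (groups) and for
# teachers nested within schools (variance component).
model = smf.mixedlm(
    "z_dv_spring ~ condition * z_dv_fall",
    data=df,
    groups="school",
    re_formula="1",
    vc_formula={"teacher": "0 + C(teacher)"},
)
result = model.fit(reml=False)  # maximum likelihood estimation
print(result.summary())
```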
Additional moderator analyses limited to the early start group examined possible relationships between intervention fidelity and academic outcomes. The three fidelity subscales (adherence, generalization, and engagement) each had a different possible range. Therefore, we created z scores and computed a total score by taking the average of all three subscales. Ten teachers (6.3%) were missing some or all fidelity data. Data appeared to be missing at random because there were no significant differences between those with complete data and those with incomplete data on demographics, grade taught, classroom academic scores at fall and spring, and scores on the My Class Inventory at fall and spring. To preserve statistical power, and given our secondary aim of examining the relationship between different dimensions of fidelity and student academic-related outcomes, we used multiple imputation (MI) in SPSS Version 21 to create five data sets with estimated scores replacing the missing data for these teachers. MI models were based on linear regression with all predictive existing data, constrained at the minimum and maximum of valid data (Graham, 2009; Raghunathan, Reiter, & Rubin, 2003). MI analyses pooled results from all five data sets during the creation of multilevel models similar to those above, but including each of the three fidelity subscales as covariates. All predictor variables were included in each MLM. Each model included all effects as randomly varying in an initial run. To maximize statistical power and model parsimony, we eliminated random variance terms that did not contribute to model fit. This was done by iteratively building models, fixing the variance component with the highest significance value (p > .10), and running the model again, until all remaining
variance components were significant at p < .10 and the models were considered final. Our final four moderator models were:
$$
\begin{aligned}
Z\_READ.2_{ijk} = {} & \gamma_{000} + \gamma_{010}\,\mathrm{BSECSTEP}_{jk} + \gamma_{020}\,\mathrm{ADHERE}_{jk} + \gamma_{030}\,\mathrm{ENGAGE}_{jk} + \gamma_{040}\,\mathrm{GENERAL}_{jk} \\
& + \gamma_{100}\,Z\_READ.1_{ijk} + r_{0jk} + r_{1jk}\,Z\_READ.1_{ijk} + u_{00k} + u_{01k}\,\mathrm{BSECSTEP}_{jk} \\
& + u_{04k}\,\mathrm{GENERAL}_{jk} + e_{ijk} \quad (1)
\end{aligned}
$$

$$
\begin{aligned}
Z\_MATH.2_{ijk} = {} & \gamma_{000} + \gamma_{010}\,\mathrm{BSECSTEP}_{jk} + \gamma_{020}\,\mathrm{ADHERE}_{jk} + \gamma_{030}\,\mathrm{ENGAGE}_{jk} + \gamma_{040}\,\mathrm{GENERAL}_{jk} \\
& + \gamma_{100}\,Z\_MATH.1_{ijk} + r_{0jk} + r_{1jk}\,Z\_MATH.1_{ijk} + u_{02k}\,\mathrm{ADHERE}_{jk} \\
& + u_{10k}\,Z\_MATH.1_{ijk} + e_{ijk} \quad (2)
\end{aligned}
$$

$$
\begin{aligned}
Z\_AET.2_{ijk} = {} & \gamma_{000} + \gamma_{010}\,\mathrm{BSECSTEP}_{jk} + \gamma_{020}\,\mathrm{ADHERE}_{jk} + \gamma_{030}\,\mathrm{ENGAGE}_{jk} + \gamma_{040}\,\mathrm{GENERAL}_{jk} \\
& + \gamma_{100}\,Z\_AET.1_{ijk} + r_{0jk} + u_{01k}\,\mathrm{BSECSTEP}_{jk} + e_{ijk} \quad (3)
\end{aligned}
$$

$$
\begin{aligned}
Z\_DB.2_{ijk} = {} & \gamma_{000} + \gamma_{010}\,\mathrm{BSECSTEP}_{jk} + \gamma_{020}\,\mathrm{ADHERE}_{jk} + \gamma_{030}\,\mathrm{ENGAGE}_{jk} + \gamma_{040}\,\mathrm{GENERAL}_{jk} \\
& + \gamma_{100}\,Z\_DB.1_{ijk} + r_{0jk} + r_{1jk}\,Z\_DB.1_{ijk} + u_{01k}\,\mathrm{BSECSTEP}_{jk} + e_{ijk} \quad (4)
\end{aligned}
$$
where Z_READ = grade-adjusted z score for reading words per minute; Z_MATH = grade-adjusted z score for math percent correct; Z_AET = grade-adjusted BOSS Academic Time Engaged z score; Z_DB = grade-adjusted BOSS Disruptive Behavior z score; BSECSTEP = number of Second Step lessons delivered (grand-mean centered); ADHERE = adherence score; ENGAGE = engagement score; GENERAL = generalization score; .1 and .2 = fall and spring scores, respectively; γ000 = intercept; eijk = Level 1 variance/error; r0jk = Level 2 variance for the intercept; r1jk = Level 2 variance for the spring score; and u0.k = random variance.
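The multiple imputation step described above was run in SPSS; the sketch below is a loose Python analogue under stated assumptions (the data frame, column names, and the use of scikit-learn's IterativeImputer are illustrative, not the authors' actual procedure).

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical teacher-level data with some missing fidelity values.
teachers = pd.DataFrame({
    "adherence": [1.8, 1.6, np.nan, 1.9, 1.5],
    "engagement": [6.0, np.nan, 5.5, 6.3, 5.1],
    "generalization": [8.9, 9.4, 8.1, np.nan, 9.9],
    "lessons": [18, 15, 20, np.nan, 17],
})

# Create five imputed data sets from a regression-based imputation model,
# constrained to the observed minimum and maximum of each variable.
imputed_sets = []
for seed in range(5):
    imputer = IterativeImputer(
        sample_posterior=True,
        random_state=seed,
        min_value=teachers.min().values,
        max_value=teachers.max().values,
    )
    filled = pd.DataFrame(imputer.fit_transform(teachers), columns=teachers.columns)
    imputed_sets.append(filled)

# Downstream multilevel models would be fit to each imputed set and the
# resulting estimates pooled.
```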
Results
Participant Descriptives, Mobility, and Missing Data
There were 714 students (9.6%) absent during fall data collection. By the spring, 635 students (8.1%) had moved out of the district, to a different school, or to a different classroom and left the study; 22 (.3%) moved to a different classroom or a different school in the same condition and remained in the study; 38 (.5%) left the study for unknown reasons; 67 (.9%) left the study because the teacher declined; and 626 (8.4%) were absent during spring data collection. Student attrition was unrelated to condition (χ2 = .38, p = .54), gender (χ2 = .31, p = .65), and gender (χ2 = .77, p = .31). Data were missing on at least one of the four dependent variables for 10.5% to 11.1% of the sample in the fall and 16.1% to 18.0% in the spring. Although this may seem high, relative to other school-based studies conducted with ethnically and socioeconomically diverse students the percentage of missing data is relatively low (e.g., Gillham et al., 2007). Previous research has shown that the mean rate is 25% (Biglan et al., 1991).
Descriptives for untransformed spring outcome scores are displayed in Table 1. Independent t tests found that the control group had more disruptive behavior, t(5952.34) = 2.469, p = .014; no differences were found for on-task behavior, reading, or math scores.
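The noninteger degrees of freedom suggest a Welch-type (unequal variances) test; a minimal sketch of such a comparison, using invented score arrays rather than the study's data, is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical spring disruptive-behavior scores for the two conditions.
control = np.random.default_rng(0).normal(9.6, 16.5, size=300)
second_step = np.random.default_rng(1).normal(8.6, 14.7, size=300)

# Welch's t test (unequal variances), which yields noninteger df as reported.
t_stat, p_value = stats.ttest_ind(control, second_step, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```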
Fidelity of Implementation
Of all indicators of implementation, dosage varied the most within schools. The average number of lessons completed across sites was 17.42 (SD = 3.72, range = 7 to 25). The school-level (unconditional) intraclass correlation (ICC) for two-level models (teachers nested within schools), calculated as the Level 2 variance divided by the total model variance, was .32, indicating that 32% of the variance in the number of sessions delivered was accounted for at the school level; hence, the number of lessons completed was more similar among classrooms within each individual school than across all schools. Most teachers made only a few adaptations, but a few made a number of modifications (M = 1.92, SD = 1.28, range = 0.00 to 6.55, ICC = .02). Adherence scores ranged from .7 to 2.0 and averaged 1.77 (SD = .25); engagement ranged from 3.5 to 19.4 and averaged 5.9 (SD = 1.2); generalization ranged from 15.8 to 51.7 and averaged 8.7 (SD = 3.3).
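A minimal sketch of the unconditional two-level ICC computation described above is shown below, assuming a hypothetical teacher-level file with school and lessons columns; it is an illustration, not the authors' HLM7 code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level dosage data: number of lessons delivered,
# with teachers nested within schools (file and column names are assumptions).
dosage = pd.read_csv("dosage.csv")  # columns: school, lessons

# Unconditional (intercept-only) two-level model: teachers within schools.
model = smf.mixedlm("lessons ~ 1", data=dosage, groups="school")
result = model.fit()

# ICC = between-school variance / (between-school + within-school variance).
between = result.cov_re.iloc[0, 0]
within = result.scale
icc = between / (between + within)
print(f"School-level ICC = {icc:.2f}")
```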
Main Effect Analysis
Table 3 displays the most salient statistics from the four main effect models, including the coefficients for the school-level z scores at spring semester and the fixed effect adjustment related to the early start condition. None of the basic models revealed a significant relationship between delivery of the Second Step program and academic-related outcomes. For instance, to interpret Table 3, the average control school words correct per minute z score in spring semester did not significantly differ from zero (coefficient = .012, p = .705), and the Second Step group had nonsignificant scores that were .065 standard deviations higher than the control group (coefficient = .065, p = .161). The other statistics in Table 3 are interpreted in a similar manner.
Moderator Analyses
Table 4 shows the results from four multilevel models of school-level z scores at spring semester, controlling for fall semester z scores, predicted by all three fidelity subscales and the total number of lessons delivered. Degrees of freedom vary widely because some terms were allowed to vary randomly (which resulted in smaller degrees of freedom) whereas others were fixed.
There was a small but significant relationship between spring reading scores and engagement, such that a one standard deviation higher engagement z score was associated with a spring reading score that was .126 standard deviations higher (p = .008); spring reading scores were also highly correlated with fall reading scores (coefficient = .858, p < .001). No other fidelity subscale or the number of lessons delivered significantly predicted spring reading scores. There was a significant relationship between spring math scores and the number of lessons delivered, such that each additional lesson was related to math scores that were .016 standard deviations higher (p = .048); spring math scores were also correlated with fall math scores (coefficient = .607, p < .001), but spring math scores were not significantly related to any fidelity subscale. Spring academic time engaged (on-task behavior) was significantly correlated with the number of lessons delivered, such that each additional lesson was related to a .023 standard deviation higher academic time engaged score (p = .026); spring academic time engaged was also correlated with fall academic time engaged (coefficient = .111, p < .001) but was not significantly related to any other fidelity subscale. Spring disruptive behavior was significantly moderated by the engagement subscale, such that a one standard deviation higher engagement z score was related to a .132 standard deviation lower disruptive behavior z score (p = .046); spring disruptive behavior was also correlated with fall disruptive behavior (coefficient = .107, p < .001). There was some evidence, though not statistically significant, that a greater number of lessons may have been related to lower disruptive behavior scores (coefficient = -.016, p = .063). Spring disruptive behavior was not related to any other fidelity subscale.
Discussion
There is a need for continued research examining the impact of SEL programs, like Second Step, as universal supports that promote better academic outcomes for early elementary students who are in a critical developmental period of acquiring cor