Past Research Projects

Validation of the Teamwork Expectations and Attitudes Measure 

Research team: Brittney Stobbe, Jonathan B. K. Lau, Brandon J. Justus, and Shayna A. Rusticus 


Teamwork is essential in any group project, and being able to identify attitudes and expectations could support students and instructors in learning. To help identify attitudes and expectations in groups, our study developed a scale called the Teamwork Expectations and Attitudes Measure (TEAM). First, two pilot studies were conducted to develop and refine the items, reducing an initial pool of 75 items to a 14-item unidimensional scale. Second, the pilot studies were followed by a validation study, which confirmed the unidimensional structure of the scale and provided evidence of convergent, discriminant, and criterion validity. The purpose of developing the TEAM scale was to create a tool that could help instructors and students assess perceptions of teamwork in class, which in turn could shed light on a class's overall view of group work. Given these findings, the TEAM scale can be a useful tool in undergraduate settings that need extra support in examining teamwork within classes.

Justus, B. J., Rusticus, S. A., Stobbe, B. L. P., & Lau, J. B. K. (2021, June). Using the Teamwork Expectations and Attitudes Measure (TEAM) to assess student perceptions of working in teams [Poster]. 2021 Annual Meeting of the National Council on Measurement in Education, Virtual.

Employable-Skills Self-Efficacy Survey: A Validation Study

Amanda R. Dumoulin and Shayna A. Rusticus


The Employable Skills Self-Efficacy Survey (ESSES; Ciarocco & Strohmetz, 2018) is a scale that measures the self-efficacy of undergraduate psychology students. This measure is intended to assess an important collection of constructs and has many potential benefits, including assisting institutions in ensuring their students accomplish the goals laid out by the American Psychological Association (2013) for undergraduate psychology students. The purpose of this study was to provide additional validity evidence for the ESSES by looking at its internal structure, reliability, and convergent and discriminant validity. As identified through confirmatory factor analysis, the ESSES does not have an eleven-factor structure, but ten of the eleven subscales were found to be unidimensional. However, only three of the unidimensional subscales had acceptable reliability. There was evidence of convergent validity, but limited evidence of discriminant validity. Revisions are necessary before this scale should be used to measure the employable skills self-efficacy of undergraduate psychology students.  

The paper has been accepted for publication in the Scholarship of Teaching and Learning in Psychology.

Dumoulin, A. R., & Rusticus, S. A. (2021, June). Employable Skills Self-Efficacy Survey: A validation study [Poster]. 2021 Annual Meeting of the National Council on Measurement in Education, Virtual.

Validating a Modified Version of the Self-Directed Learning Readiness Scale (MSDLR) for use Among Undergraduate Students 

Amanda R. Dumoulin, Brandon J. Justus, Jonathan B. K. Lau, and Shayna A. Rusticus 


Self-directed learning readiness (SDLR) refers to the degree to which a learner is ready to be accountable for their own learning and learning needs, and is a skill that students can develop. Understanding student levels of SDLR can help optimize the learning environment for more effective teaching and learning strategies. The purpose of this study was to provide additional validity evidence for a modified version of the SDLR scale. Evidence of internal structure and relations with other variables was examined in a sample of 203 undergraduate students. A confirmatory factor analysis did not support the three-factor structure of the modified SDLR scale; however, a follow-up exploratory factor analysis suggested that there were three factors, with some items not loading onto their intended factors. Evidence was provided for convergent validity, and mixed evidence was found for discriminant validity. Overall, these results suggest that some modifications may be needed for this scale, but there is potential for this measure to be suitable for assessing readiness for self-directed learning.

Dumoulin, A. R., Justus, B. J., Lau, J. B. K., & Rusticus, S. A. (2021). Validating a modified version of the Self-Directed Learning Readiness scale (MSDLR) for use among undergraduate students. Kwantlen Psychology Student Journal, 3.

Dumoulin, A. R., Justus, B. J., Lau, J. B. K., & Rusticus, S. A. (2021, June). Validating the self-directed learning readiness scale for use with undergraduate students [Poster]. 82nd Canadian Psychological Association Annual National Convention, Virtual.

Does self-directed learning readiness predict undergraduate students’ instructional preferences?  

Brandon J. Justus, Shayna A. Rusticus, and Brittney Stobbe  


Self-directed learning is a process by which students take the lead, with or without the help of others, in determining their learning needs and managing their learning strategies and outcomes. Relatedly, self-directed learning readiness (SDLR) looks at the attitudes, abilities, and personality characteristics necessary for self-directed learning. In study one, we shortened and slightly modified the SDLR scale (Fisher et al., 2001) for use among undergraduate university students and examined its factor structure and reliability. In a sample of 194 students, the three-factor structure of this scale (self-management, desire to learn, and self-control) was confirmed with acceptable reliability. In study two, we examined whether SDLR subscales predicted a preference for a teacher-directed or student-directed class format in a sample of 256 undergraduate students. We conducted a series of four multiple linear regressions to examine whether the three dimensions of SDLR were predictive of four classroom preference styles (knowledge construction, teacher direction, cooperative learning, and passive learning). Three of these analyses were statistically significant with small to medium effect sizes. These findings have the potential to identify factors that may be linked to greater student engagement, more positive learning environments, and greater success in the learning process.

Justus, B. J., Rusticus, S. A., & Stobbe, B. (2020, July). Does self-directed learning readiness predict undergraduate students’ instructional preferences? [Poster]. 81st Canadian Psychological Association Annual National Convention, Montréal, Quebec, Canada.

This paper has been accepted for publication in The Canadian Journal for the Scholarship of Teaching and Learning.

Comparing Student- and Teacher-Formed Teams on Group Dynamics, Satisfaction, and Performance

Shayna A. Rusticus and Brandon J. Justus  


We compared student- and teacher-formed teams on aspects of group dynamics, satisfaction, and performance. Two sections of an introductory psychology research methods course were randomly assigned to either student-formed (n = 28) or teacher-formed (n = 33) teams. We conducted t-tests on 10 measures related to group dynamics, satisfaction, and success. Academic performance and group work contribution were the only measures found to be statistically different, with the student-formed teams scoring higher than the teacher-formed teams. Follow-up individual interviews and focus groups conducted with 13 of these students suggested a slight preference for the teacher-formed method because it was transparent and eliminated the stress of having to choose one's team members. We further recommend this method because of its simplicity and closer approximation to real-world scenarios. Several factors identified as being important for effective team functioning, regardless of group formation method, are also discussed.

Rusticus, S. A., & Justus, B. (2019). Comparing student- and teacher-formed teams on group dynamics, satisfaction, and performance. Small Group Research, 50(4), 443–457.

Establishing Equivalence Thresholds Using a Distribution-Based Approach  

Shayna A. Rusticus, Kyla Javier, and Kevin W. Eva  


Establishing group equivalence, as opposed to group differences, is a common goal in many educational/research contexts. Tests of equivalence are used to address such goals; however, a key methodological consideration is how to operationalize equivalence. This study sought to verify if a distribution-based approach, based on effect size, can establish a generalizable criterion for identifying equivalence. A sample of 331 students was presented with a series of numerical statements or bar graphs representing three measures: (1) overall academic achievement, (2) an individual exam score, and (3) a course evaluation survey. Descriptive statistics and a mixed ANOVA examined the effects on equivalence ratings of (a) the difference between means, (b) spacing of the differences (narrow/wide), and (c) presentation format (bar graph/numerical). Across the measures and conditions, the equivalence threshold (i.e., the point at which 50% of participants rated the mean difference as non-equivalent) ranged from an effect size of d = 0.37 to d = 1.15, suggesting that a single effect size criterion for establishing the equivalence threshold may not be achievable. Guidelines are provided for setting an appropriate equivalence threshold.    
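To illustrate the distribution-based approach described above, here is a minimal sketch of how a mean difference is standardized into Cohen's d and compared against an equivalence threshold. The d = 0.37 and d = 1.15 thresholds come from the abstract; the group means, standard deviations, and sample sizes are hypothetical values chosen for illustration, not study data.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def is_equivalent(d, threshold):
    """Treat two groups as equivalent when |d| falls below the chosen threshold."""
    return abs(d) < threshold

# Hypothetical course-evaluation means for two cohorts (not study data)
d = cohens_d(4.1, 0.6, 150, 4.0, 0.6, 150)

# The study's observed thresholds ranged from d = 0.37 to d = 1.15, so the
# equivalence verdict can depend heavily on which criterion is adopted.
print(round(d, 3), is_equivalent(d, 0.37), is_equivalent(d, 1.15))
```

Because the observed thresholds span such a wide range of effect sizes, the same mean difference can be judged equivalent under one criterion and not another, which is why the study recommends context-specific guidelines rather than a single universal cutoff.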

What are the Key Elements of a Positive Learning Environment?  
Perspectives from Students and Faculty 

Shayna A. Rusticus, Tina Charmchi, and Andrea Mah  


The learning environment comprises the psychological, social, cultural, and physical setting in which learning occurs and has an influence on student motivation and success. The purpose of the present study was to qualitatively explore, from the perspectives of both students and faculty, the key elements of the learning environment that supported and hindered student learning. We recruited a total of 22 students and 9 faculty to participate in either a focus group or an individual interview session on their perceptions of the learning environment at their university. We analyzed the data using directed content analysis and organized the themes around the three key dimensions of personal development, relationships, and institutional culture. Within each of these dimensions, we identified subthemes that facilitated or hindered student learning and faculty work experiences. We also identified and discussed similarities in the subthemes identified by students and faculty.

This study has been accepted by Learning Environments Research.