AIM Analytics: U-M Community Lightning Talks
December 10 @ 12:00 pm – 1:30 pm

Join us on Monday, December 10 from 12:00 p.m. to 1:30 p.m. in North Quad (105 S State St), Space 2435, for AIM Analytics, where we invite members of the U-M community to share learning analytics projects they are working on.
AIM Analytics is a bi-weekly seminar series for researchers across U-M who are interested in learning analytics, a multi- and interdisciplinary field that brings together researchers from education, the learning sciences, computational sciences and statistics, and discipline-specific forms of educational inquiry.
Slides can be found here.
Lightning Talks will include:
- Social Comparison in MOOCs: Perceived SES, Opinion, and Message Formality by Heeryung Choi
- Abstract: There has been limited research on how perceptions of socioeconomic status (SES) and opinion difference could influence peer feedback in Massive Open Online Courses (MOOCs). Using social comparison theory [11], we investigated the influence of ability- and opinion-related factors on peer feedback text in a data science MOOC. Perceived SES of peers and the formality of written responses were used as the ability-related factor, while agreement between learners represented the opinion-related factor. We focused on understanding the behaviors of the learners who are most prevalent in MOOCs: those from high socioeconomic status countries. Through two studies, we found a strong and repeated influence of agreement on affect and formality in feedback to peers. While a mediation effect of perceived SES was found, a significant effect of formality was not. This work contributes to an understanding of how social comparison theory can be operationalized in online peer writing environments.
- Modeling Gender in Intra and Interpersonal Dynamics during Online Learning Collaborative Interactions by Yiwen Lin
- Abstract: Evidence from past research has suggested that gender differences in collaborative learning often map onto stereotypical gender expectations. For instance, men use more aggressive language while women appear to be more agreeable and emotional. To explore gender differences in collaborative communication, we employed the methodology of Group Communication Analysis (GCA), which allows us to examine multiple sociocognitive aspects of learner interactions. Counter to some previous findings, we did not find significant differences between men and women in the degree of participation. However, our results suggest that women have significantly higher social impact, responsivity, and internal cohesion in small group collaborative environments. Comparing the proportions of learner interaction profiles between men and women further strengthens the evidence that women are more likely to engage in effective discourse. Our findings provide implications for pedagogical practice to increase equity and inclusivity in online collaborative learning.
- Beyond A/B Testing: Sequential Randomization for Developing Interventions in Scaled Digital Learning Environments by Timothy NeCamp
- Abstract: Randomized experiments ensure robust causal inference that is critical to effective learning analytics research and practice. However, traditional randomized experiments, like A/B tests, are limiting in large scale digital learning environments. While traditional experiments can accurately compare two treatment options, they are less able to inform how to adapt interventions to continually meet learners' diverse needs. In this work, we introduce a trial design for developing adaptive interventions in scaled digital learning environments: the sequential randomized trial (SRT). With the goal of improving learner experience and developing interventions that benefit all learners at all times, SRTs inform how to sequence, time, and personalize interventions. In this paper, we provide an overview of SRTs, and we illustrate the advantages they hold compared to traditional experiments. We describe a novel SRT run in a large scale data science MOOC. The trial results contextualize how learner engagement can be addressed through inclusive culturally targeted reminder emails. We also provide practical advice for researchers who aim to run their own SRTs to develop adaptive interventions in scaled digital learning environments. (See the sequential-randomization sketch after the talk list.)
- What Can We Learn About Learner Interaction When One Course is Hosted on Two MOOC Platforms? by Yuanru Tan
- Abstract: Since the inception and adoption of MOOCs, pedagogues have criticized the quality of social learning within centralized platforms. Learning analytics researchers have investigated patterns of forum use and their relationship to learner performance. Yet, there are currently no cross-platform comparisons that explain how technical features of MOOC platforms may impact social interaction and the formation of learner networks. To address this issue, we analyzed MOOC discussion forum data from a single data science ethics course that ran concurrently on two different MOOC platforms (edX and Coursera). Using Social Network Analysis methods, the study compares networks of active forum posters using “Direct Reply” and “Star” tie definitions. Results show that the platforms afforded the formation of different networks, with higher connectedness and higher network centralization seen on edX. The study presents preliminary results, discusses limitations inherent within the current analysis, and sets further directions of research investigating design features of centralized discussion platforms. (See the tie-definition sketch after the talk list.)
- The Impact of Student Opt-Out on Educational Predictive Models by Warren Li
- Abstract: Privacy concerns may lead people to opt in to or out of having their educational data collected. These decisions may impact the performance of educational predictive models. To understand this, we conducted a survey to determine the propensity of students to withhold or grant access to their data for the purposes of training predictive models. We simulated the effects of opt-out on the accuracy of educational predictive models by dropping a random sample of data over a range of increments, and then contextualized our findings using the survey results. We find that grade predictive models are fairly robust and that kappa scores do not decrease unless there is significant opt-out, but when there is, the deteriorating performance disproportionately affects certain subsamples of the population. (See the opt-out simulation sketch after the talk list.)
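
For readers new to sequential randomized trials, here is a minimal sketch of the idea behind NeCamp's talk: unlike an A/B test, which randomizes once, an SRT randomizes each learner at several decision points, and the options offered at a later point can depend on the learner's observed response so far. The two-stage structure, the email-variant names, and the `is_engaged` helper below are illustrative assumptions, not the actual trial design.

```python
# Sketch of a two-stage sequential randomized trial (SRT) versus a single A/B test.
# All names here (email variants, is_engaged) are hypothetical placeholders.
import random

def is_engaged(learner, after, rng):
    """Placeholder for observing a learner's engagement after the stage-1 email."""
    return rng.random() < 0.5

def run_srt(learners, seed=0):
    """Randomize each learner at two decision points; the stage-2 option set
    depends on the observed response to stage 1."""
    rng = random.Random(seed)
    assignments = {}
    for learner in learners:
        # Stage 1: randomize between two reminder-email variants
        # (an A/B test would stop here).
        stage1 = rng.choice(["plain_reminder", "culturally_targeted_reminder"])
        # Observe an intermediate outcome before the next decision point.
        engaged = is_engaged(learner, after=stage1, rng=rng)
        # Stage 2: re-randomize, tailoring the option set to the observed response.
        if engaged:
            stage2 = rng.choice(["no_email", "light_touch_reminder"])
        else:
            stage2 = rng.choice(["plain_reminder", "culturally_targeted_reminder"])
        assignments[learner] = (stage1, engaged, stage2)
    return assignments

print(run_srt([f"learner_{i}" for i in range(5)]))
```

Because every learner contributes data at both decision points, a design like this supports comparisons of whole intervention sequences rather than just two one-shot options.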
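
Tan's “Direct Reply” versus “Star” tie definitions can be made concrete with a few lines of networkx. One common reading of these definitions is assumed here, not taken from the paper: a Direct Reply tie links a poster to the author of the specific post they answered, while a Star tie links every participant in a thread to the thread starter. The toy thread data is invented for illustration and is not the course dataset.

```python
# Toy comparison of "Direct Reply" and "Star" tie definitions on invented forum
# threads; the data and the exact definitions are assumptions for illustration.
import networkx as nx

# Each thread: (thread_starter, [(poster, author_of_post_replied_to), ...])
threads = [
    ("alice", [("bob", "alice"), ("carol", "bob"), ("dan", "alice")]),
    ("erin",  [("bob", "erin"), ("alice", "bob")]),
]

direct_reply = nx.DiGraph()  # edge: replier -> author of the post they answered
star = nx.DiGraph()          # edge: every replier in a thread -> thread starter

for starter, replies in threads:
    for poster, replied_to in replies:
        direct_reply.add_edge(poster, replied_to)
        if poster != starter:
            star.add_edge(poster, starter)

for name, g in [("Direct Reply", direct_reply), ("Star", star)]:
    print(name, "density:", round(nx.density(g), 3),
          "in-degree centrality:", nx.in_degree_centrality(g))
```

The same posts produce noticeably different networks under the two definitions, which is why measures such as connectedness and centralization can diverge across platforms.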
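
Li's opt-out simulation can be sketched in a few lines: drop a random fraction of the training data at increasing rates, refit the model, and track Cohen's kappa on a fixed test set. The synthetic data and logistic regression below stand in for the real grade-prediction features and models, so the printed numbers are only illustrative.

```python
# Sketch of simulating student opt-out: train on progressively smaller random
# subsets and measure Cohen's kappa on a held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real student features and grade labels.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for opt_out_rate in [0.0, 0.2, 0.4, 0.6, 0.8]:
    keep = rng.random(len(X_train)) >= opt_out_rate   # simulate random opt-out
    model = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"opt-out {opt_out_rate:.0%}: kappa = {kappa:.3f}")
```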