
An Egocentric Argument for Learning Analytics

Unfortunately, research tells us that we're not good at estimating how others will perceive what we say. That spells trouble for teachers. Not to worry, though: learning analytics provides a robust and objective solution to that problem.

Prof. Jason Pearson, PhD


Egocentrism

In 2005, Kruger et al. published a now well-known study in the Journal of Personality and Social Psychology titled “Egocentrism Over E-Mail: Can We Communicate as Well as We Think?”. It is one of the most entertaining accounts I have read, and it provides compelling evidence of just how bad we are at intuiting how others will perceive what we say.

Allow me to summarize one of the major findings…

Deep Thoughts

Do you recall the popular “Deep Thoughts by Jack Handey” segments on Saturday Night Live in the 90s? In these short, interstitial clips, comedian Jack Handey would read aloud the so-called “Deep Thought”, which was usually a humorous one-liner, as the text of it scrolled across the screen while easy-listening music played softly in the background. These proved to be important feedstock for the study I’d like to describe. Here is one such example.

I guess of all my uncles, I liked Uncle Caveman the best. We called him Uncle Caveman because he lived in a cave, and because sometimes he’d eat one of us. Later on we found out he was a bear. (Handey, 1992)

What Kruger and colleagues were studying was precisely how well such humour could be communicated over email. However, their results apply to communication in general, and where is communication more important than in teaching?

In one experiment, researchers divided participants into two groups: a “video” group and a control group. Within each group there were designated email senders and receivers. The senders were asked to email a series of “Deep Thoughts”, one by one, to the receivers. Before sending, though, each sender had to score how funny they thought the message was and how funny they thought the receiver would find it. Finally, the receiver would score the message on the same scale after reading it.

The key difference between the groups was that senders in the video group were able to watch the clip from when the “Deep Thought” originally aired on Saturday Night Live. They could therefore hear the comedian’s dry delivery of the text, the subtle background music, the juxtaposition of the text with the peaceful background imagery, and, most importantly, the live audience’s laughter.

The control group, however, had nothing more to refer to than the literal text of each “Deep Thought”.

Context Matters

Perhaps not surprisingly, senders in the video group wildly overestimated how funny the receivers would find the messages. Even though they correctly assumed that the receivers would not find the text as funny as they did (indicating that they were not entirely ignorant of the fact that they were conveying the message without its context), their judgment was significantly clouded by their own experience.

The context of their own exposure to the “Deep Thought” rendered them completely unable to relate to their audience.

In other words, we are not able to disentangle our own experience from how we think others will perceive our message.

This is a critically important lesson for us as teachers.

What I take away from this is that we, as teachers, are almost certainly misjudging what our students are actually hearing (or are prepared to hear). We have the benefit of years of educational experience, including undergraduate and graduate school. We have extensive postdoctoral and research experience. We’ve read numerous textbooks with differing editorial approaches to the same topic. We’ve had related conversations with our colleagues. We’ve likely even taught the same material many times previously.

The result is that we have the benefit of a rich underlying context framing the topic we’re trying to teach. We choose our words carefully, but it is a truly impossible task to accurately intuit how our students will hear and interpret those words. We intuitively and subconsciously expect that students will be able to “fill in the blanks”, so to speak, and glean a clear and complete picture of our lecture material from our carefully crafted description.

We cannot possibly offer our students the same context that we draw on as we teach, though.

We are not able to disentangle our own experience from how we think students will perceive our message.

What is perhaps even more alarming is that consciously correcting for this egocentric bias gets harder as our own experience with, and context for, the material grow deeper. As we broaden and deepen our own understanding of our science, it becomes more difficult for us to imagine the perspectives of our novice students and meet them at their level.

This is evident when we compare the video and control groups described above. Both groups were aware that the messages they were sending were meant to be funny, and so both exhibited some degree of egocentrism. However, the added context provided to the video group greatly exacerbated the extent to which their perceptions of their audience’s opinions were wrong. Additional context surrounding a message further clouds our ability to understand how those without that context will perceive our words.
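To make that comparison concrete, the miscalibration can be expressed as a simple “egocentrism gap”: the sender’s predicted rating minus the receiver’s actual rating, averaged within each condition. Here is a minimal sketch in Python; the ratings below are hypothetical placeholders chosen only to mirror the shape of the finding, not data from the study.

```python
# A minimal sketch of quantifying an "egocentrism gap" in a design like
# Kruger et al. (2005). All ratings below are hypothetical placeholders,
# NOT the study's data.

from statistics import mean

# Each pair: (sender's predicted rating for the receiver, receiver's actual rating)
video_group = [(8, 4), (7, 3), (9, 5), (8, 4)]    # senders saw the SNL clip
control_group = [(5, 4), (6, 4), (5, 3), (6, 5)]  # senders saw the text only

def egocentrism_gap(pairs):
    """Mean amount by which senders overestimate the receiver's rating."""
    return mean(predicted - actual for predicted, actual in pairs)

print(f"video group gap:   {egocentrism_gap(video_group):+.2f}")
print(f"control group gap: {egocentrism_gap(control_group):+.2f}")
```

The pattern to look for is the video group’s gap dwarfing the control group’s: the richer the sender’s private context, the larger the miscalibration.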

Of course, we are not entirely without altruism as instructors; far from it. In fact, I often speak with instructors from around the world, and “care for the student” is at the center of many of those conversations. How, then, can we decrease the extent of egocentrism in ourselves as educators?

I propose we use objectivity to combat this subjectivity. It is too late to rely on midterm or exam grades to confirm a posteriori whether or not students “got it”. A well-crafted test can indeed measure learning, but when it is an all-or-nothing, high-stakes exam, we introduce a host of other problems and really only measure the outcome. What we should be doing is using data to intervene and change the outcome.

Ideally, we would have our finger on the metaphorical pulse of student learning in real time. For that we require technology capable of making such measurements.

Solutions

Peer-to-Peer Learning

I would be remiss not to mention peer learning as an antidote to egocentrism. My sense is that the success of peer-to-peer learning (Beatty, Gerace, Leonard, & Dufresne, 2006; Caldwell, 2007; Crouch & Mazur, 2001; Newbury & Heiner, 2012; Wieman et al., 2009) is at least in part due to the manufactured absence of egocentrism in this model. Students, whose context is far closer to that of their peers, do not suffer from nearly the same extent of egocentric bias as their instructors do, and so having students teach each other (whenever possible) is a sound strategy for avoiding such issues. This is not without logistical challenges, though, and the fact that the quality of instruction cannot generally be vetted in such a scenario introduces another factor to consider carefully. Nonetheless, the research is clear on its effectiveness.

Learning Analytics

I believe in data-driven decision making. This is especially true amid growing evidence of the fallibility of our own intuition, coupled with the surging capacity of technology to solve ever more problems in our daily lives. Using AI-powered technology to track the performance profiles of my students gives me an objective view of their strengths and weaknesses. Without having to rely on anecdotal evidence from one-off conversations, or hoping that struggling students will come forward with questions, I can confidently assess in real time where my full cohort of students is exhibiting misconceptions.

This goes well beyond a grade book. This is a temporal assessment of student activity in their online learning platform, tracked from day 1. I’m not just talking about a simple record of whether or not a question was answered correctly; I’m talking about using innovative algorithms to classify student errors and condense them over an entire curriculum to map whether students are struggling with the concept of thermodynamics or with the use of significant figures when solving thermodynamics problems (for example).
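To illustrate the idea (and only the idea), here is a minimal sketch of condensing classified errors across a cohort. The error tags, records, threshold, and function names below are hypothetical placeholders of my own, not Stemble’s actual data model or algorithms.

```python
# A minimal sketch of condensing classified student errors across a curriculum.
# The tags, records, and threshold are hypothetical illustrations; they are
# NOT Stemble's actual data model or algorithms.

from collections import Counter

# Each record: (student_id, error_tag) produced by some upstream error classifier.
error_log = [
    ("s01", "concept:thermodynamics"),
    ("s02", "mechanics:significant_figures"),
    ("s02", "concept:thermodynamics"),
    ("s03", "mechanics:significant_figures"),
    ("s04", "mechanics:significant_figures"),
]

def struggling_topics(log, cohort_size, threshold=0.5):
    """Return tags exhibited by at least `threshold` of the cohort."""
    students_per_tag = Counter()
    # Deduplicate so each student counts once per tag, then tally per tag.
    for _, tag in {(student, tag) for student, tag in log}:
        students_per_tag[tag] += 1
    return {tag: n / cohort_size
            for tag, n in students_per_tag.items()
            if n / cohort_size >= threshold}

print(struggling_topics(error_log, cohort_size=4))
# e.g. {'mechanics:significant_figures': 0.75, 'concept:thermodynamics': 0.5}
```

The deduplication step is the deliberate design choice here: one prolific student repeating the same mistake should not masquerade as a cohort-wide misconception.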

That then becomes the data that I arm myself with as I prepare future lectures, help sessions, assignments, or tutorials. Knowing exactly where students are struggling is a superpower that lets me truly meet students where they are on their learning journey and discard any preconceived notions I might have had about which topics were “difficult” or “easy”. In reality, nothing in our classes is universally considered “easy” by everyone.

I can then confidently determine how my lectures, course materials, labs, assignments, and so on have resulted in students achieving (or not achieving) the learning outcomes I created for my course, and iterate accordingly, based on the data rather than my own egocentric ideas of what is “difficult” versus “easy”.

You can learn more about learning analytics within the Stemble platform by booking a demo.

A Final “Deep Thought”

I often hear academics lamenting how different modern students are today compared to years ago. It is obviously true that student demographics have changed over time; however, I never hear discussions about ourselves as instructors and how different we are as the years go by. I can’t help but wonder: what fraction of what we perceive as change in students is actually change within ourselves? It could be the outright change in the foundational context we bring with us to the classroom, or the more subtle relative change between us (who keep gaining experience year after year) and the students (whose age stays the same year after year as the previous class gives way to the next).

Perhaps our egocentrism clouds the relativistic reality of the relationship we have with our students.

Prof. Jason Pearson, PhD, Co-Founder
Jason Pearson is a Professor of Chemistry at the University of Prince Edward Island, where he leads an interdisciplinary research group in computational methods for chemistry. He is also co-founder of Stemble Learning, a data-driven teaching and learning platform for chemistry classes and laboratories. He is the winner of numerous SoTL awards, including the Reg Friesen Award from the Chem. Ed. Division of the CSC, the Hessian Award, the Janet Pottie Murray Award for Educational Leadership, and the Brightspace Award for Innovation in Teaching and Learning from the STLHE.