
Assessment is Uncomfortable

When I was a new teacher, I used what I now call the “oh crap” model of assessment: I would teach for several days, students and I would have great conversations, and then I would realize after a while “Oh crap! I haven’t put any grades in the gradebook!” I would hurriedly make a quiz or homework assignment, get it out there to students, put those grades in the gradebook, and rush back to teaching and those great conversations. 

Eventually I wised up (at least a bit) and started planning farther ahead. I stopped viewing assessment as an annoying responsibility and started treating it as an important part of teaching and learning. I thought about how I wanted to operationally define learning in my classes and designed assessments that gave students a chance to show what they knew and could do. But it took me too long to realize why I needed better classroom assessment habits. It should be so obvious: teachers need evidence about what students are learning. 

Source: https://www.flickr.com/photos/bgentry/3853508189/

As teachers we need to know what students are learning so that they don’t “hit that bridge” when test time comes (and, more importantly, when they need to use what they learn from our classes in their lives). Why did I ignore this obvious need as a young teacher? 

Because assessment is uncomfortable. 

I think people who have never been in charge of a classroom may not be able to relate to this teaching reality: teaching is intimidating and scary, especially as a new teacher. We don’t know if we’re “doing it right.” We may have gone through years of training, but there is no amount of training that can prepare you for the moment when you are flying solo and teaching your own classes. We desperately want to do right by our students. We care. We know there are important knowledge and skills students need to learn. Checking to see what they actually learned is important, but it’s also uncomfortable. We may end up staring at evidence that what we are doing isn’t working. It is much easier to just keep “teaching” instead of stopping to check to see what students are learning. Cognitive psychologists teach us about confirmation bias: we usually look for evidence that confirms our existing beliefs. Confirmation bias can lead us as teachers to keep cruising along, teaching in comfortable ways, instead of seeking out potentially uncomfortable evidence about what our students aren’t learning. 

Assessment Discomfort Influences How Students Study

This assessment discomfort can influence how we teach, but it also influences students. Two recent, fantastic books about how students study lay out the empirical evidence for study methods in elegant, well-supported arguments: Regan Gurung in Study Like a Champ and Daniel Willingham in Outsmart Your Brain explain the mountains of cognitive psychology evidence behind advice for students about how to study (short version: studying techniques that require more thinking result in more learning; these are both GREAT books every teacher should read!). But they also explain why students tend to avoid these effective studying methods: they are uncomfortable! It’s comforting to read a chapter, think we understood it, then re-read the chapter to make sure we “got it.” This re-reading gives us a warm, comforting feeling of “oh yeah, I remember that!” and we convince ourselves we learned something. In contrast, reading a chapter, closing the textbook, getting out a blank sheet of paper (or a new google doc), and writing down everything we remember from the chapter probably feels awful. This much more effective studying method is uncomfortable: we are likely to produce evidence of how little we learned from one reading of a chapter. That stinks. It’s uncomfortable. But we learn MUCH more from this uncomfortable studying technique than from the comfortable, feel-good re-reading of a chapter. 

Assessment Discomfort Influences Teacher Conversations

I love talking with teachers about their teaching. Usually these conversations involve lessons they just designed, compelling stories they love to use with students, and new resources they are excited to share. Rarely do these conversations involve assessment evidence (often only when I bring it up, and this suggestion is sometimes met with uncomfortable silence). Why? Because it’s a conversation killer. If someone is telling you about a new insight they are excited about and you immediately ask “Wait, what’s your evidence that’s true?” it tends to stop the conversation. My friend (and great teacher) Casey Swanson said one of the best pieces of advice he got was when an experienced teacher said “You talk a lot about teaching. How about we talk about learning for a while instead?” Teachers like to talk about teaching. Our first impulse usually isn’t to talk about evidence of student learning, but that (potentially uncomfortable) conversation might be more important than sharing another great lesson plan. 

I learned long ago to never include the word “assessment” in any conference proposal I write. Every time I used assessment in the title of a proposal it got rejected. We want our students to overcome the discomfort caused by testing themselves because we know they learn best when they figure out what they don’t know. As teachers, we should overcome our discomfort at testing our teaching. Talking about student learning, and evidence of student learning, may start the most important conversations. 

Demos for Teachers #5: Schema theory and memory

Source: https://sites.google.com/view/efratfurst/understanding-understanding

I learned about schema theory and memory from S. Chew’s article “Having Knowledge Is Not the Same as Using It” (thanks Steve! Download available here from ResearchGate) and I keep thinking about applications for teaching and learning. It’s my new favorite thing!

As Steve describes in the article, schema (mental rules we use to organize the world) influence how we encode (or don’t encode) new information about the world. After I read Steve’s article, I got curious about how I missed learning about schema theory, and I think I figured it out: for some reason, schema theory is not referenced in the context of memory in the AP Psychology curriculum (that’s my primary teaching context), but it is in the IB curriculum. This seems like an oversight! Schema theory is MUCH more useful and important than many of the details AP Psychology teachers have to teach (e.g. the difference between retroactive and proactive interference – ugh!)

In his article, Steve describes research from Bransford and Johnson (1972) that involves reading a paragraph aloud to participants and testing their memory of the information in the paragraph. Their study was designed to measure the impact of listeners having (or not having) a useful schema for the paragraph before they get to hear it. If you haven’t already jumped ahead to the article, you can try this for yourself: read the paragraph below and think about how confident you would feel if someone gave you a pop quiz about this information:

“The procedure is actually quite simple. First you arrange items into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise, you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications can easily arise. A mistake can be expensive as well. At first, the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then, one never can tell. After the procedure is completed one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will then have to be repeated. However, this is part of life.” (Bransford & Johnson, 1972, p. 722)

The confusion or lack of confidence you may be feeling right now is probably similar to what many of Bransford and Johnson’s participants felt. Some of the participants heard that paragraph without any previous information. But others got this crucial piece of information – a schema – before hearing the paragraph: “This paragraph is about washing clothes.” Knowing that schema before hearing the paragraph impacted what they learned:

Source: Chew, S. (2022). Having Knowledge Is Not the Same as Using It. The Teaching Professor, December 12, 2022

Steve does a great job in the article discussing aspects of this study that are important for teachers: participants who knew the topic of the paragraph before hearing it did better on both the comprehension and the recall measures in the study. But participants who got that information after they heard the paragraph did no better than participants who never learned the topic! There are schema students need BEFORE they go through a learning experience in order for that experience to be useful. If students don’t have the right schema, the learning experience might be a waste of time.

I talked about this experiment with science teachers recently and it started a great conversation. I first handed out a slip of paper to each teacher (pg. 2 of this handout). The teachers didn’t realize that half of them got the version of the instructions with the schema and the other half got the version without (I dealt off the top of the pile to half the room and off the bottom to the other half). Then I read the paragraph (pg. 1 of this handout) and planned to tell them whether they were in group A or group B and ask them to fill out this google form (we didn’t go through with the data collection because it would have been tricky to ask everyone to use devices at that moment).

The science teachers and I had a great discussion about the implications of schema theory for teaching and learning. They immediately understood the importance of this theory for their classrooms, but they pointed out a subtlety I hadn’t thought about before: they often want to give students the opportunity to experience a phenomenon as a “hook” toward the beginning of a lesson or unit, and then follow up with direct instruction, etc. about the details and terminology of the phenomenon. They noted that students absolutely need the required schema before they experience the phenomenon (so that they can think about it in useful ways), but they don’t need to know EVERYTHING before they get a chance to think about a concept. In fact, experiencing the phenomenon and processing it as a group with their teacher can help students cement the schema in their long term memory, which they can then recall and use as they dive into the details about the concept and learn new vocabulary and concepts. Go schema theory!

References:

  • Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for understanding: Some investigations of comprehension and recall. Journal of Verbal Learning & Verbal Behavior, 11(6), 717–726.
  • Chew, S. (2022). Having Knowledge Is Not the Same as Using It. The Teaching Professor, December 12, 2022.

Demos for teachers #4: Task switching

Source: https://www.learningscientists.org/blog/2017/7/28-1?rq=task+switching

UPDATED POST: my friend Steve Chew sent me information about this task switching demonstration, so I’m updating this post to include additional versions (thanks Steve!)

This demonstration is a fast, foolproof way to help students and teachers experience the cost of switching between cognitive tasks. Interrupting a task costs us time and cognitive effort, even when switching feels effortless and instant and “just takes a second.”

VERSION #1: I found this activity on the fabulous Learning Scientists blog (here’s a google doc version). The demonstration has a few steps:

  • Ask students to get out their phones and get to the stopwatch app (or pair students up and have them time each other on the tasks).
  • Task 1 = “Count as quickly as you can from 1 through 26. Try to time exactly how long it takes you – feel free to do this 2-3 times to make sure you get an accurate time. Write down your best time”
  • Task 2 = “Recite the alphabet from A to Z as quickly as you can. Again: try to time yourself precisely. Write down your best time.”
  • Task 3 = “Now you’re going to alternate those two tasks – counting from 1 to 26 and saying the alphabet. This will sound like ‘1, A, 2, B, 3, C, etc.’ Concentrate, don’t give up, and make sure you get all the way to 26 and Z! Time yourself and write down your time.”
  • Note: alternatively, on each of the three tasks you can ask students to average their times across a few trials – good data collection practice, and a review of measures of central tendency!

It will be obvious to everyone that task 3 is difficult! Try to encourage everyone to persevere and finish task 3 – switching between task 1 and task 2 takes cognitive effort and many people will be tempted to give up.

You may want to collect the times on each task and average them in order to compare those data with the graph at the top of the page from the Learning Scientists blog. But even before you analyze the data, everyone will notice that task 3 takes much more than double the time of tasks 1 and 2 combined.
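If you do collect the class’s times, a few lines of code (or a spreadsheet) can summarize them. Here’s a minimal sketch of my own in Python – the numbers below are made up, and it assumes you record each person’s best time in seconds for all three tasks:

    from statistics import mean

    # Hypothetical class data: each tuple is one student's best times in seconds
    # for (task 1, task 2, task 3)
    times = [
        (9.8, 8.5, 41.2),
        (11.0, 9.1, 38.6),
        (10.4, 8.9, 45.0),
    ]

    t1_avg = mean(t[0] for t in times)
    t2_avg = mean(t[1] for t in times)
    t3_avg = mean(t[2] for t in times)

    # The "cost" of switching: how much longer task 3 took than simply doing
    # task 1 and task 2 back to back
    switch_cost = t3_avg - (t1_avg + t2_avg)

    print(f"Task 1 average: {t1_avg:.1f} s")
    print(f"Task 2 average: {t2_avg:.1f} s")
    print(f"Task 3 average: {t3_avg:.1f} s")
    print(f"Extra time spent switching: {switch_cost:.1f} s")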

You may want to use this experience to start a discussion about why task 3 takes so much more time than tasks 1 and 2 combined. Teachers and students may already understand that the term “multitasking” is problematic: we can’t selectively attend to more than one cognitive task at a time in our working memory. Some people think they are better or worse multitaskers, but it’s more accurate to think about “task switching”: some of us may be faster at switching between tasks than others.

But no matter how quickly we learn to switch between tasks, there is always a cost! During task 3, we have to switch between tasks 1 and 2. Each of those “switches” takes a very short time, and then switching back takes another short amount of time. Adding up all the time it takes to make those “micro-switches” explains why task 3 takes so much time and so much cognitive effort. By themselves, tasks 1 and 2 are simple, like many of the cognitive tasks we do daily. It’s incredibly tempting to leave one of these simple tasks “just for a moment” to get another simple task done immediately. While I was writing this paragraph, I saw a Zoom notification on my phone that a school principal wanted help finding a report online, so I switched from my laptop to send her a quick email. The irony of switching tasks while writing a blog post about task switching exemplifies how tempting task switching is. It’s often unavoidable, but acknowledging the cost is important. Teachers can help students realize that switching their attention to their phone, even just for a moment, has a real cost in time and cognitive effort. That realization might help us resist some of these task switching temptations.

VERSION #2: I like the simplicity of the 1-26/A-Z version but this similar activity might also be useful (thanks to Pat O’Meara for finding this one! Adapted from this video: Try the Myth of Multitasking Exercise!)

  • Get out a sheet of paper and draw three lines across it horizontally so that it looks like this:
  • Tell students you’re going to time how long it takes us as a group to write the phrase “Task switching steals time” in the top row. Turn your paper over when you are done. Ready, Go!
  • Write on the board roughly how long it took the class (it doesn’t need to be exact). 
  • Next task: in the next row, write the numbers 1 through 23. Turn your paper over when you are done. Ready, Go!
  • Write on the board roughly how long it took the class (it doesn’t need to be exact). 
  • In the 3rd and 4th rows, you are going to alternate between letters and numbers. In the 3rd row, you are going to write the same phrase one letter at a time. After you write the first letter in the 3rd row, you will write a 1 in the 4th row. You may want to demonstrate this for them on the board. Keep going until you have written all the letters of the phrase in the 3rd row and the numbers 1-23 in the 4th row. Write legibly! Ready, Go!
  • Write on the board roughly how long it took the group (it doesn’t need to be exact) – this last task will take them significantly longer! 

VERSION #3: Steve Chew figured out that this one is the original version! Steve found out that David Meyer (U. Michigan) developed this activity and used it in his classes around 2008. Dr. Meyer’s version is similar to version 1 above, but is more concise:

  • Partner students up, one time keeping and one participating, then switch roles
  • Task 1 = count from 1 to 10, then immediately say the alphabet out loud from A-J
  • Task 2 = alternate between counting and the alphabet – e.g. 1-A, 2-B, etc.
  • Data analysis: divide the time for task 2 by the time for task 1.
  • Ask students how many have a ratio of less than 1 (meaning that task 2 took less time than task 1) – there won’t be any!
  • Gather some “ratio” data from students (either from all students or a sample of students, anonymously if needed). Point out that if a ratio is 2, that means it took twice as long to do task 2 as task 1. That is the cost of task switching! (A quick way to crunch these ratios is sketched below.)
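Here’s a minimal sketch (my addition, not part of Dr. Meyer’s materials) for crunching those ratios, assuming each pair reports their task 1 and task 2 times in seconds – the numbers are made up:

    from statistics import mean

    # Hypothetical pairs: (time for task 1, time for task 2) in seconds
    pairs = [(6.2, 14.5), (5.8, 11.9), (7.1, 16.3), (6.5, 13.2)]

    # Ratio for each pair: task 2 time divided by task 1 time
    ratios = [t2 / t1 for t1, t2 in pairs]

    print("Individual ratios:", [round(r, 2) for r in ratios])
    print("Average ratio:", round(mean(ratios), 2))
    # A ratio below 1 would mean task 2 was faster than task 1 - expect zero of these
    print("Ratios below 1:", sum(r < 1 for r in ratios))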

Steve included this graphic from Dr. Meyer’s class, which might be a great way to extend this demonstration into a discussion of the cost of task switching during studying!

UPDATE: Peps McCrea discussed the cost of task switching in his most recent “evidence snacks” email newsletter: “Interruptions Leave a Wake.” He shares this article at the end – useful research about the task-switching cost of classroom interruptions like announcements over the intercom, etc. Thanks Peps!

Demos for Teachers #3: Deep Processing

This is my favorite cognitive psychology/teaching and learning demonstration because the resulting data are evidence of possibly the most important lesson we should take away from memory theory. As Daniel Willingham said: “Memory is the residue of thought.” One of the most important ideas we should build learning experiences on is this: the more deeply we think about something, the more likely we are to encode those thoughts into long term memory.

This is a foundational idea, but we often forget it. As students, it’s tempting to use study methods that feel good and are less effortful, but effortful studying is more likely to involve deep processing and increased learning. As teachers, it’s easy to fall prey to the curse of knowledge and fail to attend to the cognitive work students need to do to learn.

In about 15-30 minutes, this depth of processing demonstration can help “prove” the learning value of deep processing by gathering data “live” in a class or professional learning session. Here’s a more complete write-up of the activity – Depth of Processing

  • Look through the slides for the activity to make sure the directions make sense, and make sure everyone has a way to write down responses (paper/pencil or digital response)
  • By the end of the activity, everyone will have a list of words they remember from the 20-word “A/B” list, and they will have “scored” how many A words and B words they remembered.
  • Ask participants to report their number of A and B words somehow (recording the distributions on a whiteboard or google form).
  • At this point it should be pretty obvious that participants remembered more B words than A words (usually about 2 more words on average – see the quick tallying sketch below).
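If you want to tally the class results rather than just eyeball them, here’s a minimal sketch of my own, assuming each participant reports how many A words and how many B words they recalled – the numbers are made up:

    from statistics import mean

    # Hypothetical reports: (A words recalled, B words recalled) per participant
    reports = [(4, 6), (3, 6), (5, 7), (4, 5), (2, 5)]

    a_avg = mean(a for a, b in reports)
    b_avg = mean(b for a, b in reports)

    print(f"Average A words recalled (shallow processing): {a_avg:.1f}")
    print(f"Average B words recalled (deep processing): {b_avg:.1f}")
    print(f"Deep processing advantage: {b_avg - a_avg:.1f} words")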

As you discuss with participants why we remember more B words than A words, the point of the activity should emerge out of the data: we remember more B words because the activity requires participants to deeply process B words, while A words are only shallowly processed.

When I get to discuss this activity with teachers and students, we often gradually arrive at a simultaneously mundane and profound realization: the “harder” we think about something, the more likely we are to learn it. While this shouldn’t be a surprising conclusion, it starts great conversations about studying and teaching. If we accept this basic, foundational fact, then one of our most important jobs as students becomes figuring out how to deeply process what we need to learn – how to pick study methods that “force” us to deeply process information. And our job as teachers becomes crafting learning experiences that increase the chances that students will do the deep processing cognitive work that is likely to result in encoding to long term memory.


Demos for teachers #2: Close your Eyes…

Image source: the great Oliver Caviglioli https://www.olicav.com/#/posters/

This is the demonstration I keep in my “back pocket” and break out when I cover a class for another teacher. It doesn’t require any equipment and it’s bombproof: it works every time, and you can use it to start discussions about hypothesis testing, or just to get students cognitively engaged in thinking about how their own memory systems work.

The demonstration can also start super useful discussions with students about how they might do more useful cognitive work in class, and with teachers about how they might communicate ideas with students more effectively. It is a replication (sorta) of research done by Alan Baddeley. See the more complete write-up of the demonstration here – Baddeley’s Three Systems of Working Memory. Here’s a brief outline:

  • Ask students to close their eyes.
  • Instruct them to mentally count (don’t count out loud) the number of windows in the place where they live. They can open their eyes when they are done.
  • Ask them to close their eyes again and say “Please count the number of words in the sentence that I just said.” Repeat that sentence a couple of times.
  • As they try to do that task, you might see them counting on their fingers. That’s a good sign.
  • When most of them are done, ask them to open their eyes.
  • Ask “How many of you used your fingers when you counted the windows?” One or two people might raise their hands (often no one will).
  • Ask “How many of you used your fingers when you counted the words?” Almost everyone will raise their hands.
  • Start a discussion about why that happened. These are two very similar mental counting tasks. Why did almost all of us use our fingers on the word task but not the windows task?

At this point you can list proposed explanations (hypotheses) on the board and alter the task to test them one by one, if you have time. Some explanations may get close to Baddeley’s and other cognitive psychologists’ findings: it turns out that working memory is a complex system with important “moving parts.” I like to think of these aspects of working memory as a boss and two employees (the great graphic by O. Caviglioli at the top of this blog post might help visualize these employees):

  • The boss is the “central executive” – this part of working memory monitors incoming information/stimuli and figures out what to do with it.
  • The visuospatial sketchpad employee: deals with images
  • The phonological loop employee: deals with numbers or words.

The specialties of your two employees explain this strange difference between these mental counting tasks. During the “count the windows” task, the central executive tells the visuospatial sketchpad employee to picture each window, and the phonological loop employee counts each one. Easy! No fingers needed! BUT during the “count the words” task, the central executive needs someone to repeat the sentence that was just said, since the sentence is made up of words. The phonological loop employee is perfect for that task. But now the central executive has a problem: the visuospatial sketchpad employee can’t count! So almost all of us need to use our fingers to count the words in the sentence. (Note: if you’d like to read more about this research, check out dual coding theory.)

Cool, right? But it’s more than just cool: this detail about how our working memory operates implies some potentially important lessons for students and teachers:

Implications for studying: since we can’t concentrate on words coming in from two different sources at the same time, we may want to be careful about taxing our “cognitive load” for incoming verbal information. This may be why many of us find it difficult to read or process words while there is music with lyrics playing in the background. But the potentially more important implication for studying is that we may want to use graphics or diagrams as we study and try to make sense of words in our notes or a textbook. We can use our visuospatial sketchpad (images/diagrams) and our phonological loop (words and numbers) simultaneously to deeply process meanings and make them easier to recall later. Check out this Learning Scientists blog post for great “dual coding and studying” ideas.

Implications for teaching: After reading about dual coding theory, I realized to my horror that one of my teaching habits was a BAD idea: I used to project slides with a bunch of words on them, and I would then talk over the slide, adding examples of the concept, elaborating on the ideas, etc. I thought I was helping, but what I was probably doing for most students was overloading their phonological loops. Many students were probably trying to process the words on the slide (in their working memory, using their phonological loop) AND trying to understand what I was saying at the same time, overloading the same working memory system. Oops. Now I try to include only a few words on a slide (or an image) and then talk through the concept, so that students’ phonological loops and working memories stand a better chance of processing the concept we’re discussing. Related idea: Just add blank slides.

Demos for teachers #1: Tappers and listeners

I love talking with teachers and administrators about cognitive psychology research and how these findings relate to teaching, learning, and assessment. One effective way to start these discussions is to use a “demonstration” – reproducing (at least in a limited way) a key study related to a cognitive psychology concept, and talking about what the data mean about how we think and learn.

This first demonstration – Tappers and Listeners – illustrates the overconfidence effect. I have a more complete write-up of the demonstration here, but here’s a brief summary:

  • Students work in pairs – one student is the tapper and the other is the listener.
  • Tappers think of a very common song. They get a minute to communicate the song to the listener by tapping the rhythm.
  • Before the task, tappers estimate the % chance the listener will guess the song. Record each estimate.
  • After the task, calculate the % of listeners who successfully guessed the song.
  • The resulting data will most likely show evidence of overconfidence: the tappers’ predictions of success will be significantly higher than the actual success rate. When I use this demonstration with large enough groups (15+), the tappers are usually about 60-80% confident in their success, and they are only about 30-50% successful. (A quick sketch for summarizing the class data follows below.)
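Here’s a minimal sketch (my addition) for summarizing the class data, assuming you record each tapper’s predicted % chance of success and whether their listener actually guessed the song – the data below are made up:

    from statistics import mean

    # Hypothetical data: (tapper's predicted % chance, did the listener guess it?)
    results = [(80, False), (70, True), (90, False), (60, False), (75, True)]

    predicted = mean(p for p, _ in results)                 # average tapper confidence
    actual = 100 * mean(guessed for _, guessed in results)  # actual success rate (%)

    print(f"Average predicted success: {predicted:.0f}%")
    print(f"Actual success rate: {actual:.0f}%")
    print(f"Overconfidence gap: {predicted - actual:.0f} percentage points")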

This demonstration is a mini-replication of E. L. Newton’s dissertation research. Generating and seeing data on the overconfidence effect “live” with a group of teachers is a great way to start discussions about how overconfidence may impact teaching and learning:

Overconfidence and studying: students are often overconfident “study-ers.” Many students use less effective study methods (like re-reading notes or a section of a textbook repeatedly) and are (over) confident that the study method will help them recall the material. This overconfidence may prevent them from trying more effective study methods (like free recall/retrieval practice). (Reference: Roediger and Karpicke, 2006.)

Overconfidence and teaching: this demonstration can also lead to a potentially important conversation about teaching. The overconfidence effect and this demonstration support the conclusion that humans are overconfident about many of our predictions. Teachers are human (last time we checked), so we should probably admit that the overconfidence effect leads us to some inaccurate estimates, including our perceptions about whether students are learning what we intend. If teachers are, in some sense, “tappers,” and if tappers are overconfident, then we should remember that we likely overestimate how many students are “hearing the song” we are tapping. If we “go with our gut” about whether a class is “with us,” we may be fooling ourselves. This admission might lead to a discussion about the importance of quick ways of checking for understanding from all students.

References:

Newton, E. L. (1990). The rocky road from actions to intentions (Doctoral dissertation).

Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255. https://doi.org/10.1111/j.1467-9280.2006.01693.x

Changing my mind – grading practices

T. Guskey is one of my favorite grading authors/thinkers/researchers. His advice is clear, backed up by research, and useful for teachers. So I got interested when I saw that he co-authored a paper about standards based grading: Is Standards-Based Grading Effective?

In this article, Guskey and Link address several important topics about standards based grading (SBG) including:

  • What is Standards Based Grading? Their 3 essential criteria make sense to me. SBG = reporting on specific standards rather than a single course grade, reporting using a limited # of performance categories (rather than a % scale), and reporting academic achievement separate from behavioral categories.
  • Does SBG improve achievement? Love this quote: “No grading system by itself improves student learning,” but “better grading can provide clearer and more accurate information on students’ learning that can then be used as a basis for making improvements” (pg. 3).
  • What does SBG say about classroom assessment practices? They include great reminders that classroom assessments always need to be aligned well with instruction, in SBG and non-SBG contexts. I like the distinction they make between SBG and some more general grading practices and advice that some people associate with SBG.

One specific piece of advice in the article contradicted something I thought I knew: I assumed that if a school/district wanted to move toward SBG, they needed to change their report card format at the same time. My thinking was that there needed to be a place to report standards based grades in a clear way, and that meant schools/districts needed to change the report card format from a traditional A-F, percentage-based report card to a more standards based report card.

Guskey and Link directly challenge my assumption! “Although seemingly counterintuitive, reporting practices should be the last thing to change with SBG… teachers cannot report on standards until everything else aligns with them… the curricular, instructional and assessment practices used to generate these grades must be clearly addressed alongside grading and reporting policies to affect student performance. Problems arise when educators change grading practices and move ahead with SBG and reporting without addressing the critical components of curriculum, instruction, and assessment first.” (pg. 5)

So I need to keep thinking about this! When I talk with teachers who are interested in SBG, our conversations will be more about “addressing the critical components of curriculum, instruction, and assessment first,” rather than worrying about limitations of the current report card format!

Too short? Or just right? 

Lately I’m wondering about what kinds of resources might be most useful as I talk with teachers about the science of learning and memory theory. Specifically: is shorter sometimes better? 

There are plenty of great, longer explanations of how memory research relates to teaching and learning. I collected some of my favorites in this “choose your own adventure” style slide deck. That collection includes the books that started the ball rolling for me: Make It Stick and How We Learn. Reading those books flipped on a light bulb for me: I taught memory theory every year in my psychology classes, but until I read those books, I didn’t make the connection between what I was teaching and how I was teaching. 

Those books got me started reading everything I could get my hands on about memory theory, cognitive psychology, and teaching and learning. The more you look, the more you’ll find. My most recent favorite finds are Peps McCrea’s great, tiny books (like Memorable Teaching), articles about cognitive load theory (like Cognitive load theory: Research that teachers really need to understand), and researchers who explicitly try to tie the big ideas from cognitive psych to their teaching (like Stephen Chew’s The world’s best 24 minute videos for teachers and administrators – my title, not his!) 

But not many teachers have the time to dive into the literature like this. Lately I am getting interested in whether some VERY short presentations of the science of learning might be more effective. I stumbled across these two articles and I’m thinking about asking some teachers to read them and talk about how the ideas might relate to their teaching: 

10 Things To Know About Teaching And Learning by Alex Quigley. This blog post is simply a list of 10 statements about teaching and learning with references to psychological principles. It doesn’t get much clearer than listing a statement like “We need to make helpful connections between the complex stuff” and pointing readers toward “schema building” (the terms “deep processing” or “semantic encoding” could have worked here too). 

How Cognitive Load Theory Changed my Teaching by Zach Groshell. I love how Z. Groshell shows how he uses cognitive load theory to make practical decisions as a teacher in very short paragraphs. There are good book-length resources that use this same idea (like Small Teaching and How I Wish I’d Taught Maths) but these few paragraphs show one teacher’s rationale clearly. 

One thing I worry about: teaching and learning are so complex and contextual that any teacher could find exceptions to the “rules” or statements in these short blog posts. It would be easy to quibble with the concise conclusions in these posts. Books and longer articles have space to discuss nuances and details about cognitive psychology. I wonder: are these short summaries more prone to “lethal mutations” of what we should learn from research? 

I’m excited to see whether these very short summaries of cognitive psych. research start useful conversations with teachers. I’ll report back, and if any of you have tried anything similar, I’d love to hear about it.  

PREP: A simple plan for slides and space for students to write

Readers of this blog already know that I’m teaching again for the first time in 16 years. As I was thinking about how to organize the AP Research class, I think I figured out a simple, flexible scheme for organizing class materials and providing space for students to write about what they are thinking.

Context: AP Research is the second course in the “AP Capstone” experience the College Board developed for high school juniors and seniors. Juniors take a class called AP Seminar that familiarizes them with the idea of scholarly research and thinking. I get to teach 4 seniors who took the AP Seminar class last year. The goal of AP Research is to help students choose a research topic, do a literature review, identify a gap in the literature, design a research question, choose a research methodology for that question, gather data, analyze those data, and form conclusions. Students write this all up as a formal 5,000ish word journal article. 

Here’s the dilemma I faced as I set up the class: 

  • This is a brand new class in my district, so I didn’t have any ideas or materials to inherit from colleagues
  • The College Board has some materials and I attended a good training workshop over the summer. The instructor shared some useful materials he uses in his class, but there was still a LOT of day to day planning left to do (and I foolishly/stubbornly like to make my own stuff instead of using materials other people give me). 
  • I knew I was going to be planning the course as I was teaching it. I wanted to leave room and flexibility for bringing in new topics the class brought up during discussions, and for brainstorms that occurred on my drive to school (happens often!). 
  • One of the requirements from the College Board for the course is a way for students to keep track of how their thinking changes as they develop their project. The College Board calls this the “PREP” document. 

Here’s the solution I came up with: 

  • I organized my syllabus around fairly general weekly goals. This helped me know what the big goals were for each week, but left me plenty of room to “play” with day by day planning as we tried to meet those goals. 
  • I decided to make one big set of google slides for each semester. Each slide is labeled with the corresponding week, and I list dates in the notes section of the slide (to give myself an idea of my pacing when I go back to these slides later). Here’s the first 3ish weeks of slides from the class. The whole set of slides got REALLY long, but it worked well. I liked having everything in one spot. 
  • I made a PREP document that includes space for students to write about what we are doing in class every week. I put bookmarks on pg. 1 to each week to make it easy to navigate the document. Each week has space for 6 “tasks.” As I planned how the week was going to go on the slides, I inserted instructions on a slide for students to go to a specific task in PREP and write about what they are thinking. They quickly got used to going to the PREP document and writing in a specific “task space.” Here’s an example: 

Students knew they were supposed to go to their copy of the PREP document and write their answers to that question in this space: 

  • Whenever I wanted students to write about what they were thinking (even unplanned, spontaneous times during a discussion, etc.) I could quickly ask them to jump to their PREP document and write in the next task “slot.” I then modified the relevant slide to note what writing prompt or discussion topic they were writing about. 

Advantages of this system: 

  • VERY flexible. I can set up my slides for the week and include writing tasks on specific slides, and students have a handy-dandy place for that writing. If my plans change, all I have to do is change the slides. I have 6 open-ended places for students to write every week, and so far that’s been plenty. 
  • All my slides for a semester are in one place (I started a new set of slides for 2nd semester just because the google slide set was pushing 500 slides and that seemed a bit big). 
  • If a student is gone, I can tell them what slides to look through for the day(s) they missed and they have the writing tasks already listed. 
  • If I need them to write something that is “bigger” than what is convenient in the PREP document, I ask them to just insert a link to a separate google doc/spreadsheet/etc. in that space in the PREP document. 
  • At the end of the course, I plan on asking students to make their own copy of the slides, and put that copy and their PREP document in a google drive folder they will always have access to. With these 2 documents, they will have access to everything we did in the course. 

Limitations/disadvantages: 

  • When I want to look at student writing, I open an individual student’s PREP document and click on the bookmark for the relevant week. That’s not a big deal for me: I keep a tab open with the folder that has all my students’ PREP documents, and the system works pretty efficiently, BUT I only have 4 students. In a big class, this might get tedious? Not sure. 
  • I get the sense that this system might feel “informal” to some of my students sometimes. It’s possible that if I set up individual, separate documents for them to write in they might see them as more “real” assignments and they might write more (or write more formally?) Not sure. 

I like the “flow” of the big set of slides and connections to flexible spaces for students to write about what they are thinking. I’ll tweak this system the next time I get to teach, but I think I’ll keep the overall idea of this slides/PREP document pairing in place for my next teaching experience. 

Teaching Matters (a PsychSessions Podcast) – Questions and (Some) Answers about the AP Psych and IB Psych Exams

In this first episode of season 2 – “Standardized Testing: Questions and (Some) Answers AP Psych and IB Psych Exams” – Eric and I talk about the AP Psychology and IB Psychology exams and how they were impacted by Covid (the podcast episode was posted in September 2021). Our conversation starts with specifics about how Covid impacted the Spring 2021 AP and IB (International Baccalaureate) exams and how the College Board and the IB responded differently to the pandemic. We end up talking about what getting college credit for introductory psychology means in terms of a test score (AP or IB). (Note: if you’d like to know more about the IB Psychology program, listen to this episode with our special guest, fantastic high school psychology teacher Casey Swanson: “International Baccalaureate Psychology with Special Guest Casey Swanson.”) 

I didn’t think of this connection during our conversation, but the discussion about test scores and credit connects with another topic we frequently discuss: grades. Eric and I end up talking about grades often (e.g. this episode). Both the AP Psychology and IB Psychology tests face a big challenge: their goal is to assess knowledge and skills students learn during college introductory psychology courses. The amount of knowledge and the number of skills addressed in college psychology courses are way too broad to be adequately assessed on one test, so test developers have to sample the knowledge/skills, choosing some ideas that get represented on the test and many others that do not. This sampling creates one of the challenges: can colleges be confident that the knowledge and skills sampled on an AP or IB exam align with what they want students to learn in their introductory psychology courses? 

But there’s another level of challenge that we didn’t talk about on the podcast: the test score. Both the AP and IB psychology tests include different parts: multiple choice, short/long answer essays, data/source analysis, etc. But the results from each of those different parts get smooshed into one overall composite score, and that composite score is what colleges use to determine if a student should be granted credit for the introductory course. This smoosh doesn’t have to happen! Since there are different parts of the test, those different parts could get scored separately – students could get multiple scores from a test, each score representing their achievement on a different element of the test. This scoring change would enable colleges to look at multiple scores when making decisions about what credit to grant, and these decisions could be better informed by more specific information about what students know and can do. This issue is related to the discussion about traditional grading practices (one A-F grade for a whole class) vs. “standards based” grading practices (multiple grades based on different skills or bodies of knowledge). 
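To make the “smoosh” concrete, here’s a rough sketch of the two reporting approaches. The section names, weights, and scores are entirely hypothetical (my illustration, not the actual AP or IB scoring formulas):

    # A student's hypothetical results, as proportion correct per section
    section_scores = {
        "multiple choice": 0.85,
        "essays": 0.55,
        "data/source analysis": 0.70,
    }

    # Hypothetical weights for combining sections into one composite
    weights = {
        "multiple choice": 0.6,
        "essays": 0.3,
        "data/source analysis": 0.1,
    }

    # The "smoosh": one weighted composite score is all colleges ever see
    composite = sum(section_scores[s] * weights[s] for s in section_scores)
    print(f"Single composite score: {composite:.2f}")

    # The "non-smoosh": report each part separately so colleges can weigh
    # the skills they care about most
    for section, score in section_scores.items():
        print(f"{section}: {score:.0%}")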

Example: 

Here’s an overly simplistic diagram of the current “smoosh it all into one score” system. (I’m most familiar with the AP Psychology test, so this diagram probably matches that test best). 

Here’s one possible “non-smoosh” system: 

There would be dozens (hundreds?) of decisions and complications involved in implementing this kind of change, but it might be a useful thought experiment. Why reduce the complexity of an AP or IB test to a single score? Why ask colleges to make a complex decision, like whether to grant credit for an introductory class, based on a single composite grade? Psychologists know that it is important to measure variables carefully (to create accurate operational definitions). Measuring something as complex as the knowledge and skills involved in an introductory college class deserves something more than a single composite score.