Monthly Archives: July 2019

Notes from the LSi Formative Assessment Conference – Day 3

Last day! And it was really a half day: one breakout session first thing in the morning and then the final keynote speaker.

Those of you who know me won’t be surprised that I filled my half day with Dylan Wiliam 🙂 The morning breakout session was a Q&A. Each table generated one question to ask him, and he spent the whole 1.5 hours answering these 5-10 questions.

I asked him two questions:

Question #1: Let’s say a teacher wants to change ONE teaching habit this year (knowing that habit change is difficult and takes a long time to do), what one habit related to formative assessment would you advise that teacher to change? D. Wiliam’s Answer: [pause…pause…] wait time. [laughter from the crowd]. He explained that changing wait time is hard to do and potentially very useful. He clarified something I’ve never heard before about wait time: we think of the wait time that occurs after we ask a question (he called this “thinking time”), but there’s also wait time that occurs after a student answers (he called this “elaboration time”). He recommended that we focus on this second aspect of wait time. When a student responds, wait before you indicate “correctness.” Have that student keep thinking. Get other students to comment on that answer. Get everyone thinking about that answer (deep processing!). He cited a study by M. Rowe “Slowing down is a way of Speeding up” that I need to find and read.

Question #2: For this question, I actually… lied a bit. Yep, I lied to Dylan Wiliam. Please don’t tell him. I told him that I was a former high school psychology teacher (true) and that I was working with a group of other psychology teachers (partial lie – I’m working with a bunch of former students of mine who now work as teacher coaches, district office admin, etc.) to develop professional development materials based on the 3-box memory model – I asked if he knew of other work like this and if using this model seemed like a good idea (note: I “lied” b/c I didn’t want to spend the group’s time listening to me explain how this group got together, what our roles are, etc.) D. Wiliam’s Answer: he said he thought it was a good idea to use the 3-box model and that the tricky decisions will be how far to go into the details. He cited a potential point of disagreement between Bjork (desirable difficulties) and Sweller (cognitive load theory). He advised us to make a “clean” version of the model and use that as a place to hang concepts and research onto (and that’s our plan!). I got the idea to ask Wiliam this question because of a slide he used very briefly during one of his presentations:

It’s great to get some “affirmation” from D. Wiliam that our plan to base a chunk of professional development on the 3-box memory model might be a good one! (I bet I’ll write about these plans on this blog in the future)

D. Wiliam’s Keynote: the last event of the conference was a keynote address called “Negotiating the Essential Tensions.” It was good, but… too ambitious? Wiliam tried to cover a LOT of ground, shared a LOT of research, and I took a LOT of notes. I wish he had focused on a few issues and gone deeper. But: he’s Dylan Freaking Wiliam, and it was a fascinating, engaging presentation!

One aspect of the keynote that will stick with me: his cautions about jumping to “sweeping” (and standardized) answers about grading. He used some theory, analysis, and research to eloquently review some of the very complex issues involved in grading. Highlights:

  • NO grading system is perfect, which means that every system involves trade-offs. If we’re not clear about what those trade-offs are and which ones we’re choosing, we’re not making informed choices about our grading system.
  • Educators have been worried about the validity of grades for a LONG time (see Dressel 1957).
  • There are important interaction effects between feedback and grading (Butler 1988).
  • Everything is contextual: there are realistic scenarios when feedback might be more useful if it violates some of the “rules” we learn about feedback: sometimes feedback might be more useful if it is LESS specific, LESS timely, and LESS constructive. Context = important. The purpose of feedback is to help students improve performance on a task they haven’t done yet, not the task they just completed. He says Hattie doesn’t make this point, and he should (there’s a Kluger and DeNisi meta-analysis that makes this point, but I haven’t found it yet).
  • The old phrase “You don’t fatten the pig by weighing it” is… wrong. Retrieval practice research indicates that students learn from tests. Frequent, low- or no-stakes testing is a really good idea (and might be one of the most powerful things we can do). (Roediger and Karpicke 2006; Bangert-Drowns, Kulik, and Kulik 1991)

Whew! It was a great conference, I’m glad I went, and I’m walking away with a bunch of ideas. Thanks to my district for supporting my travel, and thanks for reading all this!

Notes from the LSi Formative Assessment Conference – Day 2

I continue to be a lucky fellow! Day 2 of the conference started with a keynote from Dylan Wiliam, one of my education research/writer heroes. He used his keynote to highlight and expand on the themes he wrote about in Creating the Schools our Children Need.

If you haven’t read it yet, get to it! One of my favorite education books. In the book and his keynote talk, Wiliam walked us through research about why many current large scale education reforms (like hiring “smarter” teachers, firing “bad” teachers, funding voucher/charter schools, copying education systems of other countries, and offering more differentiated/personalized instruction) aren’t effective. The rest of the book and talk was devoted to what might work: investing in current teachers, promoting knowledge-rich curricula, and implementing effective formative assessment processes. Wiliam acknowledges that this is tough work (it involves changing habits, which is always tough) and argues (with evidence at every point) that teachers need choice, flexibility, chances to take small steps, and supportive accountability as they figure out how to integrate “short cycle” formative assessment habits in their teaching. If you ever get a chance to hear Dylan Wiliam present, do it! The keynote was filled with data, analysis, and clear thinking.

The rest of the morning and part of the afternoon were devoted to breakout sessions. I got to hear Jay McTighe talk about designing performance assessment tasks. He includes multiple resources on his website, and he’s worked with many (many!) districts. I liked hearing about his practical experiences, and he provided multiple intriguing examples of performance assessments. One thought/concern: I didn’t hear much during the presentation about research on performance assessments as an intervention, and I’d like to hear more. I think research on how effective different kinds of performance assessments are would help refine the idea quite a bit and help teachers/administrators figure out how to use it most effectively.

And a specific/related concern: there is a “debate” between folks who say they advocate “direct instruction” and “discovery/problem based/inquiry based/project based/etc.” learning. I know some teachers who identify as “project based” teachers. During his talk, McTighe talked about “authentic and inauthentic” tasks. I don’t think McTighe thinks about it this way, but I worry that some teachers/administrators see information about authentic tasks, performance, assessment, DoK level 3 and 4 tasks, etc. and they wrongly assume that these kinds of assessments and activities are “better” or more important than other kinds of activities and assessments. This is a generalization, but I think this “debate” is a false dichotomy: we shouldn’t be thinking about direct instruction or problem based learning, etc. Underneath it all, the reality of learning is that for every learning task, students need to be able to retrieve some skills and information from their long term memory and get it into their working memory in order to do the cognitive task we asked them to do. If those skills or information aren’t in their long term memory or they can’t retrieve them, then we need to help them with that, and that probably means some direct instruction. Despite the title, I think this article by Kirschner, Sweller, and Clark shows convincing evidence that we need to be thoughtful about what skills and knowledge students need before they try to tackle important “inquiry based” tasks.

Students absolutely need chances to use skills and knowledge in order to answer important questions, and problem based learning activities can be great ways to do that. But it’s not an either/or. It’s silly to talk about direct instruction or problem based learning being better or more important. I would like to hear J. McTighe talk about this underlying learning reality more.

Then I went to hear S. Brookhart talk about “Comprehensive and Balanced Assessment Systems,” a paper that was developed based on last year’s LSi Formative Assessment conference.

This paper is worth reading: Sue led us in a group discussion activity that challenged us to think comprehensively about our district’s assessment system. We examined how comprehensive (what it measures, at what levels, and where) and balanced (what data are returned from what assessments, appropriate for what purposes) our systems are, and we identified gaps. I think this activity would have been more effective if Sue had been able to model the analysis/discussion on an example, but it was a good discussion.

Ready for day 3 tomorrow!

Notes from the LSi Formative Assessment Conference – Day 1

Well. I took about 17 pages of notes about the action-packed day 1 of the LSi Formative Assessment conference, so this summary is going to be a bit… challenging, I think. Here we go:

Day 1 was technically a “pre-conference” event, but it felt like most of the folks attending the conference (about 300?) are here already. Tom Guskey led a day-long talk/discussion about grading practices, and it was wide-ranging and useful. I like Dr. Guskey’s writing and thinking, and his 5 hour presentation addressed many of the tricky topics involved in classroom grading.

(note: these are slides from Dr. Guskey’s website – I don’t think they are exactly the slides he used, but they are close)

Grading and Reporting Effective Policies and Practices
Developing Standards-based Grading & Reporting Systems

I liked Dr. Guskey’s realistic emphasis on the complexity of discussions about grading practices. He emphasized that these discussions are personal, important, emotional, and touch on many aspects of school and district “systems.” He recommended that schools/districts start by discussing the purpose of grades and think about the various ways we communicate that purpose: as you discuss purpose(s), think about how you communicate this in the gradebook, the report card, and the transcript. He didn’t say this, but it feels to me like we need to be honest with ourselves about how much “courage” (or “capital?”) we have at every level. If we want to talk about changes in grading, we need to ask ourselves: do we have the will and autonomy to implement changes in our online gradebook system? In our report card? In student transcripts? If so, cool (maybe). If not, maybe we should wait until we have the will and autonomy to do so.

At every stage in the discussion, Dr. Guskey emphasized that we need to start with the PURPOSE of grades. There are several possible purposes:

  1. Communicate to parents
  2. Provide info to students about proficiency
  3. Select groups of students for instruction
  4. Provide incentives for students
  5. Evaluate effectiveness of programs
  6. Document effort or responsibility 

Grades can’t meet all these purposes! There are trade-offs: if you want grades to accurately communicate about proficiency, your grades may not be able to evaluate the effectiveness of programs (you may not have the range/variability you need for that purpose) or document effort or responsibility. He recommended that we pick a few purposes and be honest about the trade-offs.

Dr. Guskey then led us through some of the advice in his excellent book On Your Mark. Short summary: percentage or points based grading systems (the most commonly used systems in secondary schools) are… limited. This one hits pretty close to home for me: our district uses a percentage based system, and we’re not changing any time soon for various reasons. His arguments are compelling and well supported. I think his recommendation – changing report cards to allow teachers to assign separate grades/marks for Academic Proficiency, Process (work/study habits), and Progress (growth over time) – makes a LOT of sense. And I don’t think my district will be able to do this any time soon. He ended this section of the talk with some great examples of how districts in Kentucky have been able to change their report cards to be more standards based and “humanized.” I admire their work. Example:

Here are T. Guskey’s 15 questions to ask when developing a standards based reporting system. Good advice, I say:

  1. What is the purpose of the report card?
  2. How often will the report card be completed? (parents want it more often, teachers want it less often)
  3. Will the report card include individual standards for each grade level, or strands applicable across grade levels? (recommendation: put strands on report card, and standards in the gradebook) 
  4. How many standards/strands will be included for each subject area or course?
  5. Will they be end of year or benchmark standards? (if you choose end of year, student performance won’t look good at the end of 1st quarter) 
  6. What product strands will be reported? (when you are developing rubrics, don’t worry about proficient – tell me what you expect at the top, Rob – whoa!) 
  7. What process strands will be reported? 
  8. Will progress be reported?
  9. How many performance levels will be reported per standard?
  10. How will levels be labeled?
  11. Will teacher comments be included? At the class and student level?
  12. How will information be arranged on the report?
  13. What are parents expected to do with the info? 
  14. What policies need to be revised in order to support the new report card? 
  15. How will parents/families be involved in revisiting the report card?

The day ended with a fabulous “dream team” panel: T. Guskey, D. Wiliam, S. Brookhart, and J. McTighe. Their discussion was wide ranging, fascinating, and too complex to summarize here, so I’ll just list some of my favorite ideas:

  • D. Wiliam: It’s easy to be critical of grading systems, but if you look at them from the inside, you understand why they work the way they do.
  • J. McTighe: Think about means and ends – practice is a means to an end. You need to be really clear about the end, even if the practice is different than the end (e.g., running through tires). Give them feedback that relates to the eventual goal, not that specific practice. Don’t give them feedback on how well they run through the tires; give them feedback that will help them run better in the game, based on what you saw as they ran through the tires.
  • D. Wiliam: When teachers observe other teachers, the 1st question is “how’d I do?” The best administrators respond “How well did you think you did?” If they know that, your job is done. The goal of feedback is to make itself redundant.
  • T. Guskey: be careful with the word potential. You don’t know what a student’s potential is. D. Wiliam: why be limited by your potential?
  • S. Brookhart: I would be happy if districts just took the current system and made grades more reflective of learning. If you can’t do SBG, just do what you can do well.
  • J. McTighe: what is WRONG with SBG? The grades are still based on grade level standards, and those grade level standards are norm referenced. Let’s do what they do in athletics or music – come up with a learning progression 

There’s a lot more to say, but I don’t think I’ll say it here. I’m excited for day 2!

“Live-blogging” – notes from the LSI Formative Assessment Conference

I’m a lucky fellow: I get to attend the 2019 LSI Formative Assessment Conference at the University of Maryland.

Some of my favorite researchers/educators/writers are speaking at the conference. D. Wiliam, S. Brookhart, and T. Guskey are excellent scholars and writers, and I think they consistently translate research into usable ideas for K-12 folks. I’m not as familiar with J. McTighe’s work yet (other than Understanding by Design), but I’m excited to learn from him.

I’d like to post more often to this blog, so I’m officially challenging myself to “live-blog” (almost) this conference: I’m going to post something to this blog every night based on what I learn at the conference. Hope it’s useful for someone out there, and wish me luck!