Student Diagnostic Activities

The Challenge

A perennial issue for STEM writing instructors is undergraduates' tendency to:

  • Overestimate their scientific writing skills;
  • Incorrectly assume that if they understand an explanation of how to complete a writing task, they can execute it accurately (i.e., "I understand" vs. "I can do"); and
  • Underestimate the time and effort needed to write well.

While it is tempting to attribute these tendencies to a lack of preparation or low motivation, studies of cognitive biases suggest that novice writers lack the information and direct experience needed to gauge their own abilities, or the effort a task requires, accurately. This knowledge gap, combined with their preconceptions and cognitive biases, affects their writing performance as much as (and likely more than) their personal motivation and prior preparation.

Students need a more accurate view of their current writing skill level BEFORE they work on high-stakes assignments. If their first meaningful feedback comes on a high-stakes assignment, they are more likely to perform poorly, become discouraged, and give up rather than try to improve.

Most teachers consider students' past experience and current skills as they design their instructional strategy. What we have seen is that:

  • The observations and data that instructors use are not shared transparently with their students; and
  • Instructors' responses to an entire class do not give individual students sufficient information about their own strengths and weaknesses.

 

Our Approach

SEM's strategy is to make the process of data collection and sharing (what we call diagnostics) a more explicit component of writing training by:

  • Embedding data collection activities in other low-stakes training and practice assignments;
  • Sharing the outcomes data with students both individually AND as a group; and
  • Using the data to identify students’ most common knowledge gaps, then adjusting our instructional strategy to match their current needs.

Specific activities with embedded diagnostics are described further on the Undergraduate Training Activities page. We also use several external sources of diagnostic data.

Collecting small, frequent data samples gives us actionable insight into students' skills relative to our expectations earlier in the semester, and lets us track whether a cohort is progressing as planned. The trade-off is that this approach takes considerable time and effort: the data must be collated, summarized for instructors, and reported back to each student individually.
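
As a rough illustration, the sketch below (written in R, the same language we plan to use for other diagnostics tools) shows one way small diagnostic samples might be collated into a cohort summary for instructors and individual reports for students. The items, column names, and 1-4 scoring scale are hypothetical placeholders, not our actual instruments.

    library(dplyr)

    # Hypothetical low-stakes diagnostic scores: one row per student per item
    scores <- data.frame(
      student = c("A", "A", "B", "B", "C", "C"),
      item    = rep(c("topic_sentence", "data_citation"), times = 3),
      score   = c(3, 2, 4, 3, 2, 2)   # e.g., 1 = missing, 4 = proficient
    )

    # Cohort-level summary shared with instructors
    cohort_summary <- scores %>%
      group_by(item) %>%
      summarise(mean_score = mean(score), n_students = n())

    # Individual reports: each student's score shown next to the cohort mean
    individual_reports <- scores %>%
      left_join(cohort_summary, by = "item") %>%
      arrange(student, item)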

 

Lessons Learned

One of our main barriers to implementation has been instructors' (especially GTAs') reluctance to collect and use diagnostic data. We think that some resistance comes from an outdated view that STEM teaching is fundamentally different from research, and that systematic data collection or evaluation does not produce useful insights. As the influence of DBER/SOTL/SFES research grows, we are optimistic that more instructors will incorporate diagnostic data sources into their teaching practice, and share those data with their students.

A related challenge has been helping instructors learn to think about student assignments as a source of aggregate data about a whole cohort, not just about individual students.

 

Looking Ahead

We hope to develop a dashboard application in R Shiny that can provide both students and instructors with better summaries of the diagnostic data we collect; contact us if you are interested in collaborating on that project. Check the list of To Do items for the Diagnostics project for other potential needs.
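
To make that idea more concrete, here is a minimal sketch of the kind of Shiny app we have in mind: a student selects their name and sees their own scores plotted against the cohort average. The skill categories, scores, and layout are hypothetical placeholders, not a prototype of the actual dashboard.

    library(shiny)
    library(dplyr)
    library(ggplot2)

    # Hypothetical diagnostic scores: one row per student per skill area
    diagnostics <- data.frame(
      student = rep(c("A", "B", "C"), each = 3),
      skill   = rep(c("structure", "citations", "clarity"), times = 3),
      score   = c(3, 2, 4, 4, 3, 3, 2, 4, 3)
    )

    ui <- fluidPage(
      titlePanel("Writing Diagnostics Dashboard (sketch)"),
      selectInput("student", "Student:", choices = unique(diagnostics$student)),
      plotOutput("comparison")
    )

    server <- function(input, output) {
      output$comparison <- renderPlot({
        cohort <- diagnostics %>%
          group_by(skill) %>%
          summarise(cohort_mean = mean(score))
        individual <- filter(diagnostics, student == input$student)
        # Grey bars show the cohort average; points show the selected student
        ggplot(cohort, aes(x = skill, y = cohort_mean)) +
          geom_col(fill = "grey80") +
          geom_point(data = individual, aes(y = score), size = 3) +
          labs(x = "Skill area", y = "Mean score")
      })
    }

    shinyApp(ui = ui, server = server)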

Our initial plan for countering the challenges described under Lessons Learned is to incorporate more training materials from mixed methods research. Look under Where to Learn More for starting resources.

As always, if you have a diagnostic tool or strategy that works for your writing program, please share it with our community in the Comments for the sub-project.

 


Where to Learn More

  1. The University of Nebraska-Lincoln Writing Center has a good short guide to constructing a rubric for GRADING writing. Their framework gives instructors a starting point for asking, "What parts of writing do I see errors in most often? Which errors most limit my students' development as writers?" The answers can point to the topics or skills where students would benefit most from diagnostic activities.

  2. NSF's User-Friendly Handbook for Mixed Method Evaluations (1997) is a bit dated, but remains a good starting point for learning basic methods and strategies.

  3. The BioTAP Network has an extensive Guide to Research Methods page that covers quantitative, qualitative, and mixed methods, along with examples of how they are implemented.
     
