Student & Program Assessment

The Challenge

One of the main problems we hoped to address with the STEM Writing Project was how to assess student writing in multi-section introductory BIO100 courses enrolling dozens or hundreds of students. Standard assessment strategies used by college writing programs are unsustainable when classes are this large (see Refs. 1-6) and when building writing skills is just one of several learning goals. How, then, can STEM instructors in large introductory courses assess:

  • Impact of their writing training program on individual students?
  • Progress of individual students over time?
  • Progress of a cohort of dozens or hundreds of novice writers?
  • Fidelity of program implementation by multiple instructors?


Our Approach

To reduce the scale of the problem, we adopted a data science-oriented approach. First, we built a data archive of >4,000 biology lab reports written by undergraduates who were trained using the SEM protocol, plus relevant metadata. This archive has been our main dataset for evaluating new analysis methods and for testing program-level assessments.
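
As a rough illustration of what an archive-plus-metadata dataset looks like in practice, the Python sketch below loads report texts and joins them to a metadata table. The directory layout, file names, and metadata columns are hypothetical stand-ins, not the project's actual schema.

    import pandas as pd
    from pathlib import Path

    # Hypothetical layout: one plain-text file per report under reports/,
    # plus a metadata table keyed by report ID (section, semester, draft, ...).
    ARCHIVE_DIR = Path("lab_report_archive")      # assumed directory name
    METADATA_CSV = ARCHIVE_DIR / "metadata.csv"   # assumed metadata file

    def load_archive() -> pd.DataFrame:
        """Join report texts with their metadata into one analysis-ready table."""
        meta = pd.read_csv(METADATA_CSV)          # assumed to include a report_id column
        texts = {
            path.stem: path.read_text(encoding="utf-8")
            for path in (ARCHIVE_DIR / "reports").glob("*.txt")
        }
        meta["text"] = meta["report_id"].astype(str).map(texts)
        return meta.dropna(subset=["text"])

    if __name__ == "__main__":
        reports = load_archive()
        print(f"Loaded {len(reports)} reports with metadata")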

To assess fidelity of implementation, we use a mixed-methods strategy that combines several complementary protocols. Use the links to learn more about each of them.


Lessons Learned & Looking Ahead

We are pursuing several computational methods for assessing student writing in larger courses.

  • Proxy metrics that describe writing complexity based on sentence-level features. We have already identified several proxy metrics that we can use to track student development as writers over time in lieu of "close reading." (Manuscript in review; learn more here.) A minimal sketch of this kind of metric appears after this list.
  • Automated move analysis to characterize paragraph-level text structure.
  • Supervised and unsupervised feature identification and text classification. Our long-term goal is to offer automated no-stakes feedback on basic elements of students' draft work so they can make revisions prior to submission for grading.
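
To make the proxy-metric idea concrete, here is a minimal Python sketch that computes a few sentence-level complexity measures for a single report. The specific measures shown (mean sentence length, mean word length, type-token ratio) are illustrative examples only; they are not necessarily the metrics identified in the manuscript under review.

    import re
    from statistics import mean

    def sentence_level_metrics(text: str) -> dict:
        """Compute simple sentence-level complexity proxies for one report."""
        # Naive sentence split on ., !, or ? followed by whitespace.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
        words = re.findall(r"[a-z']+", text.lower())
        if not sentences or not words:
            return {}
        return {
            "mean_sentence_length": mean(
                len(re.findall(r"[a-z']+", s.lower())) for s in sentences
            ),
            "mean_word_length": mean(len(w) for w in words),
            "type_token_ratio": len(set(words)) / len(words),
        }

    sample = "The enzyme was heated to 60 C. Activity decreased sharply as temperature rose."
    print(sentence_level_metrics(sample))

Because measures like these can be computed automatically for every report in the archive, they scale to cohorts of hundreds of writers and can be tracked across drafts and semesters.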

Another project is exploring whether automated text classification can identify the subject and structure of graduate teaching assistant (GTA) comments on student reports. Automated comment tagging would eliminate manual comment scoring and allow us to look at all of a GTA's comments on student work, not just a sample. (Learn more here.)
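
As a sketch of how comment tagging might work, the example below treats it as supervised text classification over a small hand-labeled sample, using a TF-IDF plus logistic-regression baseline from scikit-learn. The comments and tag labels ("praise", "content", "mechanics") are invented placeholders, not the project's actual data or tagging scheme.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hand-labeled training sample (invented comments and placeholder tags).
    labeled_comments = [
        ("Nice job explaining your controls.", "praise"),
        ("Good use of units throughout the results.", "praise"),
        ("Your hypothesis is not testable as written.", "content"),
        ("The discussion never addresses your second prediction.", "content"),
        ("Watch subject-verb agreement in this paragraph.", "mechanics"),
        ("Several run-on sentences make this section hard to follow.", "mechanics"),
    ]
    texts, tags = zip(*labeled_comments)

    # TF-IDF features + logistic regression: a lightweight, common baseline.
    tagger = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    tagger.fit(texts, tags)

    print(tagger.predict(["Your results section never states the sample size."]))

Scaling this up would require a much larger labeled sample and held-out evaluation, but the same pattern would let every comment a GTA writes be tagged, rather than a hand-scored subset.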

Check the list of To Do items for the Assessment project and let us know if you would like to take on one or more of them.



Where to Learn More

  1. Bredtmann, J., Crede, C. J., & Otten, S. 2013. Methods for evaluating educational programs: does Writing Center participation affect student achievement? Evaluation and Program Planning, 36(1):115–123. https://doi.org/10.1016/j.evalprogplan.2012.09.003

  2. Fulwiler, T. 1988. Evaluating Writing Across the Curriculum Programs. In: S. H. McLeod (ed.), Strengthening Programs for Writing Across the Curriculum: New Directions for Teaching and Learning, no. 36. San Francisco: Jossey-Bass, pp. 61-75.

  3. Gleason, B. 2000. Evaluating Writing Programs in Real Time: The Politics of Remediation. College Composition and Communication, 51:560. https://doi.org/10.2307/358912

  4. Martella, R. C., & Waldron-Soler, K. M. 2005. Language for Writing Program Evaluation. Journal of Direct Instruction, 5(1):81-96.

  5. McLeod, S. H. 1992. Evaluating Writing Programs: Paradigms, Problems, Possibilities. Journal of Advanced Composition, 12(2):373-382. http://www.jstor.org/stable/20865864

  6. Wolcott, W. 1996. Evaluating a Basic Writing Program. Journal of Basic Writing, 15(1):57-69. https://doi.org/10.37514/JBW-J.1996.15.1.05

