Conclusion
In previous studies using the claim-evidence-reasoning (CER) framework, both McNeill and Krajcik (2006) and Novak (2009) observed that explicitly breaking scientific explanation down into these three components and explicitly scaffolding students can, over time, improve students' ability to construct scientific explanations. Because scientific literacy is "the ability to analyze, interpret, and communicate scientific ideas" (Holliday et al., 1994), developing students' understanding of scientific explanation and their ability to construct scientific explanations directly builds scientific literacy.
Prior to conducting my inquiry project, I had read in the literature that students struggle the most with the reasoning component of scientific explanation (Moje, 2004, p. 233). From my observations of the McDonald lab discussion and students' papers about GMOs, I found my students struggling not only with reasoning but also with the appropriate use of evidence to support claims. My assessment of students after the introduction of the CER framework, using the base explanation rubric from McNeill and Krajcik's study, further supported these observations. Out of 55 students, I found an average score of 2.9 for claim, 1.7 for evidence, and 1.4 for reasoning (with a highest possible score of 3 for each component).
Evidence:
After several different activities centered on explanation and argumentation, I observed a slight increase in students' ability to use "appropriate and sufficient evidence to support a claim" (McNeill & Krajcik, 2006, p. 28). In the zebrafish discussion I found an average evidence score of 2.5, and in the human evolution paper an average of 2.3 out of 3. Both are higher than the average evidence score of 1.7 seen immediately after the introduction of the CER framework. I observed this improvement most notably during the DNA debate, where students continually referenced evidence from the articles and from their background knowledge to support their side of the argument.
Reasoning:
In terms of reasoning, I also observed a substantial increase, both in scores on the base explanation rubric and in my own observations. In the GMO pre-assessment activity, out of 26 students, I found an average score of 2.5 for claim, 2.1 for evidence, and 1.4 for reasoning using the base explanation rubric. After the introduction of the CER framework with the CRISPR activity, the reasoning component remained low, with an average of 1.4 out of 3. In the evolution paper post-assessment, I found an average of 2.2 out of 3 for reasoning, the first time the average reasoning score rose above 2.0. I observed that students improved at tying their evidence, and only the evidence they had presented, back to their claim. Reasoning improved at a slower pace than evidence, possibly because it is the most challenging component for students and, as a higher-order skill, requires more time to develop than the other components. Using the same base explanation rubric, McNeill and Krajcik (2006) found that "students' evidence and reasoning scores both begin and end lower compared to their claim scores," though both evidence and reasoning increased over time (p. 14). Similarly, I found that students were strongest in the claim category and that their ability to use evidence and reasoning improved over time according to both the base explanation rubric and my own observations.