"At its core, scientific inquiry is the same in all fields. Scientific research, whether in education, physics, anthropology, molecular biology, or economics, is a continual process of rigorous reasoning supported by a dynamic interplay among methods, theories, and findings. It builds understanding in the form of models or theories that can be tested." — Scientific Research in Education, National Research Council, 2002
[Figure: Actions caused by the intervention.]
Computational process analytics (CPA) is a research method that we are developing in the spirit of the above quote from the National Research Council report. It comprises a class of data mining methods for quantitatively studying the learning dynamics in complex scientific inquiry or engineering design projects that are digitally implemented. CPA views performance assessment as detecting signals from the noisy background often present in large learner datasets due to many uncontrollable and unpredictable factors in classrooms. It borrows many computational techniques from engineering fields such as signal processing and pattern recognition. Some of these analytics can be considered the computational counterparts of traditional assessment methods based on student articulation, classroom observation, or video analysis.
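To make the signal-processing analogy concrete, here is a minimal sketch of how logged actions might be turned into a time series and smoothed. The log format, action names, and bin sizes are all hypothetical illustrations, not the actual Energy3D log schema:

```python
# A sketch of the signal-detection view of log mining.
# Assumes a hypothetical log: a list of (timestamp_in_minutes, action_type)
# tuples; the real Energy3D log format differs.

def action_rate_series(log, action_type, bin_minutes=5, total_minutes=60):
    """Count occurrences of one action type in fixed time bins."""
    bins = [0] * (total_minutes // bin_minutes)
    for t, a in log:
        if a == action_type and 0 <= t < total_minutes:
            bins[int(t // bin_minutes)] += 1
    return bins

def moving_average(series, window=3):
    """Simple smoothing filter to suppress classroom 'noise'."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Illustrative log data:
log = [(2, "edit_wall"), (3, "edit_wall"), (12, "rotate_view"),
       (14, "edit_wall"), (31, "add_solar_panel"), (33, "add_solar_panel")]
raw = action_rate_series(log, "edit_wall")
smooth = moving_average(raw)
```

Once each action type is represented as a (smoothed) rate series, standard techniques such as filtering or change detection can be applied to look for intervention effects.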
[Figure: Actions unaffected by the intervention.]
Computational process analytics has wide applications in educational assessment. High-quality assessments of deep learning hold a critical key to improving learning and teaching. Their strategic importance was highlighted in President Obama's remarks in March 2009: "I am calling on our nation's Governors and state education chiefs to develop standards and assessments that don't simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking, entrepreneurship, and creativity." However, the kinds of assessments the President called for often require careful human scoring that is far more expensive to administer than multiple-choice tests. Computer-based assessments, which rely on learning software to automatically collect and sift learner data through unobtrusive logging, are viewed as a promising solution for assessing increasingly prevalent digital learning.
While there has been a lot of work on computer-based assessments for STEM education, one foundational question has rarely been explored: How sensitive can the logged learner data be to instruction?
[Figure: Actions caused by the intervention.]
According to the assessment guru Popham, there are two main categories of evidence for determining the instructional sensitivity of an assessment tool: judgmental evidence and empirical evidence. Computer logs provide the latter: the logs themselves supply empirical data for assessment, and their differentials before and after instruction provide empirical evidence for evaluating instructional sensitivity. Like any other assessment tool, computer logs must be instructionally sensitive if they are to provide reliable data sources for gauging student learning under intervention.
[Figure: Actions unaffected by the intervention.]
Earlier studies have used CAD logs to capture designers' operational knowledge and reasoning processes. Those studies were not designed to understand the learning dynamics occurring within a CAD system and, therefore, did not need to assess students' acquisition and application of knowledge and skills through CAD activities. In contrast, we are studying the instructional sensitivity of CAD logs, which describes how students react to interventions through their CAD actions. Although interventions can be carried out either by humans (such as teacher instruction or group discussion) or by the computer (such as adaptive feedback or intelligent tutoring), we have focused on human interventions in this phase of our research. Studying instructional sensitivity to human interventions will inform the development of effective computer-generated interventions for teaching engineering design in the future (which is another reason, besides cost effectiveness, why research on automatic assessment using learning software logs is so promising).
The study of instructional effects on design behavior and performance is particularly important when viewed from the perspective of teaching science through engineering design, a practice now mandated by the newly established Next Generation Science Standards of the United States. A problem commonly observed in K-12 engineering projects, however, is that students often reduce engineering design challenges to construction or craft activities that may not truly involve the application of science. This suggests that other driving forces acting on learners, such as hunches and desires about how the design artifacts should look, may overwhelm the effects of instruction on how to use science in design work. Hence, research on the sensitivity of design behavior to science instruction requires careful analyses using innovative data analytics such as CPA to detect the changes, however slight they might be. The insights gained from studying this instructional sensitivity may yield actionable knowledge for developing effective instructions that can reproduce or amplify those changes.

[Figure: Distribution of intervention effect across 65 students.]
Our preliminary CPA results have shown that CAD logs created with our Energy3D CAD tool are instructionally sensitive. The first four figures embedded in this post show two pairs of opposite cases: one type of action was sensitive to an instruction that occurred outside the CAD tool, and the other was not, because the instruction was related to the first type of action and had nothing to do with the second. The last figure shows the distribution of instructional sensitivity across 65 students. In this figure, a larger number means higher instructional sensitivity, and a number close to one means that the instruction had no effect. From the graph, you can see that the three types of actions that are unrelated to the instruction fluctuate around one, whereas the fourth type of action is strongly sensitive to the instruction.
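A sensitivity index of this kind can be sketched as a ratio of post- to pre-intervention action rates. The function, action names, and numbers below are illustrative assumptions, not the actual metric or data from our study:

```python
# A hypothetical before/after rate ratio as an instructional-sensitivity
# index: ~1 means the instruction had no effect on this action type,
# while values far from 1 indicate sensitivity.

def sensitivity_ratio(count_before, count_after, minutes_before, minutes_after):
    """Ratio of post- to pre-intervention action rates for one action type."""
    rate_before = count_before / minutes_before
    rate_after = count_after / minutes_after
    return rate_after / rate_before if rate_before > 0 else float("inf")

# One student's counts for four action types (illustrative numbers),
# in equal 30-minute windows before and after the intervention:
before = {"rotate_view": 40, "edit_wall": 22, "undo": 10, "add_solar_panel": 3}
after  = {"rotate_view": 38, "edit_wall": 24, "undo": 9,  "add_solar_panel": 21}

ratios = {a: sensitivity_ratio(before[a], after[a], 30, 30) for a in before}
# Actions unrelated to the instruction hover near 1; the targeted
# action ("add_solar_panel" here) stands out with a large ratio.
```

Computing such a ratio per action type and per student yields a distribution like the one in the last figure, where unrelated actions cluster around one.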
These results demonstrate that software logs can not only record what students do with the software but also capture the effects of what happens outside it.