Posts Tagged ‘Learning by Design’

Measuring the effects of an intervention using computational process analytics

September 15th, 2013 by Charles Xie
"At its core, scientific inquiry is the same in all fields. Scientific research, whether in education, physics, anthropology, molecular biology, or economics, is a continual process of rigorous reasoning supported by a dynamic interplay among methods, theories, and findings. It builds understanding in the form of models or theories that can be tested."  —— Scientific Research in Education, National Research Council, 2002
[Figure: Actions caused by the intervention]
Computational process analytics (CPA) is a research method we are developing in the spirit of the above quote from the National Research Council report. It is a class of data mining methods for quantitatively studying the learning dynamics of complex scientific inquiry or engineering design projects that are carried out digitally. CPA views performance assessment as detecting signals against the noisy background often present in large learner datasets, noise that stems from the many uncontrollable and unpredictable factors in classrooms. It borrows many computational techniques from engineering fields such as signal processing and pattern recognition. Some of these analytics can be considered the computational counterparts of traditional assessment methods based on student articulation, classroom observation, or video analysis.
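To illustrate the signal-processing analogy in concrete terms, here is a minimal, hypothetical sketch (not the actual CPA implementation, which is not detailed in this post) that smooths a noisy per-minute count of logged actions with a moving average so that a change around an intervention stands out from the classroom noise:

```python
# Hypothetical illustration of the signal-processing analogy: smooth a noisy
# per-minute action count from a learner log so that a change around an
# intervention stands out. This is not the actual CPA implementation.
import random

random.seed(42)

# Simulated log: actions per minute for 60 minutes; an intervention at minute 30
# raises the underlying rate from about 2 to about 5 actions per minute.
raw_counts = [max(0.0, random.gauss(2 if t < 30 else 5, 1.5)) for t in range(60)]

def moving_average(series, window=5):
    """Smooth a 1-D series with a simple centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        smoothed.append(sum(series[lo:hi]) / (hi - lo))
    return smoothed

smoothed = moving_average(raw_counts)
before = sum(smoothed[:30]) / 30
after = sum(smoothed[30:]) / 30
print(f"mean rate before intervention: {before:.2f} actions/min")
print(f"mean rate after intervention:  {after:.2f} actions/min")
```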

[Figure: Actions unaffected by the intervention]
Computational process analytics has wide applications in educational assessment. High-quality assessments of deep learning hold a critical key to improving learning and teaching. Their strategic importance was highlighted in President Obama’s remarks in March 2009: “I am calling on our nation’s Governors and state education chiefs to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking, entrepreneurship, and creativity.” However, the kinds of assessments the President called for often require careful human scoring that is far more expensive to administer than multiple-choice tests. Computer-based assessments, which rely on the learning software to automatically collect and sift learner data through unobtrusive logging, are viewed as a promising solution for assessing increasingly prevalent digital learning.

While there has been a great deal of work on computer-based assessment for STEM education, one foundational question has rarely been explored: How sensitive can the logged learner data be to instruction?

[Figure: Actions caused by the intervention]
According to the assessment guru Popham, there are two main categories of evidence for determining the instructional sensitivity of an assessment tool: judgmental evidence and empirical evidence. Computer logs provide empirical evidence: the logs themselves supply data for assessment, and the differences between logs recorded before and after instruction supply data for evaluating instructional sensitivity. Like any other assessment tool, computer logs must be instructionally sensitive if they are to provide reliable data sources for gauging student learning under an intervention.


[Figure: Actions unaffected by the intervention]
Earlier studies have used CAD logs to capture designers' operational knowledge and reasoning processes. Those studies were not designed to understand the learning dynamics occurring within a CAD system and, therefore, did not need to assess students' acquisition and application of knowledge and skills through CAD activities. Unlike them, we are studying the instructional sensitivity of CAD logs, which describes how students react to interventions through their CAD actions. Although interventions can be either carried out by humans (such as teacher instruction or group discussion) or generated by the computer (such as adaptive feedback or intelligent tutoring), we have focused on human interventions in this phase of our research. Studying the instructional sensitivity to human interventions will inform the development of effective computer-generated interventions for teaching engineering design in the future (which is another reason, besides cost effectiveness, why research on automatic assessment using learning software logs is so promising).

The study of instructional effects on design behavior and performance is particularly important when viewed from the perspective of teaching science through engineering design, a practice now mandated by the newly established Next Generation Science Standards of the United States. A problem commonly observed in K-12 engineering projects, however, is that students often reduce engineering design challenges to construction or craft activities that may not truly involve the application of science. This suggests that other driving forces acting on learners, such as hunches and desires for how the design artifacts should look, may overwhelm the effects of instruction on how to use science in design work. Hence, research on the sensitivity of design behavior to science instruction requires careful analyses using innovative data analytics such as CPA to detect the changes, however slight they might be. The insights obtained from studying this instructional sensitivity may yield actionable knowledge for developing effective instruction that can reproduce or amplify those changes.

[Figure: Distribution of intervention effect across 65 students]

Our preliminary CPA results have shown that CAD logs created with our Energy3D CAD tool are instructionally sensitive. The first four figures embedded in this post show two pairs of opposite cases, with one type of action sensitive to an instruction that occurred outside the CAD tool and the other not. This is because the instruction was related to one type of action and had nothing to do with the other. The last figure shows the distribution of instructional sensitivity across 65 students. In this figure, larger numbers mean higher instructional sensitivity, and a number close to one means that the instruction had no effect. The graph shows that the three types of actions that are not related to the instruction fluctuate around one, whereas the fourth type of action is strongly sensitive to the instruction.
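To make this metric concrete, here is a minimal sketch of how such a per-student sensitivity ratio might be computed from timestamped logs: the rate of a given action type after the intervention divided by its rate before, so that values near one indicate no effect and larger values indicate sensitivity. The record format, field names, and numbers below are hypothetical, not the actual Energy3D log schema.

```python
# Hypothetical sketch: compute an instructional-sensitivity ratio per student
# from timestamped log records. Field names and the log format are assumptions,
# not the actual Energy3D log schema.
from collections import defaultdict

# Each record: (student_id, minutes_into_session, action_type)
log = [
    ("s01", 5, "add_wall"), ("s01", 12, "add_window"),
    ("s01", 35, "add_window"), ("s01", 40, "add_window"),
    ("s02", 8, "add_window"), ("s02", 33, "add_wall"),
]

INTERVENTION_TIME = 30.0   # minutes into the session when the instruction was given
SESSION_LENGTH = 60.0      # total session length in minutes

def sensitivity_ratio(records, action_type):
    """Rate of action_type after the intervention divided by its rate before.
    A ratio near 1 suggests no effect; a large ratio suggests sensitivity."""
    before = sum(1 for _, t, a in records if a == action_type and t < INTERVENTION_TIME)
    after = sum(1 for _, t, a in records if a == action_type and t >= INTERVENTION_TIME)
    rate_before = before / INTERVENTION_TIME
    rate_after = after / (SESSION_LENGTH - INTERVENTION_TIME)
    return rate_after / rate_before if rate_before > 0 else float("inf")

by_student = defaultdict(list)
for record in log:
    by_student[record[0]].append(record)

for student, records in by_student.items():
    print(student, round(sensitivity_ratio(records, "add_window"), 2))
```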

These results demonstrate that software logs can not only record what students do with the software but also capture the effects of what happens outside the software.

Energy3D Version 2.0 released

May 9th, 2013 by Charles Xie
We are proud to release Energy3D version 2.0, available for download from our website. Energy3D is a computer-aided design and fabrication tool for making small model green buildings. This version adds new energy assessment features that allow students to evaluate the energy performance of their designs and investigate the effect of passive solar heating. Currently, however, the energy assessment tool is limited to 12 selected cities around the world.

The next release will feature powerful passive solar heating simulation that can be applied to a wide variety of settings ranging from a single family house to a dense urban area.

Energy3D runs on both Windows and Mac OS X; Java 7 is required. It may also run on Linux (some of our users have gotten it to run there), but it has not been thoroughly tested on that platform.

Significant gender differences found (confirmed?) in CAD research

March 13th, 2013 by Charles Xie
[Figure: A student design]
In a pilot study conducted in December 2012, high school students in an engineering class used our Energy3D CAD tool to carry out an urban solar design project -- they had to treat the sun path across the four seasons and the existing buildings in the neighborhood as design constraints, optimizing solar penetration for the new buildings while minimizing the obstruction of sunlight to the existing ones.

Energy3D can log every student action and intermediate step, which provides extremely detailed information about student design processes. With such a high-resolution lens, we can closely characterize student patterns and analyze how students solve the design challenge. For example, the CAD log allows us to reconstruct the entire design process of each student and show it in an unprecedentedly fine-grained timeline graph. A timeline graph may show how students went through different iterative steps while shaping their designs. For instance, did they consider the interactions among the buildings they designed? Did they go back to revise a previously erected building that might be affected by a newly added one? The timeline data we have collected show that the students' designs became more iterative as they moved on to explore and design alternatives following their initial attempts (perhaps encouraged by growing familiarity with and confidence in the CAD tool).
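As a rough sketch of how such a timeline could be reconstructed (assuming, hypothetically, that each log entry records a timestamp, the object acted upon, and the kind of edit), one can order a student's actions in time and flag returns to previously built objects as iterative revisions:

```python
# Hypothetical sketch: reconstruct a design timeline from a CAD log and flag
# revisits to previously built objects as iterative revisions. The record
# format is an assumption, not the actual Energy3D log schema.

# Each entry: (minutes_into_session, object_id, action)
log = [
    (2, "building_A", "create"),
    (6, "building_A", "edit"),
    (10, "building_B", "create"),
    (18, "building_A", "edit"),   # returning to an earlier building: iteration
    (25, "building_C", "create"),
    (31, "building_B", "edit"),   # another revisit
]

seen = set()
timeline = []
for t, obj, action in sorted(log):
    # A non-create action on an object other than the most recently touched one
    # means the student went back to revise an earlier part of the design.
    last_obj = timeline[-1][1] if timeline else None
    revisit = action != "create" and obj in seen and obj != last_obj
    seen.add(obj)
    timeline.append((t, obj, action, revisit))

for t, obj, action, revisit in timeline:
    marker = "  <- revisits an earlier design" if revisit else ""
    print(f"{t:3d} min  {action:<6s} {obj}{marker}")
```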

[Figure: A design timeline]
Our analyses also suggest a significant gender difference in both design products and processes. The main differences are: 1) the boys tended to push the limits of the software and produce unconventional designs that looked "cool" but did not necessarily meet the design specifications; and 2) the girls spent more time carefully revising their designs than building new structures. While these findings may not surprise seasoned educators, the significance is that this may be the first time this kind of gender difference has been revealed or confirmed by empirical data from CAD logs. Using CAD logs may provide a fairer basis for assessing student performance, one grounded in the entire learning process rather than only final products or self-reports.

[Figure: Summary of the results]
The implication of this study is that, if we can identify patterns in student design learning and understand their cognitive meanings, we could devise a software system that provides real-time feedback to help students learn. For example, could the software prompt students to consider the design criteria more when it detects that students are ignoring them? Could the software stimulate students to think outside the box when it detects that students are under-exploring the design space?
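Purely as a hypothetical illustration of what such real-time feedback might look like (this is not a feature of Energy3D described here), a simple rule-based detector could watch the recent action stream and prompt the student when no criteria-related analysis action has occurred for a while. The action names and threshold below are invented for illustration.

```python
# Hypothetical sketch of a rule-based real-time feedback trigger. Action names
# and the window size are invented for illustration only.
from collections import deque

CRITERIA_ACTIONS = {"run_solar_analysis", "check_energy_use"}  # assumed names
WINDOW = 15  # look back over the last 15 logged actions

recent = deque(maxlen=WINDOW)

def on_action(action_type):
    """Call for every logged action; return a prompt when the student has
    been building for a while without checking the design criteria."""
    recent.append(action_type)
    if len(recent) == WINDOW and not CRITERIA_ACTIONS & set(recent):
        return "Have you checked how your design performs against the criteria?"
    return None

# Example: a long run of construction actions with no analysis triggers a prompt.
for i, a in enumerate(["add_wall"] * 20):
    msg = on_action(a)
    if msg:
        print(f"after action {i + 1}: {msg}")
        break
```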

For more information about this research project, visit: http://energy.concord.org/research.html.

Constructive chemistry funded by the National Science Foundation

January 17th, 2013 by Charles Xie
One of the most effective pedagogies in science education is to challenge students to design and construct something that performs a function, solves a problem, or proves a hypothesis. Learning by design is a compelling way of engaging students in learning science deeply. Given the extensive incorporation and emphasis of engineering design across disciplines in the Next Generation Science Standards, design-based learning will only grow more important in US science education.

The problem, however, is that many science concepts are related to things that are too small, too big, too complex, too expensive, or too dangerous to be built realistically in the classroom. (If you are a LEGO fan, you may argue that LEGO can be used to build anything, but most LEGO models simulate appearance rather than function -- a LEGO bike probably cannot roll, and LEGO molecules probably do not assemble themselves. To scientists and engineers, function is all that matters.)

[Figure: Three approaches of using science models]
A good solution is to have students design computer models that work in cyberspace. This virtualization allows students to take on any design challenge without regard to the expense, hazard, and scale of the challenge. If the computer modeling environment is supported by computational science derived from fundamental laws, it will have the predictive power that permits anyone to design and test any model that falls within the range governed by the laws. Software systems that provide user interfaces for designing, constructing, testing, and evaluating solutions iteratively can potentially become powerful learning systems as they create an abundance of opportunities to motivate students to learn and apply the pertinent science concepts actively. This is the vision of "Constructive Science" that I had dreamed about almost four years ago. This constructive approach opens up a much larger learning space that can result in deeper and broader learning--beyond simply observing and interacting with existing science simulations that were created to assist teaching and learning.

This dream got a shot in the arm today from a small grant awarded by the National Science Foundation. This TUES Type-1 grant will support a collaboration with Bowling Green State University and Dakota County Technical College to pilot test the idea of "Constructive Chemistry" at the college level. Chemistry is an especially appropriate test bed for the Constructive Science approach, as it is all about atoms and molecules that are far too small to make any design-based learning option other than computational modeling viable. Decades of research in computational chemistry have developed the computational power needed to get the science right. We believe that using these computational methods should yield chemistry simulations that are sufficiently authentic for teaching and learning.