Posts Tagged ‘Big data’

The first paper on learning analytics for assessing engineering design?

January 30th, 2014 by Charles Xie
[Figure 1: iterative cycles in a design process]
The International Journal of Engineering Education has published our paper ("A Time Series Analysis Method for Assessing Engineering Design Processes Using a CAD Tool") on learning analytics and educational data mining for assessing student performance in complex engineering design projects. I believe this is the first time learning analytics has been applied to the study of engineering design -- an extremely complicated process that is very difficult to assess with traditional methodologies because of its open-ended and practical nature.

[Figure 2: non-iterative cycles in a design process]
This paper proposes a novel computational approach based on time series analysis to assess engineering design processes using our Energy3D CAD tool. To collect research data without disrupting the design learning process, the CAD tool continuously logs design actions and artifacts as time series behind the scenes while students work on an engineering design project such as a solar urban design challenge. These "atomically" fine-grained data can be used to reconstruct, visualize, and analyze a student's entire design process at extremely high resolution. Results of a pilot study in a high school engineering class suggest that these data can be used to measure the level of student engagement, reveal gender differences in design behaviors, and distinguish iterative (Figure 1) from non-iterative (Figure 2) cycles in a design process.
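For readers who want a concrete picture of what "logging design actions as time series" might look like, here is a minimal sketch in Python. It is an illustration under my own assumptions, not the actual Energy3D logger or its log format; the record fields and action names are hypothetical.

```python
# Hypothetical illustration only; the real Energy3D logging code and format differ.
import json
import time

def log_action(log_file, student_id, action, params):
    """Append one fine-grained design action to a JSON-lines log."""
    record = {
        "t": time.time(),        # timestamp of the action
        "student": student_id,   # anonymized student ID
        "action": action,        # e.g., "AddWall" or "RunSolarAnalysis" (made-up names)
        "params": params,        # action-specific parameters
    }
    log_file.write(json.dumps(record) + "\n")

def load_time_series(path):
    """Reconstruct a student's design process as a time-ordered list of actions."""
    with open(path) as f:
        actions = [json.loads(line) for line in f]
    return sorted(actions, key=lambda a: a["t"])
```

Because each record carries its own timestamp, the full sequence can be replayed later at any resolution without having interrupted the student while designing.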

From the perspective of engineering education, this paper contributes to the emerging fields of educational data mining and learning analytics, which aim to expand evidence-based approaches to studying learning in a digital world. We are working on a series of papers to advance this research direction and expect to help with the "landscaping" of those fields.

Computational process analytics: Compute-intensive educational research and assessment

October 5th, 2013 by Charles Xie
[Figure: Trajectories of building movement (good)]
Computational process analytics (CPA) differs from traditional research and assessment methods in that it is not only data-intensive but also compute-intensive. A unique feature of CPA is that it automatically analyzes the performance of student artifacts (including all the intermediate products) using the same set of science-based computational engines that students used to solve problems. The computational engines take into account every single detail in the artifacts, and the complex interactions among those details, that are highly relevant to the nature of the problems students solved. They also recreate the scenarios and contexts of student learning (e.g., the results calculated in such a post-processing analysis are exactly the same as those presented as feedback to students while they were solving the problems). As such, the computational engines provide holistic, high-fidelity assessments of students' work that no human evaluator can ever beat -- while no one can track, within a short evaluation time, the numerous variables students might have created in long and deep learning processes, a computer program can easily do the job. Utilizing disciplinarily intelligent computational engines for performance assessment is a major breakthrough in CPA, as this approach has the potential to revolutionize computer-based assessment.
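As a rough illustration of that idea (a sketch of my own, not the actual CPA implementation), the post-processing loop amounts to feeding every intermediate artifact back through the same engine that produced the students' feedback. The toy "engine" and artifact dictionaries below are stand-ins for a full CAD model and a science-based simulation.

```python
# A sketch of the CPA post-processing idea; the engine here is a toy stand-in
# for the science-based solar simulation that students actually used.
def assess_design_process(artifacts, engine):
    """Score every intermediate artifact with the same engine students used,
    returning a performance trajectory in chronological order."""
    return [engine(artifact) for artifact in artifacts]

def toy_engine(artifact):
    """Toy stand-in for a simulation engine (made-up formula, illustration only)."""
    return artifact["panel_area"] * artifact["south_facing"]

# Hypothetical artifacts: real ones would be full CAD models, not dictionaries.
toy_artifacts = [
    {"panel_area": 4.0, "south_facing": 0.5},
    {"panel_area": 6.0, "south_facing": 0.8},
    {"panel_area": 6.0, "south_facing": 0.9},
]
trajectory = assess_design_process(toy_artifacts, toy_engine)
print(trajectory)   # [2.0, 4.8, 5.4] -- an improving trend in this toy case
```

The key design choice is that the scoring function is the very engine students interacted with, so the post-hoc scores match the feedback students saw while designing.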

[Figure: No building movement (bad)]
To give an example, this weekend I am busy running analysis jobs on my computer to process 1 GB of data logged by our Energy3D CAD software. I am trying to reconstruct and visualize the learning and design trajectories of all the students, projected onto many different axes and planes of the state space. I estimate this will take 30-40 hours of CPU time on my Lenovo X230 tablet, which is a pretty fast machine. Each step loads a sequence of artifacts, runs a solar simulation for each artifact, and analyzes the results (since I have automated the entire process, this is actually not as bad as it sounds). Our assumption is that the time evolution of the performance of these artifacts approximately reflects the time evolution of the performance of their designers. We should be able to tell how well a student was learning by examining whether the performance of her artifacts shows a systematic trend of improvement or is just random. This is far better than performance assessment based only on students' final products.
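To make the "systematic trend versus random" question concrete, here is one minimal way it could be tested (my own illustration, not the analysis used in the paper): fit a least-squares slope to the artifact-performance trajectory and compare it with the slopes of shuffled orderings.

```python
# A hedged sketch of trend detection: is the performance trajectory improving
# systematically, or could the same slope arise from a random ordering?
import random

def slope(values):
    """Least-squares slope of values against their index (time order)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def improvement_p_value(trajectory, trials=1000):
    """Fraction of shuffled trajectories whose slope is at least as steep."""
    observed = slope(trajectory)
    shuffled = list(trajectory)
    hits = 0
    for _ in range(trials):
        random.shuffle(shuffled)
        if slope(shuffled) >= observed:
            hits += 1
    return hits / trials   # a small value suggests the upward trend is not chance
```

A low p-value under this permutation test would indicate a systematic improvement; a trajectory that is essentially noise would yield a large one.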

After all the intermediate performance data have been retrieved by post-processing the artifacts, we can analyze them using our Process Analyzer -- a visual mining tool being developed to present the analysis results in various visualizations (it is our hope that the Process Analyzer will eventually become a powerful assessment assistant to teachers, as it would free them from having to deal with an enormous amount of raw data or complicated data mining algorithms). For example, the two images in this post show that one student went through a lot of optimization in her design while the other did not (there is no trajectory in the second image).
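The trajectory images above are essentially 2D projections of a high-dimensional design state space. Below is an illustrative sketch (not the Process Analyzer itself) of how such a projection might be drawn; the snapshot fields "building_volume" and "annual_solar_gain" are hypothetical axis choices.

```python
# An illustrative sketch of projecting a design trajectory onto two axes of the
# state space and drawing the time-ordered path. Field names are assumptions.
import matplotlib.pyplot as plt

def plot_trajectory(states, x_key, y_key):
    """Draw the time-ordered path of a design through a 2D projection."""
    xs = [s[x_key] for s in states]
    ys = [s[y_key] for s in states]
    plt.plot(xs, ys, marker="o")   # the line shows movement; dots are snapshots
    plt.xlabel(x_key)
    plt.ylabel(y_key)
    plt.title("Design trajectory (2D projection)")
    plt.show()

# Hypothetical snapshots: each dict summarizes one intermediate artifact.
snapshots = [
    {"building_volume": 120, "annual_solar_gain": 2.0},
    {"building_volume": 150, "annual_solar_gain": 4.8},
    {"building_volume": 150, "annual_solar_gain": 5.4},
]
plot_trajectory(snapshots, "building_volume", "annual_solar_gain")
```

A design that was heavily revised traces a long path in such a projection, while a design that barely changed collapses to a point, which is exactly the contrast the two images above show.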

National Science Foundation funds research that puts engineering design processes under a big data "microscope"

September 20th, 2013 by Charles Xie
The National Science Foundation has awarded us $1.5 million to advance big data research on engineering design. In collaboration with Professors Şenay Purzer and Robin Adams at Purdue University, we will conduct a large-scale study involving over 3,000 students in Indiana and Massachusetts in the next five years.

This research will be based on our Energy3D CAD software, which can automatically collect a large amount of process data behind the scenes while students are working on their designs. Fine-grained CAD logs possess all four characteristics of big data as defined by IBM (see the sketch after this list):
  1. High volume: Students can generate a large amount of process data in a complex open-ended engineering design project that involves many building blocks and variables; 
  2. High velocity: The data can be collected, processed, and visualized in real time to provide students and teachers with rapid feedback; 
  3. High variety: The data encompass any type of information provided by a rich CAD system such as all learner actions, events, components, properties, parameters, simulation data, and analysis results; 
  4. High veracity: The data must be accurate and comprehensive to ensure fair and trustworthy assessments of student performance.
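To give a rough, code-level sense of the volume, velocity, and variety points above, the sketch below consumes a stream of heterogeneous log events and maintains a simple per-student rollup that could be refreshed in real time. This is illustrative only: the event fields, the action name, and the counters are my assumptions, not the actual Energy3D log schema or assessment metrics.

```python
# Illustrative only: event fields and the simple counters are assumptions,
# not the real Energy3D log schema or the project's assessment measures.
from collections import defaultdict

def rollup(events):
    """Aggregate a heterogeneous event stream into per-student summaries."""
    summary = defaultdict(lambda: {"actions": 0, "analyses": 0, "last_t": None})
    for e in events:                          # events could arrive in real time
        s = summary[e["student"]]
        s["actions"] += 1
        if e["action"] == "RunSolarAnalysis":  # made-up action name
            s["analyses"] += 1
        s["last_t"] = e["t"]
    return dict(summary)
```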
These big data provide a powerful "microscope" that can reveal direct, measurable evidence of learning with extremely high resolution and at a statistically significant scale. Automation will make this research approach highly cost-effective and scalable. Automatic process analytics will also pave the way for building adaptive and predictive software systems for teaching and learning engineering design. Such systems, if successful, could become useful assistants to K-12 science teachers.

Why is big data needed in educational research and assessment? Because we all want students to learn more deeply, and deep learning generates big data.

In the context of K-12 science education, engineering design is a complex cognitive process in which students learn and apply science concepts to solve open-ended problems with constraints to meet specified criteria. The complexity, open-endedness, and length of an engineering design process often create a large quantity of learner data that makes learning difficult to discern using traditional assessment methods. Engineering design assessment thus requires big data analytics that can track and analyze student learning trajectories over a significant period of time.

This differs from research that does not require sophisticated computation to understand the data. For example, in typical pre/post-tests using multiple-choice assessment, the selection data of individual students are directly used as performance indices -- there is basically no depth in these self-evident data. I call this kind of data usage "data picking" -- analyzing such data is like picking up apples that have already fallen to the ground (as opposed to data mining, which requires some computational effort).
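To put the contrast in code (a toy example of my own, not taken from the paper): scoring a multiple-choice test is a direct lookup against an answer key, whereas a process measure has to be computed from the logged sequence. The "exploration breadth" measure below is a hypothetical example of the latter.

```python
# "Data picking": the selected answers are themselves the performance index.
def score_multiple_choice(responses, answer_key):
    return sum(r == k for r, k in zip(responses, answer_key))

# "Data mining" (toy version): a measure that must be computed from process
# data, e.g., how many distinct design actions a student actually explored.
def exploration_breadth(logged_actions):
    return len({a["action"] for a in logged_actions})
```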

Process data, on the other hand, contain a lot of details that may be opaque to researchers at first glance. In their raw form, they often appear stochastic. But any seasoned teacher can tell you that they are able to judge learning by carefully watching how students solve problems. So here is the challenge: How can computer-based assessment accomplish what experienced teachers (human intelligence plus disciplinary knowledge plus some patience) can do based on observation data? This is the thesis of computational process analytics, an emerging subject that we are spearheading to transform educational research and assessment through computation. Thanks to NSF, we are now able to advance this subject.