Tag Archives: Data Mining

Artificial intelligence research for engineering design

Have you ever thought about what a pity it is when a senior engineer with 40 years of problem-solving experience retires? Have you ever thought about what a loss it is when a senior teacher with 40 years of teaching experience retires? Imagine what we could do for humanity if we could somehow preserve their experience, expertise, and intelligence automatically before these incredible treasures are taken to the grave...

Heat map visualizations of different patterns of design task transition
Funded by the National Science Foundation, I have been working on the research and development of artificial intelligence (AI) for engineering design for a number of years and have been developing the Visual Process Analytics for visualizing and analyzing engineering design process data. This exciting intersection among AI (basically everything about how intelligence can be realized), engineering (basically a generative and creative discipline), and cognitive science (basically everything about how humans acquire intelligence) is full of tremendous challenges, but it also creates unprecedented opportunities that constantly entice and enlighten me.

I have recently written a short article to explain my research to lay people (mostly educators, but the implications are not limited to education). Check it out at http://energy.concord.org/~xie/papers/aired.pdf

What’s new in Visual Process Analytics Version 0.3

Visual Process Analytics (VPA) is a data mining platform that supports research on how students learn through using complex tools to solve complex problems. The complexity of such learning activities entails complex process data (e.g., event logs) that cannot be easily analyzed. This difficulty calls for data visualization that can at least give researchers a glimpse of the data before they conduct in-depth analyses. To this end, the VPA platform provides many different types of visualization that represent many different aspects of complex processes. These graphic representations should help researchers develop intuition about the data. We believe VPA is an essential tool for data-intensive research, which will only grow more important as data mining, machine learning, and artificial intelligence play critical roles in effective, personalized education.

Several new features were added to Version 0.3, described as follows:

1) Interactions are provided through context menus. Context menus can be invoked by right-clicking on a visualization. Depending on where the user clicks, a context menu provides the available actions applicable to the selected objects. This allows a complex tool such as VPA to still have a simple, pleasant user interface.

2) Result collectors allow users to gather analysis results and export them in CSV format. VPA is a data browser that allows users to navigate the ocean of data from the repositories it connects to. Each step of navigation invokes some calculations behind the scenes. To collect the results of these calculations in a mining session, VPA now has a simple result collector that automatically keeps track of the user's work. A more sophisticated result manager is also being conceptualized and developed to let users manage their data mining results more flexibly. If needed, these results can be exported for further analysis in other software tools.
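To make the idea of a result collector concrete, here is a minimal sketch in Python (the class, column names, and values are hypothetical illustrations, not VPA's actual implementation, which is written in JavaScript): it accumulates result rows during a mining session and exports them as CSV.

```python
import csv
import io

class ResultCollector:
    """Accumulates analysis results as rows and exports them as CSV."""
    def __init__(self, columns):
        self.columns = columns
        self.rows = []

    def record(self, **result):
        # Keep only the known columns; missing values become empty strings.
        self.rows.append([result.get(c, "") for c in self.columns])

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(self.columns)
        writer.writerows(self.rows)
        return buf.getvalue()

# Hypothetical usage: collect a per-student metric, then export.
collector = ResultCollector(["student", "metric", "value"])
collector.record(student="s01", metric="construction_count", value=42)
collector.record(student="s02", metric="construction_count", value=17)
print(collector.to_csv())
```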

3) Cumulative data graphs are available to render a more dramatic view of time series. It is sometimes easier to spot patterns and trends in cumulative graphs. This cumulative analysis applies to all levels of granularity of data supported by VPA (currently, the three granular levels are Top, Medium, and Fine, corresponding to three different ways to categorize action data). VPA also provides a way for users to select variables from a list to be highlighted in cumulative graphs.
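The effect of cumulation is easy to demonstrate in a few lines of Python (the per-minute counts below are made up for illustration): a spiky count sequence becomes a monotonic curve whose slope changes reveal bursts and lulls at a glance.

```python
from itertools import accumulate

# Per-minute counts of a hypothetical action category (say, "Analysis"
# at the Top granularity level) -- illustrative numbers only.
counts = [0, 2, 1, 0, 0, 3, 1, 0, 2, 2]

# The cumulative series: each point is the running total so far.
cumulative = list(accumulate(counts))
print(cumulative)  # -> [0, 2, 3, 3, 3, 6, 7, 7, 9, 11]
```

Flat stretches of the cumulative curve mark periods of inactivity; steep stretches mark bursts of the action.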

Many other new features were also added in this version. For example, additional information about classes and students is provided to contextualize each data set. In the coming weeks, the repository will incorporate data from more than 1,200 students in Indiana who have undertaken engineering design projects using our Energy3D software. This unprecedented, large-scale database will potentially provide a goldmine of research data in the area of engineering design.

For more information about VPA, see my AERA 2016 presentation.

Time series analysis tools in Visual Process Analytics: Cross correlation

Two time series and their cross-correlation functions
In a previous post, I showed what the autocorrelation function (ACF) is and how it can be used to detect temporal patterns in student data. The ACF is the correlation of a signal with itself. Naturally, we are also interested in exploring the correlations among different signals.

The cross-correlation function (CCF) is a measure of the similarity of two time series as a function of the lag of one relative to the other. The CCF can be imagined as overlaying two series printed on transparency films and sliding one across the other to find possible correlations. For this reason, it is also known as a "sliding dot product."
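For readers who want to experiment, here is a small NumPy sketch of the sliding dot product (a generic illustration, not VPA's implementation): two noisy series, one lagging the other by three steps, produce a CCF that peaks at lag 3.

```python
import numpy as np

def ccf(x, y, max_lag):
    """Sample cross-correlation of x and y for lags -max_lag..max_lag.
    A peak at positive lag k suggests x leads y by k steps."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    full = np.correlate(y, x, mode="full")  # the "sliding dot product"
    mid = len(full) // 2                    # index of zero lag
    return full[mid - max_lag: mid + max_lag + 1]

rng = np.random.default_rng(0)
x = rng.normal(size=500)                        # e.g., construction activity
y = np.roll(x, 3) + 0.1 * rng.normal(size=500)  # analysis activity, 3 steps behind
lags = np.arange(-10, 11)
r = ccf(x, y, max_lag=10)
print(lags[np.argmax(r)])  # -> 3: x leads y by three steps
```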

The upper graph in the figure to the right shows two time series from a student's engineering design process, representing about 45 minutes of her construction (white line) and analysis (green line) activities while trying to design an energy-efficient house with the goal to cut down the net energy consumption to zero. At first glance, you probably have no clue about what these lines represent and how they may be related.

But their CCFs reveal something that stands out. The lower graph shows two curves that peak at certain points. You probably have a lot of questions at this point; let me try to answer some of them below.

Why are there two curves for depicting the correlation of two time series, say, A and B? This is because there is a difference between "A relative to B" and "B relative to A." Imagine that you print the series on two transparency films and slide one on top of the other. Which one is on top matters. If you are looking for cause-effect relationships using the CCF, you can treat the antecedent time series as the possible cause and the subsequent one as the possible effect (bearing in mind that correlation alone does not establish causation).

What does a peak in the CCF mean, anyway? It points you to where more interesting things may lie. In the figure of this post, the construction activities of this particular student were significantly followed by analysis activities about four times (two of them within 10 minutes), whereas the analysis activities were significantly followed by construction activities only once (after 10 minutes).

Time series analysis tools in Visual Process Analytics: Autocorrelation

Autocorrelation reveals a three-minute periodicity
Digital learning tools such as computer games and CAD software emit a lot of temporal data about what students do while they are engaged with them. Analyzing these data may shed light on whether students learned, what they learned, and how they learned. In many cases, however, these data look so messy that many people are skeptical about their meaning. As optimists, we believe that learning signals are likely buried in these noisy data. We just need to use or invent some mathematical tricks to dig them out.

In Version 0.2 of our Visual Process Analytics (VPA), I added a few techniques for time series analysis so that researchers can find ways to characterize a learning process from different perspectives. Before I show you these visual analysis tools, be aware that their purpose is to reveal the temporal trends of a given process so that we can better describe the behavior of the student at that time. Whether these traits are "good" or "bad" for learning likely depends on the context, which often necessitates the analysis of other covariates.

Correlograms reveal similarity of two time series.
The first tool for time series analysis added to VPA is the autocorrelation function (ACF), a mathematical tool for finding repeating patterns obscured by noise in the data. The shape of the ACF graph, called the correlogram, is often more revealing than the shape of the raw time series graph. In the extreme case when the process is completely random (i.e., white noise), the ACF is essentially a delta function: it peaks at zero lag and vanishes everywhere else. In the extreme case when the process is purely sinusoidal, the ACF is an oscillatory cosine wave of the same period (in a sample correlogram, its envelope tapers at large lags simply because fewer data points overlap there).

An interesting question relevant to learning science is whether the process is autoregressive (or under what conditions the process can be autoregressive). Being autoregressive means that the current value of a variable is influenced by its previous values. This could be used to evaluate whether the student learned from past experience -- in the case of engineering design, whether the student's design action was informed by previous actions. Learning becomes more predictable if the process is autoregressive (just to be careful, note that I am not saying that more predictable learning is necessarily better learning). Different autoregression models, denoted as AR(n) with n indicating the memory length, may be characterized by their ACFs. For example, the ACF of AR(2) typically decays more slowly than that of AR(1), because AR(2) depends on more previous points. (In practice, the partial autocorrelation function, or PACF, is often used to detect the order of an AR model.)
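A quick numerical illustration (generic Python, not part of VPA) makes the contrast concrete: the sample ACF of white noise dies out immediately after lag 0, whereas the sample ACF of an AR(1) process with coefficient 0.8 decays roughly geometrically, like 0.8 to the power of the lag.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation for lags 0..max_lag."""
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
n = 2000

noise = rng.normal(size=n)   # white noise: ACF ~ delta function at lag 0

ar1 = np.zeros(n)            # AR(1): x[t] = 0.8 * x[t-1] + e[t]
for t in range(1, n):
    ar1[t] = 0.8 * ar1[t - 1] + rng.normal()

print(np.round(acf(noise, 3), 2))  # near [1.0, 0.0, 0.0, 0.0]
print(np.round(acf(ar1, 3), 2))    # roughly [1.0, 0.8, 0.64, 0.51]
```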

The two figures in this post show the ACF in action within VPA, revealing temporal periodicity and similarity in students' action data that would otherwise remain obscure. The upper graphs of the figures plot the original time series for comparison.

Visual Process Analytics (VPA) launched

Visual Process Analytics (VPA) is an online analytical processing (OLAP) program that we are developing for visualizing and analyzing student learning from complex, fine-grained process data collected by interactive learning software such as computer-aided design tools. We envision a future in which every classroom is powered by informatics and infographics such as VPA to support day-to-day learning and teaching at a highly responsive level. In a future when every business person relies on visual analytics to stay in business, it would be a shame if teachers still had to read through piles of paper-based student work to make instructional decisions. The research we are conducting with the support of the National Science Foundation is paving the road to a future in which our educational systems enjoy support equivalent to business analytics and intelligence.

This is the mission of VPA. Today we are announcing the launch of this cyberinfrastructure. We decided that its first version number should be 0.1, to indicate that the research and development of this software system will continue as a very long-term effort and that what we have done so far is a small step towards a very ambitious goal.

VPA is written in plain JavaScript/HTML/CSS. It should run in most browsers -- best on Chrome and Firefox -- but it looks and works like a typical desktop app. This means that while you are in the middle of mining the data, you can save what we call "the perspective" as a file on your disk (or in the cloud) so that you can keep track of what you have done. Later, you can load the perspective back into VPA. Each perspective opens the datasets that you have worked on, with your latest settings and results. So if you are halfway through your data mining, your work can be saved for further analyses.
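Conceptually, a perspective is just a serializable description of a session. Here is a minimal sketch in Python (the field names below are illustrative assumptions, not VPA's actual file format): save the session state to a file and restore it later.

```python
import json

# Hypothetical "perspective" structure -- the real VPA file format is not
# documented here, so all field names below are illustrative assumptions.
perspective = {
    "datasets": ["energy3d/class-a/session-01"],
    "settings": {"granularity": "Top", "cumulative": True},
    "results": [{"metric": "construction_count", "value": 42}],
}

def save_perspective(path, p):
    with open(path, "w") as f:
        json.dump(p, f, indent=2)

def load_perspective(path):
    with open(path) as f:
        return json.load(f)

save_perspective("perspective.json", perspective)
restored = load_perspective("perspective.json")
print(restored == perspective)  # -> True
```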

So far Version 0.1 has seven analysis and visualization tools, each of which shows a unique aspect of the learning process with a unique type of interactive visualization. We admit that, compared with the dauntingly high dimensionality of complex learning, this is a tiny collection. But we will be adding more and more tools as we go. At this point, only one repository -- our own Energy3D process data -- is connected to VPA, but we expect to add more repositories in the future. Meanwhile, more computational tools will be added to support in-depth analyses of the data. This will require a tremendous effort in designing a smart user interface to support the various computational tasks that researchers may be interested in defining.

Eventually, we hope that VPA will grow into a versatile platform of data analytics for cutting-edge educational research. As such, VPA represents a critically important step towards marrying learning science with data science and computational science.

Seeing student learning with visual analytics

Technology allows us to record almost everything happening in the classroom. The fact that students' interactions with learning environments can be logged in every detail raises the interesting question of whether there is any significant meaning and value in those data and how we can make use of them to help students and teachers, as pointed out in a report sponsored by the U.S. Department of Education:
“New technologies thus bring the potential of transforming education from a data-poor to a data-rich enterprise. Yet while an abundance of data is an advantage, it is not a solution. Data do not interpret themselves and are often confusing — but data can provide evidence for making sound decisions when thoughtfully analyzed.” — Expanding Evidence Approaches for Learning in a Digital World, Office of Educational Technology, U.S. Department of Education, 2013
A radar chart of design space exploration.
A histogram of action intensity.
Here we are not talking about just analyzing students' answers to some multiple-choice questions, their scores in quizzes and tests, or how often they log into a learning management system. We are talking about something much more fundamental, something that runs deep in cognition and learning, such as how students conduct a scientific experiment, solve a problem, or design a product. As learning goes deeper in those directions, the data produced by students grow bigger. It is by no means an easy task to analyze large volumes of learner data, which contain many noisy elements that cast uncertainty on assessment. The validity of an assessment inference rests on the strength of evidence, and evidence construction often relies on the search for relations, patterns, and trends in student data. With a lot of data, this mandates sophisticated computation similar to cognitive computing.

Data gathered from highly open-ended inquiry and design activities, key to authentic science and engineering practices that we want students to learn, are often intensive and “messy.” Without analytic tools that can discern systematic learning from random walk, what is provided to researchers and teachers is nothing but a DRIP (“data rich, information poor”) problem.

A scatter plot of action timeline.
Recognizing the difficulty of analyzing the sheer volume of messy student data, we turned to visual analytics, a category of techniques extensively used in cutting-edge business intelligence systems such as software developed by SAS, IBM, and others. We see interactive, visual process analytics as key to accelerating the analysis procedures so that researchers can adjust mining rules easily, view results rapidly, and identify patterns clearly. This kind of visual analytics optimally combines the computational power of the computer, the graphical user interface of the software, and the pattern recognition power of the brain to support complex data analyses in data-intensive educational research.

A digraph of action transition.
So far, I have written four interactive graphs and charts that can be used to study four different aspects of the design action data we collected from our Energy3D CAD software. Recording several weeks of student work on complex engineering design challenges, these datasets are high-dimensional, meaning that it is improper to treat them from a single point of view. For each question we want the student data to answer, we usually need a different representation to capture the salient features specific to that question. In many cases, multiple representations are needed to address a single question.

In the long run, our objective is to add as many graphic representations as possible as we move along in answering more and more research questions based on our datasets. Given time, this growing library of visual analytics could become powerful enough that teachers may also use it to monitor their students' work and thereby conduct formative assessment. To guarantee that our visual analytics runs on all devices, this library is written in JavaScript/HTML/CSS. A number of touch gestures are also supported so that the library can be used on a multi-touch screen. A neat feature of this library is that multiple graphs and charts can be grouped together so that when you interact with one of them, the linked ones change at the same time. As the datasets are temporal in nature, you can also animate these graphs to reconstruct and track exactly what students did throughout.

The National Science Foundation funds SmartCAD—an intelligent learning system for engineering design

We are pleased to announce that the National Science Foundation has awarded the Concord Consortium, Purdue University, and the University of Virginia a $3 million, four-year collaborative project to conduct research and development on SmartCAD, an intelligent learning system that informs students' engineering design with automatic feedback generated through computational analysis of their work.

Engineering design is one of the most complex learning processes because it builds on top of multiple layers of inquiry, involves creating products that meet multiple criteria and constraints, and requires the orchestration of mathematical thinking, scientific reasoning, systems thinking, and sometimes, computational thinking. Teaching and learning engineering design has become important as it is now officially part of the Next Generation Science Standards in the United States. These new standards require every student to learn and practice engineering design in every science subject at every level of K-12 education.
Figure 1

In typical engineering projects, students are challenged to construct an artifact that performs specified functions under constraints. What makes engineering design different from other design practices such as art design is that engineering design must be guided by scientific principles and the end products must operate predictably based on science. A common problem observed in students' engineering design activities is that their design work is insufficiently informed by science, resulting in the reduction of engineering design to drawing or crafting. To circumvent this problem, engineering design curricula often encourage students to learn or review the related science concepts and practices before they try to put the design elements together to construct a product. After students create a prototype, they then test and evaluate it using the governing scientific principles, which, in turn, gives them a chance to deepen their understanding of the scientific principles. This common approach of learning is illustrated in the upper image of Figure 1.

There is a problem with this common approach, however. Exploring the form-function relationship is a critical inquiry step toward understanding the underlying science. To determine whether a change of form can result in a desired function, students have to build and test a physical prototype or rely on the opinions of an instructor. This creates a delay in getting feedback at the most critical stage of the learning process, slowing down the iterative cycle of design and cutting short the exploration of the design space. As a result of this delay, experimenting with and evaluating "micro ideas" -- very small stepwise ideas, such as investigating one design parameter at a time -- through building, revising, and testing physical prototypes becomes impractical in many cases. From the perspective of learning, however, it is often at this level of granularity that foundational science and engineering design ultimately meet.

Figure 2
All these problems can be addressed by supporting engineering design with a computer-aided design (CAD) platform that embeds powerful science simulations to provide formative feedback to students in a timely manner. Simulations based on solving fundamental equations in science such as Newton’s Laws model the real world accurately and connect many science concepts coherently. Such simulations can computationally generate objective feedback about a design, allowing students to rapidly test a design idea on a scientific basis. Such simulations also allow the connections between design elements and science concepts to be explicitly established through fine-grained feedback, supporting students to make informed design decisions for each design element one at a time, as illustrated by the lower image of Figure 1. These scientific simulations give the CAD software tremendous disciplinary intelligence and instructional power, transforming it into a SmartCAD system that is capable of guiding student design towards a more scientific end.

Despite these advantages, very little developmentally appropriate CAD software is available to K-12 students -- most CAD software used in industry not only is a science "black box" to students, but also requires a cumbersome tool chain of pre-processors, solvers, and post-processors, making it extremely challenging to use in secondary education. The SmartCAD project will fill this gap with key educational features centered on guiding student design with feedback composed from simulations. For example, science simulations can be used to analyze student design artifacts and compute their distances to specific goals, detecting whether students are zeroing in on those goals or going astray. The development of these features will also draw upon decades of research on formative assessment of complex learning.
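The idea of distance-to-goal feedback can be sketched in a few lines of Python (the energy numbers and function names are hypothetical stand-ins for a real simulation engine inside the CAD tool): each design revision is scored by how far a simulated property is from the target, and a shrinking distance indicates the student is zeroing in on the goal.

```python
# A sketch of goal-distance feedback: score a design by how far a simulated
# property (here, hypothetical annual net energy in kWh) is from the target.
# net_energy() is a stand-in for a physics simulation, not a real one.

def net_energy(design):
    return design["consumption"] - design["solar_generation"]

def distance_to_goal(design, target=0.0):
    return abs(net_energy(design) - target)

# Three successive (made-up) revisions of a zero-energy house design.
revisions = [
    {"consumption": 9000, "solar_generation": 2000},
    {"consumption": 8000, "solar_generation": 5000},
    {"consumption": 7500, "solar_generation": 7000},
]

distances = [distance_to_goal(d) for d in revisions]
print(distances)                     # -> [7000, 3000, 500]
print(distances[-1] < distances[0])  # shrinking distance: zeroing in
```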

On the instructional sensitivity of computer-aided design logs

Figure 1: Hypothetical student responses to an intervention.
In its fourth issue this year, the International Journal of Engineering Education published our 19-page paper on the instructional sensitivity of computer-aided design (CAD) logs. This study was based on our Energy3D software, which supports students in learning science and engineering concepts and skills by creating sustainable buildings with a variety of built-in design and analysis tools related to Earth science, heat transfer, and solar energy. The paper proposes an innovative approach of using response functions -- a concept borrowed from electrical engineering -- to measure instructional sensitivity from data logs (Figure 1).

Many researchers are interested in studying what students learn through complex engineering design projects. CAD logs provide fine-grained empirical data of student activities for assessing learning in engineering design projects. However, the instructional sensitivity of CAD logs, which describes how students respond to interventions with CAD actions, has never been examined, to the best of our knowledge.
Figure 2. An indicator of statistical reliability.

For the logs to be used as reliable data sources for assessments, they must be instructionally sensitive. Our paper reports the results of our systematic research on this important topic. To guide the research, we first propose a theoretical framework for computer-based assessments based on signal processing. This framework views assessments as detecting signals from the noisy background often present in large temporal learner datasets due to many uncontrollable factors and events in learning processes. To measure instructional sensitivity, we analyzed nearly 900 megabytes of process data logged by Energy3D as collections of time series. These time-varying data were gathered from 65 high school students who solved a solar urban design challenge using Energy3D over seven class periods, with an intervention occurring in the middle of their design projects.

Our analyses of these data show that the occurrence of design actions unrelated to the intervention was not affected by it, whereas the occurrence of the design actions that the intervention targeted reveals a continuum of reactions ranging from no response to strong response (Figure 2). From the temporal patterns of these student responses, persistent effects and temporary effects (with different decay rates) were identified. Students' electronic notes taken during the design process were used to validate their learning trajectories. These results show that an intervention occurring outside a CAD tool can leave a detectable trace in the CAD logs, suggesting that the logs can be used to quantitatively determine how effective an intervention has been for each individual student during an engineering design project.
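A toy illustration of this kind of response analysis (synthetic data, not our actual analysis pipeline): compare the rate of a targeted action before and after an intervention whose effect decays over time. A clearly positive difference indicates a response; for an unrelated action, the difference would hover near zero.

```python
import numpy as np

# Synthetic per-minute counts of a design action targeted by an
# intervention: a low Poisson baseline, then a boosted rate that
# decays exponentially (a "temporary effect" with a decay rate).
rng = np.random.default_rng(2)
baseline = rng.poisson(0.5, size=30)                               # before
response = rng.poisson(0.5 + 3.0 * np.exp(-np.arange(30) / 10.0))  # after

def response_strength(before, after):
    """Crude response indicator: mean rate after minus mean rate before."""
    return after.mean() - before.mean()

print(round(response_strength(baseline, response), 2))  # positive -> responded
```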

Design replay: Reconstruction of students’ engineering design processes from Energy3D logs

One of the useful features of our Energy3D software is the ability to record the entire design process of a student behind the scenes. We call the reconstruction of a design process from fine-grained process data design replay.

Design replay is not a screencast technology. The main difference is that it records a sequence of CAD models, not a video format such as MP4. The sequence is played back in the original CAD tool that generated it, not in a video player. As such, every snapshot model is fully functional and editable. For instance, a viewer can pause the replay and click on the user interface of the CAD tool to obtain or visualize more information, if necessary. In this sense, design replay can provide far richer information than a screencast (which records only as much information as the pixels of the recording screen permit).

Design replay provides a convenient way for researchers and teachers to quickly look into students' design work. It compresses hours of student work into minutes of replay without losing any information important for analysis. Furthermore, the reconstructed design sequence can be post-processed in many ways to extract additional information that may shed light on student learning, since we can use any model in the recorded sequence to calculate any of its properties.
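The core idea -- recording editable model snapshots rather than pixels -- can be sketched in Python (all names are hypothetical; Energy3D's actual implementation differs): store a deep copy of the full model after every edit, then compute any property of any snapshot during post-processing.

```python
import copy

class DesignRecorder:
    """Minimal design-replay sketch: snapshots of the model, not video frames."""
    def __init__(self):
        self.snapshots = []

    def record(self, model):
        # Deep-copy so later edits do not alter earlier snapshots;
        # every snapshot remains a fully inspectable model.
        self.snapshots.append(copy.deepcopy(model))

    def replay(self):
        yield from self.snapshots

# A hypothetical design session: three edits, three snapshots.
model = {"walls": 4, "windows": 0, "solar_panels": 0}
recorder = DesignRecorder()
recorder.record(model)
model["windows"] = 6
recorder.record(model)
model["solar_panels"] = 10
recorder.record(model)

# Post-process the sequence: compute a property of every snapshot.
print([m["solar_panels"] for m in recorder.replay()])  # -> [0, 0, 10]
```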

The three videos embedded in this post show the design replays of three students' work from a classroom study that we just completed yesterday in a Massachusetts high school. Sixty-seven students spent approximately two weeks designing zero-energy houses -- a zero-energy house is a highly energy-efficient house that consumes net zero (or even negative) energy over a year thanks to its use of passive and active solar technologies to conserve and generate energy. These videos may give you a clue about how these three students solved the design challenge.

Learning analytics is the "crystallography" for educational research

To celebrate 100 years of the dazzling history of crystallography, UNESCO declared 2014 the International Year of Crystallography. To date, 29 Nobel Prizes have been awarded for scientific achievements related to crystallography. On March 7th, Science magazine honored crystallographers with a special issue.

Why is crystallography such a big deal? Because it enables scientists to "see" atoms and molecules and discover the molecular structures of substances. One of the most famous examples is Rosalind Franklin's 1952 X-ray diffraction image of DNA, which paved the way for Watson, Crick, and Wilkins' double helix model. Enough ink has been spilled on the importance of this discovery.

Science fundamentally relies on techniques such as crystallography to detect and visualize invisible things. Educational research needs such techniques, too, to decode students' minds, which are opaque to researchers. Up to this point, educational researchers have depended on methods such as pre/post-tests, observations, and interviews. But these traditional methods are either insufficient or inefficient for measuring learning in complex processes such as scientific inquiry and engineering design. To achieve a level of truly "no child left behind," we will need research techniques that can monitor every student for every minute in the classroom.

Such a technique has to be based on an integrated informatics system that can engage students with meaningful learning tasks, tease out what is in their minds, and capture every bit of information that may be indicative of learning. This involves development in all areas of the learning sciences, including technology, curriculum, pedagogy, and assessment. Eventually, what we will have is a comprehensive set of data through which we can sift to find patterns of learning or evaluate the effectiveness of an intervention.

The whole process is not unlike crystallography. In the end, it is the learning analytics that concludes the research. Today we are seeing a lot of learner data, but we probably have no idea what they actually mean. We can either declare that there is no significance in those data and shrug them off, or we can try to figure out the right kind of data analytics to decipher them. Which attitude we choose probably depends on which universe we live in. But the history of crystallography gives us a clue. It was Max von Laue who produced the first X-ray diffraction pattern in 1912, but he couldn't interpret it. It wasn't until William Henry Bragg and William Lawrence Bragg's groundbreaking work later that year that scientists became able to infer molecular structures from such patterns. In educational research, the equivalent is learning analytics -- the critical piece that will give the data meaning.

For more information, read my new article "Visualizing Student Learning."