
The National Science Foundation awards grant to study virtual worlds that afford knowledge integration

The Concord Consortium is proud to announce a new project funded by the National Science Foundation, “Towards virtual worlds that afford knowledge integration across project challenges and disciplines.” Principal Investigator Janet Kolodner and Co-PI Amy Pallant will explore how the design of project challenges and the contexts in which they are carried out can support knowledge integration, sustained engagement, and excitement. The goal is to learn how to foster knowledge integration across disciplines when learners encounter and revisit phenomena and processes across several challenges.

Aerial Geography and Air Quality: In this model, students explore the effect of wind direction and geography on air quality as they place up to four smokestacks in the model.

We envision an educational system where learners regularly engage in project-based education within and across disciplines, and in and out of school. We believe that, with such an educational approach, making connections across learning experiences should be possible in new and unexplored ways. If challenges are framed appropriately and their associated figured worlds (real and virtual) and scaffolding are designed to afford it, such education can help learners integrate the content and practices they are learning across projects and across disciplines. “Towards virtual worlds” will help move us towards this vision.

This one-year exploratory project focuses on the possibilities for knowledge integration when middle schoolers who have completed water ecosystem challenges later attempt an air quality challenge. Some students will engage with EcoMUVE, where learners try to understand why the fish in a pond are dying, and others will engage with Living Together from Project-Based Inquiry Science (PBIS), where learners advise on the regulations that should be put in place before a new industry is allowed to move into a town. A subset of these students will then encounter specially crafted air quality challenges based on High-Adventure Science activities and models. These, we hope, will evoke memories of their water ecosystem work. We will examine what learners are reminded of, how rich those memories are, and how appealing learners find it to apply what they are learning about air quality to better address the earlier water ecosystem challenge. Research will be carried out in Boston area schools.

Pollution Control Devices: In this side-view model, students explore the effects of installing pollution control devices, such as scrubbers and catalytic converters, on power plants and cars. Students monitor the levels of primary pollutants (brown line) and secondary pollutants (orange line) over time via the graph.

The project will investigate:

  1. What conditions give rise to intense and sustained emotional engagement?
  2. What is remembered by learners when they have (enthusiastically) engaged with a challenge in a virtual figured world and reflected on it in ways appropriate to learning, and what seems to affect what is remembered?
  3. How does a challenge and/or virtual world need to be configured so that learners notice—while not being overwhelmed by—phenomena not central to the challenge but still important to making connections with content outside the challenge content?

Our exploration will help us understand which elements of learners' experiences lead to different emotional responses, and how those responses affect what learners remember and what they want to do next.

Lessons we learn about the conditions under which learners form rich memories and want to go back and improve their earlier solutions will inform how to design virtual worlds and project challenges with affordances for supporting knowledge integration across projects and disciplines. Exemplar virtual worlds and associated project challenges will inform design principles for a new virtual world genre, one whose characteristics anticipate cross-project and cross-discipline knowledge integration and ready learners for future connection making and knowledge deepening.

Simulating the Hadley Cell using Energy2D

Download the models
Although it is mostly used as an engineering tool, our Energy2D software can also be used to create simple Earth science simulations. This blog post shows some interesting results about the Hadley Cell.

The Hadley Cell is an atmospheric circulation that transports energy and moisture from the equator to higher latitudes in the northern and southern hemispheres. This circulation is intimately related to the trade winds, hurricanes, and the jet streams.

As a simple way to simulate zones of ocean that have different temperatures due to differences in solar heating, I added an array of constant-temperature objects at the bottom of the simulation window. The temperature gradually decreases from 30 °C in the middle to 15 °C at the edges. A rectangle, set to a constant temperature of -20 °C, mimics the high, chilly part of the atmosphere. The viscosity of air is deliberately set much higher than in reality to suppress wild fluctuations and produce a smoothed, averaged flow. The results show a stable flow pattern that looks like a cross section of the Hadley Cell, as shown in the first image of this post.
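If you want to play with the idea of the graded bottom boundary outside Energy2D, here is a tiny Python sketch of one way to generate such a temperature profile. The cosine shape and the patch count are my assumptions for illustration, not necessarily what the actual model uses.

```python
import numpy as np

# Hypothetical sketch: temperatures for an array of constant-temperature
# patches, falling smoothly from 30 C at the center ("equator") to
# 15 C at the edges, as in the setup described above.
patches = 9                                   # assumed patch count
x = np.linspace(-1.0, 1.0, patches)           # -1 and 1 are the edges
temps = 15.0 + 15.0 * np.cos(x * np.pi / 2)   # 30 C center, 15 C edges
print(np.round(temps, 1))
```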

When I increased the buoyant force of the air, an oscillatory pattern emerged. The system swings between the two states shown in the second and third images, indicating a periodic reinforcement of hot rising air from the adjacent areas toward the center (which is supposed to represent the equator).

Of course, I can't guarantee that the results produced by Energy2D reflect what happens in nature. Geophysical modeling is an extremely complicated business, with numerous factors not considered in this simple model. Yet Energy2D shows something interesting: the fluctuations of wind speed suggest that, even without considering seasonal changes, this nonlinear model already exhibits some kind of periodicity. And it is the many kinds of periodicity in Mother Nature that help sustain life on Earth.

Simulating geometric thermal bridges using Energy2D

Fig. 1: IR image of a wall junction (inside) by Stefan Mayer
One of the mysterious things that makes people scratch their heads when they see an infrared picture of a room is that junctions such as the edges and corners formed by two exterior walls (or floors and roofs) often appear colder in the winter than other parts of the walls, as shown in Figure 1. This is, I hear you saying, caused by an air gap between the two walls. But it is not that simple! While a leaking gap can certainly produce the effect, it is there even without a gap. Better insulation only makes the junctions less cold.

Fig. 2: An Energy2D simulation of thermal bridge corners.
A typical explanation of this phenomenon is that, because the exterior surface of a junction (where heat is lost to the outside) is greater than its interior surface (where heat is gained from the inside), the junction ends up losing thermal energy in the winter more quickly than a straight section of wall, causing it to be colder. The temperature difference is immediately revealed by a very sensitive IR camera. Such a junction is commonly called a geometric thermal bridge, which is different from a material thermal bridge, caused by the presence of a more conductive piece in a building assembly, such as a steel stud in a wall or the concrete floor of a balcony.

Fig. 3: IR image of a wall junction (outside) by Stefan Mayer
But the actual heat transfer process is more complicated and confusing. While a wall junction does create a difference between the interior and exterior surface areas, it also forms a thicker region through which the heat must flow (thicker because the path across the corner is diagonal). The increased thickness should impede the heat flow, right?

Fig. 4: An Energy2D simulation of an L-shaped wall.
Unclear about the outcome of these competing factors, I made some Energy2D simulations to see if they could help me. Figure 2 shows the first one, which uses a block kept at 20 °C to mimic a warm room, a surrounding environment at 0 °C, and a four-sided wall in between. Temperature sensors are placed at the corners, as well as at the middle point of a wall. The results show that, at steady state, the corners are indeed colder than other parts of the walls. (Note that this simulation only involves heat diffusion, but adding radiative heat transfer should yield similar results.)
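If you want to check the geometric effect without Energy2D, here is a minimal Python sketch of the same idea (my own toy model, not the Energy2D code): it relaxes steady-state heat diffusion around a square "room" by Jacobi iteration, with the room held at 20 °C and the far boundary at 0 °C. The grid size and the geometry indices are assumptions.

```python
import numpy as np

n = 101
T = np.zeros((n, n))                    # ambient fixed at 0 C
room = (slice(35, 66), slice(35, 66))   # interior held at 20 C
T[room] = 20.0

for _ in range(20000):                  # Jacobi relaxation to steady state
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                            T[1:-1, 2:] + T[1:-1, :-2])
    T[room] = 20.0                      # re-impose the fixed temperatures
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0

# Sample the layer just outside the warm block: the corner cell sits
# diagonally off the block, while the mid-wall cell touches it directly.
print("mid-wall:", round(T[34, 50], 2))
print("corner:  ", round(T[34, 34], 2))   # noticeably colder
```

The corner cell comes out colder than the mid-wall cell, consistent with the surface-area argument above.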

What about more complex shapes like an L-shaped wall that has both convex and concave junctions? Figure 3 shows the IR image of such a wall junction, taken from the outside of a house. In this image, interestingly enough, the convex edge appears to be colder, but the concave edge appears to be warmer!

The Energy2D simulation (Figure 4) shows a pattern similar to the IR image (Figure 3): the temperature sensor placed near the concave edge outside the L-shaped room does register a higher temperature than the other sensors.

Now, the interesting question is, does the room lose more energy through a concave junction or a convex one? If we look at the IR image of the interior taken inside the house (Figure 1), we would probably say that the convex junction loses more energy. But if we look at the IR image of the exterior taken outside the house (Figure 3), we would probably say that the concave junction loses more energy.

Which statement is correct? I will leave that to you. You can download the Energy2D simulations from this link, play with them, and see if they help you figure out the answer. The download also includes the reverse cases, in which heat flows from the outside into the room (the summer condition).

Time series analysis tools in Visual Process Analytics: Cross correlation

Two time series and their cross-correlation functions
In a previous post, I showed you what the autocorrelation function (ACF) is and how it can be used to detect temporal patterns in student data. The ACF is the correlation of a signal with itself. Naturally, we are also interested in exploring the correlations among different signals.

The cross-correlation function (CCF) is a measure of similarity of two time series as a function of the lag of one relative to the other. The CCF can be imagined as a procedure of overlaying two series printed on transparency films and sliding them horizontally to find possible correlations. For this reason, it is also known as a "sliding dot product."

The upper graph in the figure to the right shows two time series from a student's engineering design process, representing about 45 minutes of her construction (white line) and analysis (green line) activities as she tried to design an energy-efficient house with the goal of cutting the net energy consumption to zero. At first glance, you probably have no clue what these lines represent or how they may be related.

But their CCFs reveal something much more striking. The lower graph shows two curves that peak at certain points. You probably have a lot of questions at this point, so let me explain.

Why are there two curves depicting the correlation of two time series, say, A and B? Because there is a difference between "A relative to B" and "B relative to A." Imagine that you print the series on two transparency films and slide one on top of the other: which one is on top matters. If you are looking for cause-effect relationships with the CCF, you can treat the antecedent time series as the cause and the subsequent one as the effect.
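To make the sliding-dot-product picture concrete, here is a minimal Python sketch. It illustrates the general technique, not the Visual Process Analytics code; the synthetic series and the 30-step delay are made up for the example.

```python
import numpy as np

def ccf(a, b, max_lag):
    """Correlate a[t] with b[t + k] for lags k in [-max_lag, max_lag].
    A peak at a positive k suggests that b tends to follow a by k steps."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return [(k, float(np.mean(a[max(0, -k):n - max(0, k)] *
                              b[max(0, k):n - max(0, -k)])))
            for k in range(-max_lag, max_lag + 1)]

rng = np.random.default_rng(42)
construction = rng.normal(size=500)                    # series "A"
analysis = np.roll(construction, 30) + 0.5 * rng.normal(size=500)  # "B"
lag, peak = max(ccf(construction, analysis, 60), key=lambda kv: kv[1])
print(lag, round(peak, 2))   # ~ (30, 0.9): B follows A by about 30 steps
```

Swapping the two arguments mirrors the lag axis, which is exactly why the figure shows two curves.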

What does a peak in the CCF mean, anyway? It points you to where the more interesting things may lie. In the figure of this post, this particular student's construction activities were significantly followed by analysis activities on about four occasions (two of them within 10 minutes), but her analysis activities were significantly followed by construction activities only once (after 10 minutes).

The National Science Foundation funds SmartCAD—an intelligent learning system for engineering design

We are pleased to announce that the National Science Foundation has awarded the Concord Consortium, Purdue University, and the University of Virginia a $3 million, four-year collaborative project to conduct research and development on SmartCAD, an intelligent learning system that informs students' engineering design with automatic feedback generated through computational analysis of their work.

Engineering design is one of the most complex learning processes because it builds on multiple layers of inquiry, involves creating products that meet multiple criteria and constraints, and requires the orchestration of mathematical thinking, scientific reasoning, systems thinking, and sometimes computational thinking. Teaching and learning engineering design have become important now that it is officially part of the Next Generation Science Standards in the United States, which require every student to learn and practice engineering design in every science subject at every level of K-12 education.
Figure 1

In typical engineering projects, students are challenged to construct an artifact that performs specified functions under constraints. What makes engineering design different from other design practices, such as art design, is that engineering design must be guided by scientific principles and the end products must operate predictably based on science. A common problem observed in students' engineering design activities is that their design work is insufficiently informed by science, reducing engineering design to drawing or crafting. To circumvent this problem, engineering design curricula often encourage students to learn or review the related science concepts and practices before they put the design elements together into a product. After students create a prototype, they test and evaluate it against the governing scientific principles, which, in turn, gives them a chance to deepen their understanding of those principles. This common approach is illustrated in the upper image of Figure 1.

There is a problem with this common approach, however. Exploring the form-function relationship is a critical inquiry step toward understanding the underlying science. To determine whether a change of form results in a desired function, students have to build and test a physical prototype or rely on the opinions of an instructor. This creates a delay in getting feedback at the most critical stage of the learning process, slowing the iterative cycle of design and cutting short the exploration of the design space. As a result of this delay, experimenting with and evaluating "micro ideas" (very small stepwise ideas, such as investigating one design parameter at a time) through building, revising, and testing physical prototypes becomes impractical in many cases. From the perspective of learning, however, it is often at this level of granularity that foundational science and engineering design ultimately meet.

Figure 2
All these problems can be addressed by supporting engineering design with a computer-aided design (CAD) platform that embeds powerful science simulations to give students formative feedback in a timely manner. Simulations based on solving fundamental equations in science, such as Newton's laws, model the real world accurately and connect many science concepts coherently. Such simulations can computationally generate objective feedback about a design, allowing students to rapidly test a design idea on a scientific basis. They also allow the connections between design elements and science concepts to be established explicitly through fine-grained feedback, supporting students in making informed decisions about one design element at a time, as illustrated by the lower image of Figure 1. These scientific simulations give the CAD software tremendous disciplinary intelligence and instructional power, transforming it into a SmartCAD system capable of guiding student design toward a more scientific end.

Despite these advantages, very little developmentally appropriate CAD software is available to K-12 students. Most CAD software used in industry not only is a science "black box" to students but also requires a cumbersome tool chain of pre-processors, solvers, and post-processors, making it extremely challenging to use in secondary education. The SmartCAD project will fill this gap with key educational features centered on guiding student design with feedback composed from simulations. For example, science simulations can analyze student design artifacts and compute their distances to specific goals, detecting whether students are zeroing in on those goals or going astray. The development of these features will also draw on decades of research on formative assessment of complex learning.
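To illustrate what "zeroing in versus going astray" detection might look like, here is a purely hypothetical Python sketch. The function name, the data, and the thresholds are my assumptions, not SmartCAD's actual design.

```python
def trend_toward_goal(net_energy_history, window=3):
    """Given net energy (kWh/year) after each design revision, report
    whether the distance to the zero-net-energy goal keeps shrinking."""
    distances = [abs(e) for e in net_energy_history]
    if len(distances) < window + 1:
        return "not enough revisions yet"
    recent, earlier = distances[-window:], distances[-window - 1:-1]
    improving = sum(r < e for r, e in zip(recent, earlier))
    return "zeroing in" if improving >= window - 1 else "going astray"

# Four revisions of a hypothetical design, each closer to net zero:
print(trend_toward_goal([4200, 3100, 2600, 1900]))   # -> "zeroing in"
```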

SimBuilding on iPad

SimBuilding (alpha version) is a 3D simulation game we are developing to provide a more accessible and fun way to teach building science. One good reason we are working on this game is that we want to teach building science concepts and practices to home energy professionals without having to invade someone's house or risk ruining it (well, we have to create or maintain some awful cases for teaching purposes, but what sane property owner would allow us to do that?). We also believe that computer graphics can create some cool effects that demonstrate the ideas more clearly, complementing hands-on learning. The project is funded by the National Science Foundation to support technical education and workforce development.

SimBuilding is based on three.js, a powerful JavaScript graphics library that renders 3D scenes within the browser using WebGL. This allows it to run on a variety of devices, including the iPad (though not on smartphones, which have less horsepower). The photos in this blog post show how it looks on an iPad Mini, with multi-touch support for navigation and interaction.

In its current version, SimBuilding supports only virtual infrared thermography. The player walks around in a virtual house, challenged to correctly identify home energy problems using a virtual IR camera. The virtual IR camera shows false-color IR images of a large number of sites when the player inspects them, from which the player must diagnose the causes of any problems, such as missing insulation, thermal bridges, air leakage, or water damage. In addition to the IR camera, a set of diagnostic tools is also provided, such as a blower-door system used to depressurize a house to identify infiltration. We will also provide links to our Energy2D simulations should players become interested in deepening their understanding of heat transfer concepts such as conduction, convection, and radiation.

SimBuilding is a collaborative project with New Mexico EnergySmart Academy at Santa Fe. A number of industry partners such as FLIR Systems and Building Science Corporation are also involved in this project. Our special thanks go to Jay Bowen of FLIR, who generously provided most of the IR images used to create the IR game scenes free of charge.

A stock-and-flow model for building thermal analysis

Figure 1. A stock-and-flow model of building energy.
Our Energy3D CAD software has two built-in simulation engines for performing solar energy analysis and building thermal analysis. I have extensively blogged about solar energy analysis using Energy3D. This article introduces building thermal analysis with Energy3D.

Figure 2. A colonial house.
The current version of the building energy simulation engine is based on a simple stock-and-flow model of building energy. Viewed from the perspective of system dynamics (a subject that studies the behavior of complex systems), the total thermal energy of a building is a stock, and the energy gains or losses through its various components are flows. These gains and losses usually occur via energy exchange between the building and the environment through those components. For instance, the solar radiation that shines into a building through its windows is an input; the heat transfer through its walls may be an input or an output, depending on the temperature difference between the inside and the outside.

Figure 3. The annual energy graph.
Figure 1 illustrates how energy flows into and out of a building in the winter and in the summer. To maintain the temperature inside a building, the thermal energy it contains must remain constant: any shortage of thermal energy must be compensated for, and any excess must be removed. This is done through heating and air conditioning systems, which, together with ventilation systems, are commonly known as HVAC systems. Based on the stock-and-flow model, we can predict the energy cost of heating and air conditioning by summing the energy flows from heat transfer, solar radiation, and energy generation over all the components of the building (walls, windows, roofs, and so on) and over a certain period of time, such as a day, a month, or a year.
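As a back-of-the-envelope illustration of summing the flows over components, here is a small Python sketch. The U-values, areas, and gains are assumptions for illustration; Energy3D's engine is more detailed.

```python
# Assumed component U-values in W/(m^2*K) and areas in m^2.
U = {"wall": 0.28, "window": 2.0, "roof": 0.2}
area = {"wall": 120.0, "window": 15.0, "roof": 80.0}

def hvac_power(t_inside, t_outside, solar_gain):
    """Flow needed to hold the stock (indoor thermal energy) constant,
    in watts: positive means heating, negative means cooling."""
    losses = sum(U[c] * area[c] * (t_inside - t_outside) for c in U)
    return losses - solar_gain

print(hvac_power(20.0, -5.0, 500.0))   # winter day: heating load
print(hvac_power(20.0, 35.0, 800.0))   # summer day: negative -> cooling
```

Summing such instantaneous flows over days, months, or a year is, in essence, how the stock-and-flow model yields the energy costs described above.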

Figure 2 shows the solar radiation heat map of a house and the distribution of the heat flux density over its building envelope. Figure 3 shows the results of the annual energy analysis for the house shown in Figure 2.

More information can be found in Chapter 3 of Energy3D's User Guide.

Beautiful Chemistry won the Vizzies Award

The National Science Foundation and Popular Science magazine have announced that “Beautiful Chemistry” won the Experts' Choice Award for Video at the 2015 Visualization Challenge, known as the Vizzies. According to Popular Science,
For many, the phrase “chemical reactions” conjures memories of tedious laboratory work and equations scribbled on exams. But Yan Liang, a professor at the University of Science and Technology of China in Hefei, sees art in the basic science. Last September, Liang and colleagues launched Beautiful Chemistry to highlight aesthetically pleasing chemistry. Their video showcases crystallization, fluorescence, and other reactions or structures shot in glorious detail. Liang says finding experiments that meet their visual standards has been a challenge. “Many reactions are very interesting, but not beautiful,” he says. “But sometimes, when shot at close distance without the distraction of beakers or test tubes, ordinary reactions such as precipitation can be very beautiful.”
Beautiful Chemistry is the first of the Beautiful Science Series that Prof. Liang has been planning. The series will include two new titles, Beautiful Simulations and Beautiful Infrared, which we will co-produce with Prof. Liang this summer while he visits Boston.

Congratulations to Prof. Liang for this amazing work!

The deception of unconditionally stable solvers

Unconditionally stable solvers for time-dependent ordinary or partial differential equations are desirable in game development because they are highly resilient to player actions: they never "blow up." In the entertainment industry, unconditionally stable solvers for creating visual fluid effects (e.g., flow, smoke, or fire) in games and movies were popularized by Jos Stam's 1999 paper "Stable Fluids."

Figure 1: Heat conduction between two objects.
A solver explodes when the error generated in a numerical procedure is amplified from iteration to iteration and grows exponentially. This occurs especially when the differential equation is stiff. A stiff equation often contains one or more terms that change very rapidly in space or time. For example, a sudden change of temperature between two touching objects (Figure 1) creates what is known in mathematics as a singularity (a jump discontinuity, to be more specific). Even if the system described by the equation has many other terms that do not behave like this, one such term is enough to crash the whole solver if it is linked to the other terms directly or indirectly. To avoid this breakdown, a very small time step must be used, which often makes the simulation too slow to be useful for games.

The above problem typically occurs with the explicit method in the family of finite-difference methods (FDMs) commonly used to solve time-dependent differential equations. There is a magic bullet for this problem: the implicit method. Its secret is that it introduces numerical diffusion, an unphysical mechanism that causes errors to dissipate before they grow uncontrollably. Many unconditionally stable solvers use the implicit method, allowing the user to take much larger time steps to speed up the simulation.
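Here is a minimal sketch of the difference in Python: a generic 1D heat-conduction toy, not Energy2D's solver. The grid size, periodic ends, and time step are assumptions.

```python
import numpy as np

# 1D heat equation u_t = alpha * u_xx, starting from a jump discontinuity
# (two touching objects at different temperatures).
n, alpha, dx = 50, 1.0, 1.0
u0 = np.where(np.arange(n) < n // 2, 100.0, 0.0)

def explicit_step(u, dt):
    r = alpha * dt / dx**2              # explicit FTCS: unstable if r > 0.5
    return u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def implicit_step(u, dt):
    r = alpha * dt / dx**2              # backward Euler: stable for any r
    A = ((1 + 2 * r) * np.eye(n)
         - r * np.eye(n, k=1) - r * np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = -r            # periodic ends, for brevity
    return np.linalg.solve(A, u)

u_exp, u_imp = u0.copy(), u0.copy()
for _ in range(100):
    u_exp = explicit_step(u_exp, dt=1.0)   # r = 1 > 0.5: error explodes
    u_imp = implicit_step(u_imp, dt=1.0)   # bounded, but diffusive
print("explicit max:", np.abs(u_exp).max())   # astronomically large
print("implicit max:", np.abs(u_imp).max())   # still on the order of 100
```

With the same oversized time step, the explicit result blows up while the implicit one stays perfectly calm, which is exactly the deceptive stability this post is about.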

There ain't no such thing as a free lunch, however. It turns out that we cannot have both speed and accuracy at the same time (efficiency and quality are often at odds, as we have all learned from life experience). Worse, we may even be deceived by the stability of an unconditionally stable solver and never question the validity of the predicted results. If the error does not drive the solver nuts and the visuals look fine, the result must be good, right?

Figure 2: Predicted final temperature vs. time step.
Not really.

The default FDM solver in Energy2D for simulating thermal conduction uses the implicit method as well. As a result, it never blows up, no matter how large the time step is. While this provides a good user experience, you must be cautious if you are using it in serious engineering work that requires not only numerical stability but also numerical reliability (in games we normally do not care about accuracy as long as the visuals look entertaining, but engineering is a precision science). In the following, I will explain the problems using very simple simulations:

1. Inaccurate prediction of steady states

Figure 3. Much longer equilibration with a large time step.
Figure 1 shows a simulation in which two objects at different temperatures come into contact and thermal energy flows from the high-temperature object into the low-temperature one. The two objects have different heat capacities (another jump discontinuity, in addition to the difference in initial temperatures). As expected, the simulation shows the two objects approaching the same temperature, as illustrated by the convergence of the two temperature curves in the graph. If you increase the time step, this overall equilibration behavior does not change. Everything seems fine at this point. But if you look at the final temperature after the system reaches the steady state, you will find deviations from the exact result when the time step is larger than 0.1 second, as illustrated in Figure 2. The deviation stabilizes at about 24 °C, which is 4 °C higher than the exact result.
Figure 4. Accurate behavior at a small time step.

2. Inaccurate equilibration time

The inaccuracy at large time steps is not limited to steady states. Figure 3 shows that the time it takes the system to reach the steady state is more than 10 times longer (about 1.5 hours as opposed to roughly 0.1 hours, if you read the labels on the horizontal time axis of the graph) when we use a time step of 5 seconds instead of 0.05 second. The deceiving part is that the simulation appears to run equally quickly in both cases, which may fool your eyes until you look at the numerical outputs in the graphs.

3. Incorrect transient behaviors

Figure 5. Incorrect behavior at a very large time step.
With a more complex system, the transient behaviors can be affected even more significantly when a large time step is used. Figure 4 shows a case comparing thermal conduction through two materials of different thermal conductivities (wood vs. metal), with a small time step (1 second). Figure 5 shows that when a time step of 1,000 seconds is used, the wood initially appears more conductive than the metal, which, of course, is not correct. If the previous example with two touching objects suggests that simulation results can be quantitatively inaccurate at large time steps, this example shows that the results can also be qualitatively incorrect in some cases (which is worse).

The general advice is to always rerun your model with a few smaller time steps to check whether the results change significantly. You can use a large time step to set up and test your model rapidly, but you should run it at smaller time steps to validate your results.
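In code, such a convergence check can be as simple as the following sketch, which reuses u0 and implicit_step() from the earlier snippet: run the same model at a few time steps and compare the final states.

```python
import numpy as np

def final_state(dt, t_end=100.0):
    u = u0.copy()
    for _ in range(int(round(t_end / dt))):
        u = implicit_step(u, dt)
    return u

results = {dt: final_state(dt) for dt in (5.0, 1.0, 0.2)}
print(np.abs(results[5.0] - results[0.2]).max())  # big gap: distrust dt=5
print(np.abs(results[1.0] - results[0.2]).max())  # shrinking: converging
```

If successive refinements keep changing the answer, the large-time-step result should not be trusted.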

The purpose of this article is to inform you that there are certain issues with Energy2D simulations that you must be aware of if you are using the software for engineering purposes. If these issues are taken care of, Energy2D can be highly accurate for conduction simulations, as illustrated by this example demonstrating the conservation of energy in an isolated conductive system.

Energy2D and Quantum Workbench featured in Springer books

Two recently published Springer books have featured our visual simulation software, suggesting perhaps that these tools have broader impacts beyond their originally intended audiences. (Earlier, I blogged about the publication of the first scientific paper that used Energy2D to simulate geological problems.)

The German book "Faszinierende Physik" (Fascinating Physics) includes a series of screenshots from a 2D quantum tunneling simulation in our Quantum Workbench software, showing how wave functions split when they smash into a barrier. The lead author of the book told us in an email that he found the images generated by the Quantum Workbench "particularly beautiful."

Another book, "Simulation and Learning: A Model-Centered Approach," chose our Energy2D software as a showcase of how powerful scientific simulations can convey complex science and engineering ideas.

Quantum Workbench and Energy2D are based on solving the extremely complex partial differential equations that govern the quantum world and the macroscopic world, respectively. Despite the complexity of the math and computation, both programs present intuitive visualizations and support real-time interaction, so that anyone can mess around with them and discover rich scientific phenomena on the computer.