Visualizing thermal equilibration: IR imaging vs. Energy2D simulation

Figure 1
A classic experiment to show thermal equilibration is to put a small Petri dish filled with hot or cold water into a larger one filled with room-temperature tap water, as illustrated in Figure 1. Then stick one thermometer in the inner dish and another in the outer dish and take readings over time.

With a low-cost IR camera such as the FLIR C2 or FLIR ONE, this experiment becomes much more visual (Figure 2). Because an IR camera provides a full-field view of the experiment in real time, you get much richer information about the process than a graph of two converging curves plotted from the thermometer readings.
Figure 2

The complete equilibration process typically takes 10-30 minutes, depending on the initial temperature difference between the two dishes and the amount of water in the inner dish. A larger temperature difference or more water in the inner dish will require more time to reach thermal equilibrium.

Another way to show this process quickly is to use our Energy2D software to create a computer simulation (Figure 3). Such a simulation provides a visualization that resembles the IR imaging result. The advantage is that it runs very fast -- only 10 seconds or so are needed to reach thermal equilibrium. This allows you to test various conditions rapidly, e.g., changing the initial temperature of the water in either dish or changing the diameters of the dishes.
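If you are curious about what is under the hood, the core of such a simulation is the heat diffusion equation solved on a grid. Here is a minimal Python sketch of the idea -- not Energy2D's actual code (which also models convection and radiation), just an explicit finite-difference toy with assumed grid, temperatures, and material parameters:

```python
# Minimal 2D heat-diffusion toy (illustrative only): Energy2D's real solver
# also models convection and radiation, which dominate in actual water.
import numpy as np

N = 100                        # grid points per side (assumed)
dx = 1e-3                      # grid spacing: 1 mm (assumed)
alpha = 1.4e-7                 # thermal diffusivity of water, m^2/s
dt = 0.2 * dx**2 / alpha       # time step safely below the stability limit

T = np.full((N, N), 20.0)      # outer dish: room-temperature water (deg C)
y, x = np.mgrid[0:N, 0:N]
inner = (x - N / 2)**2 + (y - N / 2)**2 < (N / 6)**2
T[inner] = 60.0                # inner dish: hot water (assumed 60 deg C)

for step in range(5000):       # roughly two simulated hours
    lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
           - 4 * T[1:-1, 1:-1]) / dx**2   # 5-point Laplacian
    T[1:-1, 1:-1] += alpha * dt * lap     # boundary stays at room temperature

print(f"inner: {T[inner].mean():.1f} C, outer: {T[~inner].mean():.1f} C")
```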

Figure 3
Real-world experiments and computer simulations have their own pros and cons, and which one to use depends on your situation. As a scientist, I believe nothing beats real-world experiments in supporting authentic science learning, and we should favor them whenever possible. However, conducting real-world experiments requires a lot of time and resources, which makes them impractical to implement throughout a course. Computer simulations provide an alternative that gives students a sense of real-world experiments without the time and cost. The downside is that a computer simulation is, most of the time, an overly simplified scientific model that lacks the many layers of complexity and the many types of interactions we experience in reality. In a real-world experiment, there are always unexpected factors and details that must be attended to, and it is precisely these unexpected factors and details that create genuinely profound and exciting teachable moments. This important aspect of science is largely missing from computer simulations, even with a sophisticated computational fluid dynamics tool such as Energy2D.

Here is how I balance this trade-off: it is essential for students to learn simplified scientific models before they can explore complex real-world situations, because those models give students the frameworks needed to make sense of real-world observations. A fair strategy is to use simulations to teach the simplified models and then set aside time for students to conduct real-world experiments and learn to integrate and apply their knowledge of the models to solve real problems.

A side note: you may be wondering how well the Energy2D result agrees with the IR result quantitatively. This is an important question -- if the simulation is not a good approximation of the real-world process, it is not a good simulation, and one may challenge its usefulness even for learning purposes. Figure 4 shows a comparison from a test run. As you can see, while the result predicted by Energy2D agrees in trend with the results observed through IR imaging, there are details in the real data that may be caused by either human error in taking the data or thermal fluctuations in the room. What is more, after thermal equilibrium was reached, the water in both dishes continued to cool to room temperature and then below it due to evaporative cooling. The cooling to room temperature was modeled in the Energy2D simulation through a thermal coupling to the environment, but evaporative cooling was not.
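If you prefer a back-of-the-envelope picture of why the two curves converge and then drift together toward room temperature, a lumped two-body model captures the trend. The sketch below is my own toy, not Energy2D's solver; the coupling coefficients are assumptions, both dishes are treated as having equal heat capacity, and evaporative cooling is deliberately omitted, just as in the simulation:

```python
# Lumped two-body model: each dish exchanges heat with the other and with
# the room. Coefficients are assumed, both dishes are treated as having
# equal heat capacity, and evaporative cooling is omitted (as in Energy2D).
T_room = 20.0
T_in, T_out = 60.0, 20.0   # inner (hot) and outer (tap water) dishes, deg C
k_io = 0.004               # inner<->outer coupling rate, 1/s (assumed)
k_env = 0.0005             # dish<->room coupling rate, 1/s (assumed)
dt = 1.0                   # time step, s

for t in range(1801):      # 30 minutes
    q = k_io * (T_in - T_out)
    T_in += dt * (-q - k_env * (T_in - T_room))
    T_out += dt * (q - k_env * (T_out - T_room))
    if t % 600 == 0:
        print(f"t={t:4d}s  inner={T_in:5.1f} C  outer={T_out:5.1f} C")
```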

Figure 4

An infrared investigation of a Stirling engine

Figure 1
The year 2016 marks the 200th anniversary of Robert Stirling's important invention -- the Stirling engine. So I thought I should start this year's blogging with a commemorative article about this truly ingenious machine.

A Stirling engine is a closed-cycle heat engine that operates through the cyclic compression and expansion of air or another gas, driven by a temperature difference across the engine. In this way, it converts thermal energy into mechanical work.
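For a sense of scale, no heat engine -- Stirling engines included -- can beat the Carnot efficiency set by its two reservoir temperatures. A quick calculation with illustrative temperatures for a cup of hot water and room air (these are assumptions, not measurements from this post):

```python
# Carnot ceiling for any heat engine running between hot water and room air.
# Temperatures are illustrative, not measurements from this post.
T_hot = 90 + 273.15    # freshly poured hot water, K (assumed)
T_cold = 20 + 273.15   # room air, K (assumed)
eta_max = 1 - T_cold / T_hot
print(f"Carnot efficiency limit: {eta_max:.1%}")   # roughly 19%
```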

You can buy an awesome toy Stirling engine from Amazon (perhaps next Christmas's gift for some inquisitive minds). If you put it on top of a cup of hot water, this amazing machine will just run until the hot water cools to room temperature.

Figure 2
Curious about whether the Stirling cycle would actually accelerate the cooling process, I filled two identical mugs with hot water and covered one of them with the Stirling engine. Then I started the engine and observed the temperatures through an IR camera. It turned out that the mug covered by the engine maintained a temperature about 10 °C higher than the open mug over roughly 30 minutes of observation. If you have a chance to do this experiment, you would probably be surprised. The flywheel of the Stirling engine seems to announce that it is working very hard, spinning fast and making a lot of noise. But all that energy, visible and audible as it is, is no match for the thermal energy lost through evaporation from the open mug of hot water (Figure 1).

How does the engine compare with plain heat transfer? I found a metal box of approximately the same size and thickness as our Stirling engine. I refilled the two mugs with hot water, covered one with the metal box and the other with the Stirling engine, then started the engine and tracked the temperatures through the IR camera. It turned out that the rates of heat loss from the two mugs were about the same over roughly 30 minutes of observation. What this really means is that the energy that drove the engine was very small compared with the thermal energy lost to the environment through heat transfer (Figure 2).

This is understandable, because the speed of the flywheel is only a small fraction of the average speed of the gas molecules (which is about the speed of sound or higher). This investigation also suggests that the Stirling engine is very efficient. Had we insulated the mug, it would have run for hours.
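To put rough numbers on that comparison, kinetic theory gives the molecular speed, while the flywheel size and rotation rate below are hypothetical values for a toy engine:

```python
# Kinetic theory vs. flywheel: rms speed of air molecules compared with
# the rim speed of a toy flywheel. Flywheel size and rpm are hypothetical.
import math

R = 8.314        # gas constant, J/(mol K)
M = 0.029        # mean molar mass of air, kg/mol
T = 293.0        # room temperature, K
v_rms = math.sqrt(3 * R * T / M)          # ~500 m/s

rpm, radius = 600.0, 0.05                 # assumed: 600 rpm, 5 cm flywheel
v_rim = 2 * math.pi * radius * rpm / 60   # ~3 m/s

print(f"molecules: ~{v_rms:.0f} m/s, flywheel rim: ~{v_rim:.1f} m/s")
```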

Chemical imaging using infrared cameras

Figure 1: Evaporative cooling
Scientists have long relied on powerful imaging techniques to see things invisible to the naked eye and thus advance science. Chemical imaging is a technique for visualizing chemical composition and dynamics in time and space as actual events unfold. In this sense, infrared (IR) imaging is a chemical imaging technique, as it allows one to see temporal and spatial changes of temperature distribution and, just as in other chemical imaging techniques, infer what is occurring at the molecular level based on this information.

Figure 2: IR imaging
Most IR cameras are sensitive enough to pick up a temperature difference of 0.1°C or less. This sensitivity makes it possible to detect certain effects from the molecular world. Figure 1 provides an example that suggests this possibility.

This experiment, which concerns the evaporation of water, could not be simpler: just pour some room-temperature water into a plastic cup, leave it for a few hours, and then aim an IR camera at it. In stark contrast to the thermal background, the whole cup remains 1-2 °C cooler than the room (Figure 2). How much evaporation is needed to keep the cup this cool? Let's do a simple calculation. Our measurement showed that in a typical dry and warm office environment in the winter, a cup of water (10 cm in diameter) loses approximately six grams of water in 24 hours. That is an evaporation rate of 7×10⁻⁵ g/s, or 7×10⁻¹¹ m³/s. Dividing by the area of the cup mouth, 0.00785 m², we find that the layer of water that evaporates each second is 8.9 nm thick -- roughly the length of only 30 water molecules lined up shoulder to shoulder! It is amazing that the evaporation of this tiny amount of water at such a slow rate (a second is a very long time for molecules) suffices to sustain a temperature difference of 1-2 °C for the entire cup.
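The arithmetic above is easy to verify; this snippet simply replays the numbers given in the text (the ~0.3 nm molecular diameter is the only added assumption):

```python
# Replay the evaporation arithmetic from the text.
import math

rate_g = 6.0 / (24 * 3600)         # 6 g lost per day -> ~7e-5 g/s
rate_m3 = rate_g * 1e-6            # water is ~1 g/cm^3 -> m^3/s
area = math.pi * 0.05**2           # mouth of a 10-cm-diameter cup, m^2
layer = rate_m3 / area             # thickness evaporated per second, m

print(f"{rate_g:.1e} g/s; layer: {layer * 1e9:.1f} nm per second "
      f"(~{layer / 0.3e-9:.0f} molecules, assuming ~0.3 nm per molecule)")
```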

This simple experiment actually raises more questions than it answers. Based on the latent heat of vaporization of water, which is about 2265 J/g, we estimate that the rate of energy loss through evaporation is only 0.16 J/s. This rate of energy loss should have a negligible effect on the 200 g of water in the cup, as the specific heat of water is 4.186 J/(g·°C). So where does this cooling effect come from? How does it persist? Would the water temperature be even lower if there were less water in the cup? What would the temperature difference be if the room temperature changed? These questions pose great opportunities to engage students in proposing hypotheses and testing them with further experiments. It is through the quest for the answers that students learn to think and act like scientists.
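Continuing the back-of-the-envelope check with the figures from the text, the implied cooling rate really is tiny, which makes the persistent 1-2 °C difference all the more intriguing:

```python
# Energy budget of the evaporating cup, using the figures from the text.
L = 2265.0      # latent heat of vaporization, J/g
c = 4.186       # specific heat of water, J/(g C)
m = 200.0       # mass of water in the cup, g
rate_g = 7e-5   # evaporation rate, g/s

power = rate_g * L          # ~0.16 J/s carried away by evaporation
cooling = power / (m * c)   # instantaneous cooling rate, deg C per second
print(f"{power:.2f} J/s -> only {cooling * 3600:.2f} C per hour")
```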

IR imaging is an ideal tool for guided inquiry, as it eliminates tedious data collection procedures and lets students focus on data analysis. In the practice of inquiry, data analysis is viewed as more important than data collection in helping students develop their thinking skills and conceptual understanding. Although this cooling effect could also be investigated with a thermometer, students' perception might be quite different. An IR camera immediately shows that the entire cup, not just the water surface, is cooler. Seeing the bulk of the cup in blue may prompt students to think more deeply and invite new questions, whereas a single temperature reading from a thermometer may not deliver the same experience.

Scientists use Energy2D to simulate the effect of micro flow on molecular self-assembly

Copyright: ACS Nano, American Chemical Society
Self-assembled peptide nanostructures have unique properties that lead to applications in electrical devices and functional molecular recognition. Exactly how to control the self-assembly process in solution is a hot research topic. Since a solution is a fluid, a little fluid mechanics is needed to understand how micro flows affect the self-assembly of the peptide molecules.

ACS Nano, a journal of the American Chemical Society, published a research article on December 11 that includes results obtained with our Energy2D software, used to simulate turbulent conditions in which non-uniform plumes rising from the substrate result in the formation of randomly arranged diphenylalanine (FF) rods and tubes. The paper, titled "Morphology and Pattern Control of Diphenylalanine Self-Assembly via Evaporative Dewetting," is a collaboration between scientists from Nanjing University and the City University of Hong Kong.

We are absolutely thrilled by the fact that many scientists have used Energy2D in their work. As far as we know, this is the second published scientific research paper that has used Energy2D.

On a separate front, many engineers are already using Energy2D to aid their design work. For example, in a German forum about renewable energy, an engineer recently used the tool to make sense of his experimental results with various air collector designs. He reported that the simulation results are "confirmed by the experiences of several users: pressure losses and less volume of air in the blowing operation" (translated from German with Google Translate).

It is these successful applications of Energy2D in the real world that will make it a relevant tool in science and engineering for a very long time.

Energy3D V5.0 released

Full-scale building energy simulation
Insolation analysis of a city block
We are pleased to announce a milestone version of our Energy3D CAD software. In addition to fixing numerous bugs, Version 5.0 includes many new features that enhance the software's already powerful concurrent design, simulation, and analysis capabilities.

For example, we have added cut/copy/paste in 3D space, which greatly eases 3D construction. With this functionality, laying an array of solar panels on a roof is as simple and intuitive as copying and pasting an existing solar panel. Creating a village or a city block is also easier, as a building can be copied and pasted anywhere on the ground -- you can create a number of identical buildings with copy and paste and then work to make them different.

Insolation analysis of various houses
Compared with previous versions, the properties of every building element can now be set individually using the corresponding popup menu and window. Being able to set the properties of an individual element is important, as it is often a good idea for fenestration on different sides of a building to have different solar heat gain coefficients. The user interface for setting the solar heat gain coefficient, for instance, lets the user apply the value to the selected window, to all the windows on the selected side, or to the entire building.

To simulate machine-learning thermostats such as Google's Nest Thermostat and test the assertion that they can help save energy, we have added programmable thermostats. We have also added a geothermal model that allows more accurate simulation of heat exchange between a building and the ground. New efforts to model weather and landscape more accurately are already under way.
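Energy3D has its own interface for this, but the idea behind a programmable setback thermostat can be sketched in a few lines: make the setpoint a function of the hour of day, and the reduced indoor-outdoor temperature difference during setback hours cuts heat loss. All coefficients below are assumptions, and the toy assumes the indoor temperature tracks the setpoint instantly:

```python
# Toy setback-thermostat schedule (conceptual sketch, not Energy3D's
# interface). Assumes heat loss ~ indoor-outdoor temperature difference
# and that the indoor temperature tracks the setpoint instantly.
def setpoint(hour):
    """Heating setpoint (deg C) by hour: setback when asleep or away."""
    if 6 <= hour < 9 or 17 <= hour < 22:
        return 21.0      # occupied and awake
    return 16.0          # night/work-hours setback

T_out = 0.0              # outdoor temperature, deg C (assumed constant)
UA = 200.0               # overall heat-loss coefficient, W/K (assumed)

def daily_heat_loss_kwh(sp):
    return sum(UA * (sp(h) - T_out) * 3600 for h in range(24)) / 3.6e6

flat = daily_heat_loss_kwh(lambda h: 21.0)
prog = daily_heat_loss_kwh(setpoint)
print(f"constant 21 C: {flat:.1f} kWh/day, programmed: {prog:.1f} kWh/day")
```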

The goal of Energy3D is to create a software platform that bridges education and industry -- we are already working with leading home energy companies to bring this tool to schools and workplaces. This synergy has led to some interesting and exciting business opportunities that mutually benefit education and industry.

A bonus of this version is that it no longer requires users to install Java. We have provided a Windows installer and a Mac installer that work just like any other familiar software installer. Users should now find it easy to install Energy3D, compared with the previously problematic Java Web Start installer.

Building data science fluency using games

The National Science Foundation has awarded the Concord Consortium a three-year Cyberlearning grant to develop and test new data science games for high school biology, chemistry, and physics, and research how learners conceive of and learn with data. The Data Science Games project builds on prior work, which led to the invention of a new genre of learning technology—a “data science game.”

The use of games for education is a growing field with significant promise for STEM learning. Games provide a powerful means of motivation and engagement and align with many STEM learning goals. The Data Science Games project makes novel and creative use of the data generated as students play digital games. When students play a data science game, their gameplay actions generate data—data that is essential to the game itself. To succeed at a data science game, students must visualize, understand, and properly apply the data their game playing has generated in order to “level up” and progress within the game. As they visualize and analyze the data, planning and plotting new, evolving strategies, students learn the fundamentals of data science.
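To make the idea concrete, here is a generic sketch -- not the actual game's code -- of the kind of analysis a player of a Markov-themed rock-paper-scissors game might perform: tally the opponent's move transitions, predict the most likely next move, and throw its counter.

```python
# Generic first-order Markov analysis for rock-paper-scissors
# (an illustration of the idea, not the actual game's code).
from collections import Counter, defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(history):
    """Predict the opponent's next move from transition counts; counter it."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    last = history[-1]
    prediction = (transitions[last].most_common(1)[0][0]
                  if transitions[last] else last)
    return BEATS[prediction]

log = ["rock", "paper", "rock", "paper", "rock"]  # hypothetical opponent log
print(counter_move(log))  # alternating opponent -> expect paper -> scissors
```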

The new data science games will be embedded in our open source Common Online Data Analysis Platform (CODAP). Data from the games will flow seamlessly into CODAP thanks to innovations that leverage advances in the interoperability of components embedded in browsers, new capabilities for data visualization using HTML5, and recent innovations in design of interfaces compatible with both PC-based browsers and touch devices.

Project research will investigate ways this new genre of educational technology can be integrated into classroom learning. We will identify and characterize learner perceptions of data, including how learners see flat, hierarchical, and network structures as emerging from realistic problems; questions learners ask with data; and learning trajectories for restructuring and visualization of data.

The project will also produce guidelines for making use of data science games across a range of grade levels and subject matter. Data Science Games will thus provide both models and templates of how to integrate learning of data science into existing content areas, helping to grow the next generation of data scientists.

Play Roshambo against the evil Dr. Markov (log in as guest). If you win, you can save Madeline the dog. Improve your odds by analyzing Markov’s moves in a graph.


Solarizing a house in Energy3D

Fig. 1 3D model of a real house near Boston (2,150 sq ft).
On August 3, 2015, President Obama announced the Clean Power Plan – a landmark step in reducing carbon pollution from power plants and taking real action on climate change. Producing clean energy from rooftop solar panels can greatly mitigate the problems of current power generation. There are more than 130 million homes in the US. These homes, along with commercial buildings, consume more than 40% of the country's total energy. With improving generation and storage technologies, a large portion of that energy could be generated by the buildings themselves.

A practical question is: How do we estimate the energy a house could potentially generate if we put solar panels on its roof? This estimate is key to convincing homeowners to install solar panels, or a bank to finance them. You wouldn't buy something without knowing its benefits, would you? This is why solar analysis and evaluation are so important to the solar energy industry.

The problem is: Every building is different! The location, orientation, landscape, shape, roof pitch, and so on vary from one building to another. And there are over 100 MILLION of them around the country! To make matters even more complicated, we are talking about annual gains, which require the solar analyst to consider solar radiation and landscape changes across all four seasons. With all these complexities, no one can realistically design a solar panel layout and calculate its output without a 3D simulation tool.

There is already solar design and prediction software from companies like Autodesk. But for three reasons, we believe our Energy3D CAD software will be a relevant tool in this marketplace. First, our goal is to enable everyone to use Energy3D without the level of training most engineers must go through to master other CAD tools. Second, Energy3D is completely free of charge for everyone. Third, the accuracy of Energy3D's solar analysis is comparable with that of the others (and is improving as we speak!).

With these advantages, it is now possible for homeowners to evaluate the solar potential of their houses INDEPENDENTLY, using an incredibly powerful scientific simulation tool that has been designed for the layperson.

In this post, I will walk you through the solar design process in Energy3D step by step.

1) Sketch up a 3D model of your house

Energy3D has an easy-to-use interface for quickly constructing your house in a 3D environment. With this interface, you can create an approximate 3D model of your house without having to worry about details, such as interiors, that are not important for solar analysis. Improvements to this user interface are on the way. For example, we just added a handy feature that allows users to copy and paste in 3D space. This new feature can be used to quickly create an array of solar panels by copying a panel and hitting Ctrl/Command+V a few times. As trees affect the performance of your solar panels, you should also model the surroundings by adding various tree objects in Energy3D. Figure 1 shows a 3D model of a real house in Massachusetts, surrounded by trees. Notice that this house has a T shape and its longest side faces southeast, which means the other sides of its roof may be worth checking as well.
Fig. 2 Daily solar radiation in four seasons

2) Examine the solar radiation on the roof in four seasons

Once you have a 3D model of your house and the surrounding trees, you should look at the solar radiation on the roof throughout the year. To do this, change the date and run a solar simulation for each date of interest. For example, Figure 2 shows the solar radiation heat maps of the Massachusetts house on 1/1, 4/1, 7/1, and 10/1. Note that the trees have no leaves from roughly the beginning of December to the end of April, meaning that their impact on the performance of the solar panels is minimal in the winter.

The conventional wisdom is that the south-facing side of a roof is a good place for solar panels. But very few houses face exactly south. This is why we need a simulation tool to analyze real situations. By looking at the heat maps in Figure 2, we can quickly see that the southeast-facing side of this roof is the optimal side for solar panels, and also that the lower part of that side is shaded significantly by the surrounding trees.
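The geometry behind these heat maps is worth a quick sketch: the direct-beam radiation a roof plane collects scales with the cosine of the angle between the sun direction and the roof normal. The toy calculation below compares roof azimuths at a single assumed sun position; Energy3D, of course, integrates over whole days and accounts for shading:

```python
# Direct-beam collection on a tilted roof scales with cos(incidence angle).
# Single assumed sun position; Energy3D integrates over days and shading.
import math

def unit(az_deg, alt_deg):
    """Unit vector from azimuth (deg, clockwise from north) and altitude."""
    az, alt = math.radians(az_deg), math.radians(alt_deg)
    return (math.sin(az) * math.cos(alt),
            math.cos(az) * math.cos(alt),
            math.sin(alt))

sun = unit(180, 40)        # assumed: sun due south, 40 degrees high
tilt = 30                  # assumed roof pitch, degrees

for facing in (135, 180, 225, 270):   # SE, S, SW, W roof faces
    normal = unit(facing, 90 - tilt)  # roof normal tilted toward 'facing'
    cos_inc = max(0.0, sum(s * n for s, n in zip(sun, normal)))
    print(f"roof facing {facing:3d} deg: relative direct beam {cos_inc:.2f}")
```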

Fig. 3 Solarizing the house
3) Add, copy, and paste solar panels to create arrays

Having decided on which side to lay the solar panels, the next step is to add them. You can drop them one by one, or drop the first one near an edge and then copy and paste it to easily create an array. Repeat this for three rows, as illustrated in Figure 3. Note that I chose solar panels with a light-to-electricity conversion efficiency of 15%, which is about average in the current market; newer panels may offer higher efficiency.

The three rows hold a total of 45 solar panels (3 × 5 feet each). From Figure 2, it also seems that the west-leaning roof of the T wing may be a less-than-optimal place to go solar. Let's put a 2 × 5 array of panels on that side as well. If the simulation shows that they are not worth the money, we can simply delete them from the model. This is the power of simulation -- you do not have to pay a penny for anything you do to a virtual house (and you do not have to wait a year to evaluate the effect of any change on its yearly energy usage).
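As a sanity check on what one such panel can deliver, here is a rough estimate; the peak-sun-hours figure is an assumed average for Massachusetts, so expect only ballpark agreement with the simulated outputs reported below:

```python
# Rough per-panel estimate for a 15%-efficient, 3 x 5 ft panel.
area = 15 * 0.0929        # 15 sq ft in m^2
efficiency = 0.15
nameplate = area * efficiency * 1000.0   # W at 1000 W/m^2 test irradiance

peak_sun_hours = 4.0      # assumed daily average for Massachusetts
annual_kwh = nameplate * peak_sun_hours * 365 / 1000
print(f"nameplate: {nameplate:.0f} W, rough yield: {annual_kwh:.0f} kWh/year")
```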

4) Run annual energy analysis for the building

Fig. 4 Energy graphs with added solar panels
Now that we have put up the solar panels, we want to know how much energy they can produce. In Energy3D, this is as simple as selecting "Run Annual Energy Analysis for Building..." under the Analysis Menu. A graph will display the progress while Energy3D automatically performs a 12-month simulation and updates the results (Figure 4).

I recommend that you run this analysis every time you add a row of solar panels, to keep track of the gain from each additional row. For example, Figure 4 shows the change in solar output each time we add a row (the last addition is the 10 panels on the west-facing side of the T-wing roof). The annual results are listed below:
  • Row 1, 15 panels, output: 5,414 kWh --- 361 kWh/panel
  • Row 2, 15 panels, output: 5,018 kWh (total: 10,494 kWh) --- 335 kWh/panel
  • Row 3, 15 panels, output: 4,437 kWh (total: 14,931 kWh) --- 296 kWh/panel
  • T-wing 2x5 array, 10 panels, output: 2,805 kWh (total: 17,736 kWh) --- 281 kWh/panel
These results suggest that the 30 panels in Rows 1 and 2 are probably a good solution for this house -- they generate a total of 10,494 kWh per year. But if better (i.e., higher-efficiency) and cheaper solar panels become available in the future, adding panels to Row 3 and the T wing may not be such a bad idea.
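The diminishing returns are easy to see by computing the marginal output per panel from the numbers above:

```python
# Marginal output per panel for each addition, from the results above.
rows = [("Row 1", 15, 5414), ("Row 2", 15, 5018),
        ("Row 3", 15, 4437), ("T-wing 2x5", 10, 2805)]
for name, panels, kwh in rows:
    print(f"{name:>10}: {kwh / panels:.1f} kWh per panel per year")
```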

Fig. 5 Comparing solar panels at different positions
5) Compare the solar gains of panels at different positions

In addition to analyzing the energy performance of the entire house, Energy3D also allows you to select individual elements and compare their performance. Figure 5 shows a comparison of four solar panels at different positions. The graph shows that the middle positions in Row 3 are not good spots for solar panels. Based on this information, we can go back, remove those panels, and redo the analysis to see whether the average output of Row 3 improves.

After removing the five solar panels in the middle of Row 3, the total output drops to 16,335 kWh, meaning that the five removed panels produced on average only about 280 kWh each.

6) Decide which positions are acceptable for installing solar panels

The analysis results thus far should give you enough information to decide whether it is worth your money to solarize this house and, if so, how. The real decision depends on the cost of electricity in your area, your budget, and your expected return on investment. With the price of solar panels continuing to drop, their quality continuing to improve, and the pressure to reduce fossil fuel usage continuing to grow, building solarization is becoming more and more viable.

Solar analysis using computational tools is typically considered the job of a professional engineer, as it involves complicated computer-based design and analysis. The high cost of professional engineering makes analyzing and evaluating millions of buildings economically unfavorable. But Energy3D reduces this task to something that even children can do. This could lead to a paradigm shift in the solar industry, fundamentally changing the way residential and commercial solar evaluation is conducted. We are very excited about this prospect and are eager to work with the energy industry to ignite this revolution.

Open invitation to software developers

CODAP Screenshot
Our Common Online Data Analysis Platform (CODAP) offers easy-to-use web-based software that makes it possible for students in grades 6 through college to visualize, analyze, and ultimately learn from data. Whether the source of data is a game, a map, an experiment, or a simulation, CODAP provides an immersive, exploratory experience with dynamically linked data representations, including graphs, maps, and tables. CODAP is not dependent on specific content, so data analysis can be integrated into math, science, history, or economics classrooms.

CODAP is built with HTML5, using JavaScript, HTML, and CSS3. Several open source libraries are part of CODAP, including SproutCore, jQuery, Raphaël, Leaflet, and a number of smaller libraries. CODAP uses SproutCore as its application framework. You can deploy CODAP as a static website with no server interaction. CODAP can be configured to store documents on your local device or integrated with an online server for cloud-based document management. It can also log user actions to a server specified in a configuration file.

Our goal is to create a community of curriculum and software developers committed to ensuring that students from middle school through college have the knowledge and skills to learn with data across disciplines. We need your help!

Get involved

Digital gaming will connect afterschool students with biotech mentors

Our nation’s future competitiveness and our citizens’ overall STEM literacy rely on our efforts to forge connections between the future workforce and the world of emerging STEM careers. Biotechnology, and genetics in particular, is a rapidly advancing area that will offer new jobs across the spectrum from technicians to scientists. A new $1.2 million National Science Foundation-funded project at the Concord Consortium will use Geniverse, an immersive digital game in which students put genetics knowledge into action as they breed dragons, to connect underserved students with local biotechnology professionals and strengthen student awareness of STEM careers.

East End House Students

Students from East End House enjoy collaborating on computer-based science activities.

Geniverse is our free, web-based software designed for high school biology that engages students in exploring heredity and genetics by breeding and studying virtual dragons. This game-like software allows students to undertake genetics experimentation with results that closely mimic real-world genetics. The new GeniConnect project will extend the gaming aspects of Geniverse and revise the content to more fully target middle school biology, introducing Geniverse to the afterschool environment.

The three-year GeniConnect project will develop and research a coherent series of student experiences in biotechnology and genetics involving game-based learning, industry mentoring, and hands-on laboratory work. Industry professionals from Biogen, Monsanto, and other firms will mentor afterschool students at East End House, a community center in East Cambridge, Massachusetts.

With researchers from Purdue University, we’ll explore how an immersive game and a connection to a real scientist can increase STEM knowledge, motivation, and career awareness of underserved youth. We will also develop and research a scalable model for STEM industry/afterschool partnerships, and produce a STEM Partnership Toolkit for the development of robust, educationally sound partnerships among industry professionals and afterschool programs. The Toolkit will be distributed to approximately 500 community-based organizations and afterschool programs nationally that are member organizations of the Alliance for Strong Families and Communities.

Geniverse Narrative

Beautiful graphics designed by FableVision Studios engage students in a compelling narrative. Students follow the arduous journey of their heroic character and suffering dragon to the Drake Breeder’s Guild.

Geniverse Lab

Students are welcomed into the Drake Breeder’s Guild where they will learn the tricks of the genetic trade. (Drakes are a model species that can help solve genetic mysteries in dragons, in much the same way as the mouse is a model species for human genetic disease.) Students are engaging in an authentic, experiment-driven approach to biology—in a fantastical world.

Launching a new interdisciplinary field of study in spoken language technology for education

A grant from the National Science Foundation will help launch a new interdisciplinary field of study in spoken language technology for education. The one-year “Building Partnerships for Education and Speech Research” project will unite the extensive education research and educational technology backgrounds at the Concord Consortium and SRI International’s Center for Technology in Learning (CTL) and bring them together with two of the strongest groups in spoken language technology research, the Speech Technology and Research (STAR) Laboratory at SRI and the Center for Robust Speech Systems (CRSS) at the University of Texas at Dallas.

Technologies for processing and understanding spoken language—such as speech recognition, detection of individual speakers, and natural language processing—have radically improved in recent years, though most people’s image of modern spoken language technology is colored by often-finicky interactions with Siri or Google products. In fact, many lesser-known technologies can now automatically and accurately detect many features of speech, including question asking, dialog interchanges, word counts, indications of emotion or stress, and specific spoken keywords.

However, educational research has barely begun exploring their potential to provide insight into, and eventually revolutionize, research areas as diverse as collaboration, argumentation, discourse analysis, emotion, and engagement. And capturing the most critical and substantive interactions during the teaching and learning process—the discourse and conversation among students, teachers, and mentors—remains elusive.

The central goal of this new project is to generate interest in and momentum toward the use of spoken language technologies in education research. The potential for such applied technologies is vast, and the broader impacts could be significant. As these technologies become established for use in improved education research and development, researchers will be able to better understand and target interventions, educators will be able to monitor and adjust their interactions with learners, and learners will be better informed of their learning progress.