Posts Tagged ‘mixed-reality’

Some thoughts on, and variations of, the Gas Frame (a natural user interface for learning gas laws)

August 14th, 2013 by Charles Xie
A natural user interface (NUI) is a user interface based on natural elements or natural actions. Interacting with computer software through a NUI simulates everyday experiences (such as swiping a finger across a touch screen to move a photo on the display, or simply "asking" a computer to do something through voice commands). Because of this resemblance, a NUI is intuitive to use and requires little or no time to learn. NUIs such as touch screens and speech recognition have become commonplace on new computers.

As the sensing capability of computers becomes more powerful and versatile, new types of NUI emerge. The last three years have witnessed the birth and growth of sophisticated 3D motion sensors such as Microsoft Kinect and Leap Motion. These infrared-based sensors can detect the user's body language within a physical space near a computer, with varying degrees of resolution. The remaining challenge is how to use the data to create meaningful interactions between the user and a piece of computer software.

Think about how STEM education can benefit from this wave of technological innovation. As scientists, we are especially interested in how these capabilities can be leveraged to improve learning experiences in science education. Thirty years of development, mostly funded by federal agencies such as the National Science Foundation, have produced a wealth of virtual laboratories (aka computational models or simulations) that are currently used by millions of students. These virtual labs, however, are often criticized for not being physically relevant and for not providing the hands-on experiences commonly viewed as necessary for practicing science. We now have an opportunity to partially remedy these problems by connecting virtual labs to physical reality through NUIs.

What would a future NUI for a science simulation look like? For example, if you teach physical sciences, you may have seen many versions of gas simulations that allow students to interact with them through some kind of graphical user interface (GUI). What would a NUI for interacting with a gas simulation look like? How would it transform learning? Our Gas Frame provides an example implementation that may give you something concrete to think about.

Figure 1: The Gas Frame (the default configuration).
In the default implementation (Figure 1), the Gas Frame uses three different kinds of "props" as the natural elements to control three independent variables of a gas: a warm or cold object to heat or cool the gas, a spring to exert force on a piston that contains the gas, and a syringe to add or remove gas molecules. The reason I call these objects "props" is that, as in filmmaking, they mostly serve as close simulations of the real things without necessarily performing the real functions (you don't want a prop gun to shoot real bullets, do you?).

The motions of the gas molecules are simulated using a molecular dynamics method and visualized on the computer screen. The volume of the gas is calculated in real time by the molecular dynamics engine based on the three physical inputs. In addition to the physical controls provided by the three props, a set of virtual controls is available on the screen for students to interact with the simulation, such as viewing the trajectory or the kinetic energy of a molecule. These virtual controls support interactions that are impossible in reality (no, we cannot see the trajectory of a single molecule in the air).
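For readers curious about what "a molecular dynamics method" boils down to in code, here is a minimal sketch of one integration step using the velocity Verlet algorithm. This is an illustration in TypeScript with hypothetical names, not the Gas Frame's actual engine, which also handles molecule-molecule forces, wall collisions, a thermostat, and the movable piston.

```typescript
// A minimal sketch of one molecular dynamics step (velocity Verlet, 2D).
// All names are hypothetical; this is not the actual Gas Frame engine.

interface Molecule {
  x: number; y: number;   // position
  vx: number; vy: number; // velocity
  fx: number; fy: number; // force accumulated this step
}

const MASS = 1;
const DT = 0.005; // time step

function step(molecules: Molecule[], computeForces: (m: Molecule[]) => void): void {
  // 1. Half-kick and drift using the forces from the previous step.
  for (const m of molecules) {
    m.vx += (0.5 * DT * m.fx) / MASS;
    m.vy += (0.5 * DT * m.fy) / MASS;
    m.x += DT * m.vx;
    m.y += DT * m.vy;
  }
  // 2. Recompute forces at the new positions (molecule-molecule and walls).
  computeForces(molecules);
  // 3. Second half-kick with the new forces.
  for (const m of molecules) {
    m.vx += (0.5 * DT * m.fx) / MASS;
    m.vy += (0.5 * DT * m.fy) / MASS;
  }
}
```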

The three props can control the gas simulation because a temperature sensor, a force sensor, and a gas pressure sensor are used to detect student interactions with them, respectively. The data from the sensors are then translated into inputs to the gas simulation, creating a virtual response to a real action (e.g., molecules are added or subtracted when the student pushes or pulls a syringe) and a molecular interpretation of the action (e.g., molecules run faster or slower when temperature increases or decreases).
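In code, that translation can be as simple as a polling loop that reads each probe and writes one simulation variable. The sketch below uses hypothetical TypeScript names purely for illustration; the actual Gas Frame reads Vernier sensors through a Java API (see the acknowledgements further down this page).

```typescript
// A minimal sketch of the sensor-to-simulation mapping described above.
// Names and the calibration constant are hypothetical.

interface GasSimulation {
  setTargetTemperature(kelvin: number): void; // thermostat for the molecules
  setPistonForce(newtons: number): void;      // external force on the piston
  setMoleculeCount(n: number): void;          // add or remove molecules
}

interface SensorReadings {
  temperature: number; // deg C, from the thermal contact
  force: number;       // N, from the spring
  pressure: number;    // kPa, from the syringe
}

// Called on every sensor poll (e.g., 10 times per second).
function applyReadings(sim: GasSimulation, r: SensorReadings): void {
  // Warm or cold prop -> temperature of the virtual gas.
  sim.setTargetTemperature(r.temperature + 273.15);
  // Spring -> force on the virtual piston.
  sim.setPistonForce(r.force);
  // Syringe -> number of molecules, scaled from the pressure reading.
  const MOLECULES_PER_KPA = 2; // hypothetical calibration constant
  sim.setMoleculeCount(Math.round(r.pressure * MOLECULES_PER_KPA));
}
```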

As in almost all NUIs, the sensors and the data they collect are hidden from students: students do not need to know that sensors are involved in their interactions with the gas simulation, and they do not need to see the raw data. This is unlike many other activities in which sensors play a central role in inquiry and must be explicitly explained to students (and the data they collect must be visually presented to students, too). There are definite advantages to using sensors as inquiry tools to teach students how to collect and analyze data. Sometimes we even go the extra mile and ask students to use a computer model to make sense of the data (like the simulation fitting idea I blogged about before). But that is not what the National Science Foundation funded innovators like us to do.

The NUIs for science simulations that we have developed in our NSF project all use sensors that have been widely used in schools, such as those from Vernier Software and Technology. This makes it possible for teachers to reuse existing sensors to run these NUI apps. This decision to build our NUI technology on existing probeware is essential for our NUI apps to run in a large number of classrooms in the future.

Figure 2: Variation I.
Considering that not all schools have all the types of sensors needed to run the basic version of the Gas Frame app, we have also developed a number of variations that use only one type of sensor in each app.

Figure 2 shows a variation that uses two temperature sensors, each controlling the temperature of the virtual gas in one compartment. The two compartments are separated by a movable piston in the middle. Increasing or decreasing the temperature of the gas in the left or right compartment, by heating or cooling the thermal contacts to which the sensors are attached, will cause the virtual piston to move accordingly, allowing students to explore the relationships among pressure, temperature, and volume through two thermal interactions in the real world.

Figure 3: Variation II.
Figure 3 shows another variation that uses two gas pressure sensors, each controlling the number of molecules of the virtual gas in one compartment through an attached syringe. As in Variation I, the two compartments are separated by a movable piston in the middle. Pushing or pulling the real syringes causes molecules to be added to or removed from the virtual compartments, allowing students to explore the relationships among the number of molecules, pressure, and volume through two tactile interactions.
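In both variations, the virtual piston drifts until the two compartments push on it equally hard. Under ideal-gas assumptions you can predict where it settles with a one-line formula. The sketch below (TypeScript, hypothetical names) shows the calculation; the molecular dynamics engine reproduces the same equilibrium on its own, without ever being told the gas laws.

```typescript
// Where does the piston settle? For an ideal gas, each compartment's
// pressure is P = N*k*T/V. With compartment volumes proportional to the
// piston position x and to (L - x), equal pressures give:
//   N1*T1/x = N2*T2/(L - x)  =>  x = L * N1*T1 / (N1*T1 + N2*T2)
// A sketch under ideal-gas assumptions, for illustration only.

function pistonEquilibrium(
  n1: number, t1: number, // molecule count and temperature (K), left side
  n2: number, t2: number, // molecule count and temperature (K), right side
  length: number          // total length of the two compartments
): number {
  const w1 = n1 * t1;
  const w2 = n2 * t2;
  return (length * w1) / (w1 + w2);
}

// Example: equal molecule counts, left side twice as hot as the right.
// The hotter gas claims two thirds of the total length.
console.log(pistonEquilibrium(100, 600, 100, 300, 1.0)); // 0.666...
```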

If you don't have that many sensors, don't worry -- both variations will still work if only one sensor is available.

I hear you asking: all this sounds fun, but so what? Will students learn more from it? If not, why go through the extra trouble, compared with using an existing GUI version that needs nothing but a computer? I have to confess that I cannot answer this question at the moment. But in the next blog post, I will try to explain our plan for figuring it out.

A mixed-reality gas lab

February 12th, 2013 by Charles Xie
In his Critique of Pure Reason, the Enlightenment philosopher Immanuel Kant asserted that “conception without perception is empty, perception without conception is blind. The understanding can intuit nothing, the senses can think nothing. Only through their unison can knowledge arise.” More than 200 years later, his wisdom is still enlightening our NSF-funded Mixed-Reality Labs project.

Mixed reality (more commonly known as augmented reality) refers to the blending of real and virtual worlds to create new environments where physical and digital objects co-exist and interact in real time, providing user experiences that are impossible in either the real or the virtual world alone. Mixed reality is a perfect technology for promoting the unison of perception and conception: perception happens in the real world, whereas conception can be enhanced by the virtual world. By knitting the two worlds together, we can build a pathway that leads from perceptual experiences to conceptual development.

We have developed and perfected a prototype of mixed reality for teaching the Kinetic Molecular Theory and the gas laws using our Frame technology. This Gas Frame uses three different types of sensors to translate user inputs into changes of variables in a molecular simulation on the computer: a temperature sensor detects thermal changes in the real world and changes the temperature of the gas molecules in the virtual world; a gas pressure sensor detects gas compression or decompression in the real world and changes the density of the gas molecules in the virtual world; and a force sensor detects force changes in the real world and changes the force on a piston in the virtual world. Because of this underlying linkage with the real world through the sensors, the simulation appears to be "smart" enough to detect user actions and react in meaningful ways.
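One practical detail such a linkage has to address is scale: a classroom temperature probe swings over a few tens of degrees, while the simulation may want a much wider range so that small thermal changes remain visible on screen. Here is a minimal sketch of that kind of rescaling, in TypeScript with made-up ranges; it is not the project's actual calibration.

```typescript
// Map a raw sensor reading from a real-world range onto a simulation
// range, clamped at both ends. Ranges below are hypothetical.

function rescale(
  value: number,
  realMin: number, realMax: number,
  simMin: number, simMax: number
): number {
  const t = Math.min(1, Math.max(0, (value - realMin) / (realMax - realMin)));
  return simMin + t * (simMax - simMin);
}

// Example: ice water (~0 C) to hot tap water (~60 C) spans the
// simulation's whole temperature scale, so small thermal changes on the
// probe remain clearly visible on screen.
const sensorCelsius = 35; // example probe reading
const simTemperature = rescale(sensorCelsius, 0, 60, 10, 500);
```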

Each sensor is attached to a physical object installed along the edge of the computer screen (see the illustration above). The temperature sensor is attached to a thermal contact area made of a highly conductive material, the gas pressure sensor is attached to a syringe, and the force sensor is attached to a spring that provides a degree of force feedback. These three physical objects provide the real-world contextualization of the interactions. In this way, the Gas Frame not only produces the illusion that students can directly manipulate tiny gas molecules, but also creates a natural association between microscopic concepts and macroscopic perception. By uniting the actions of students in the real world with the reactions of the molecules in the virtual world, the Gas Frame provides an unprecedented way of learning a set of important concepts in physical science.

Pilot tests of the Gas Frame will begin at Concord-Carlisle High School this week and, in collaboration with our project partners Drs. Jennie Chiu and Jie Chao at the University of Virginia, will unfold at several middle schools in Virginia shortly afterwards. Through the planned sequence of studies, we hope to understand the cognitive aspects of mixed reality, especially whether perceptual changes can lead to conceptual changes in this particular kind of setup.

Acknowledgements: My colleague Ed Hazzard made a beautiful wooden prototype of the Frame (in which we can hide the messy wires and sensor parts). The current version of the Gas Frame uses Vernier sensors through a Java API developed primarily by Scott Cytacki. This work is made possible by the National Science Foundation.

Natural learning interfaces

August 21st, 2012 by Charles Xie
Natural user interfaces (NUIs) are the third generation of user interface for computers, after command-line interfaces and graphical user interfaces. A NUI uses natural elements or natural interactions (such as voice or gestures) to control a computer program. Being natural means that the user interface is built upon something most people are already familiar with, so the learning curve can be significantly shortened. This ease of use allows computer scientists to build more complex but richer user interfaces that simulate the ways people already interact with the real world.

Research on NUIs is currently one of the most active areas in computer science and engineering; it is one of the most important directions at Microsoft Research. In line with this trend, our NSF-funded Mixed-Reality Labs (MRL) project has proposed a novel concept called Natural Learning Interfaces (NLIs), which represents our latest ambition to realize the educational promise of cutting-edge technology. In the context of science education, an NLI provides a natural user interface for interacting with a scientific simulation on the computer. It maps a natural user action to the change of a variable in the simulation: for example, the user applies a hot or cold source to control a temperature variable in a thermal simulation, or exerts a force to control the pressure in a gas simulation. NLIs use sensors to acquire real-time data that then drive the simulation in real time. In most cases, an NLI combines multiple sensors (or multiple types of sensors) to feed more comprehensive data to a simulation and to enrich the user interface.
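One way to picture this mapping in software is as a set of bindings, each connecting one sensor's stream of readings to one simulation variable. The sketch below is a TypeScript illustration with hypothetical names, not the project's actual architecture.

```typescript
// An NLI as a set of sensor-to-variable bindings. Hypothetical names.

interface Sensor {
  read(): number; // latest raw reading from the probe
}

interface Binding {
  sensor: Sensor;
  apply(value: number): void; // writes one simulation variable
}

// Poll every binding and push fresh data into the simulation.
function tick(bindings: Binding[]): void {
  for (const b of bindings) {
    b.apply(b.sensor.read());
  }
}

// Example usage: a thermal simulation driven by one temperature probe.
// const bindings: Binding[] = [
//   { sensor: temperatureProbe, apply: v => sim.setTemperature(v) },
// ];
// setInterval(() => tick(bindings), 100); // poll at 10 Hz
```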

I have recently invented a technology called the Frame, which provides a rough idea of what NLIs may look like as an emerging learning technology for science education. The Frame technology is based on the observation that the frame of a computer screen is the natural boundary between the virtual world and the physical world and is, therefore, an intuitive user interface for certain human-computer interactions. Compared with other interfaces such as touch screens or motion trackers, the Frame allows users to interact with the computer from the edges of the screen.

Collaborating with Jennie Chiu's group at the University of Virginia (UVA), we have been working on a few Frame prototypes that will be field-tested with several hundred Virginia students in the fall of 2012. These Frame prototypes will be manufactured using UVA's 3D printers. One of the prototypes, shown in this blog post, is a mixed-reality gas lab designed for eighth graders to learn the particulate nature of the temperature and pressure of a gas. With this prototype, students can push or pull a spring to exert a force on a virtual piston, or use a cup of hot water or ice water to adjust the temperature of the virtual molecules. The responsive simulation immediately shows the effect of those natural actions on the state of the virtual system. Besides the conventional gas-law behavior, students may discover something interesting. For example, when they exert a large force, the gas molecules can be liquefied, simulating the liquefaction of a gas under high pressure. When they apply a force rapidly, a high-density layer is created, simulating the initiation of a sound wave. I can imagine that science centers and museums may be very interested in using this Frame lab as a kiosk for visitors to explore gas molecules in a quick and fun way.

A mixed-reality gas lab (a Frame prototype)
As these actions can happen concurrently, two students can control the simulation using two different mechanisms: changing the temperature or changing the pressure. This makes it possible to design a student competition in which two students use these two mechanisms to push the piston as far into each other's side as possible. To the best of our knowledge, this is the first collaborative learning activity of this kind mediated by a scientific simulation.

NLIs are not just the result of some programming fun; they are deeply rooted in cognitive science. Constructivism views learning as a process in which the learner actively constructs new ideas or concepts based upon current and past knowledge or experience. In other words, learning involves constructing one's own knowledge from one's own experiences. NLIs are learning systems built on what learners already know or what feels natural to them. The key to an NLI is that it engineers natural interactions that connect prior experiences to what students are supposed to learn, thus building a bridge to stronger mental associations and deeper conceptual understanding.

Embedding Next-Generation Molecular Workbench

June 7th, 2012 by Dan Barstow

The next-generation Molecular Workbench has a fundamental feature that is both simple and profound: MW models will be embeddable directly in Web pages. This simple statement means that anyone will be able to integrate these scientifically accurate models into their own work without having to launch a separate application. Teachers will embed MW models and activities in their own Web pages. Textbook publishers will embed them in new e-books. There is much room for creativity and partnerships here.
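To make "embeddable" concrete, here is a sketch of what dropping a model into a page could look like: the host page creates an iframe pointing at a hosted interactive. The helper function and the URL below are hypothetical, written in TypeScript for illustration; this is not the published MW embedding API.

```typescript
// A sketch of embedding a hosted interactive in a Web page via an iframe.
// The URL and helper name are hypothetical.

function embedModel(containerId: string, interactiveUrl: string): void {
  const container = document.getElementById(containerId);
  if (!container) throw new Error(`No element with id "${containerId}"`);
  const frame = document.createElement("iframe");
  frame.src = interactiveUrl;
  frame.width = "600";
  frame.height = "420";
  container.appendChild(frame);
}

// e.g. embedModel("gas-model", "https://example.org/interactives/gas-laws.html");
```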

The significance of this advance struck me at a recent conference on educational technology sponsored by the Software & Information Industry Association. Many creative people and companies attended, from large publishers to innovative startups. Throughout the presentations and conversations, I envisioned ways these potential partners might use MW to enhance their products and services.

Ron Dunn, CEO of Cengage, gave a keynote describing their new digital textbooks with aligned homework helpers and other digital resources. He pointed out that 35% of their sales are “digitally driven” and that technology is essential to their future. Other major publishers echoed those messages. When publishers embed Molecular Workbench models and activities throughout their e-books as a consistent modeling environment, students will be able to investigate fundamental principles of chemistry, physics, and biology more deeply than the simple animations and videos now so typical of e-books allow.

SmartScience is a startup developing supplemental science education activities. Their idea of linking videos of science phenomena with corresponding graphing tools is clever. For example, in a time-lapse video of rising and falling tides, students mark the ocean height and automatically see their data in a graph, helping them understand both the scientific phenomenon and its graphical representation. Augmenting reality this way is great, and we love the idea of integrating videos of physical, chemical, and biological processes at the macroscopic scale with MW models that show what happens at the microscopic scale.

Karen Cator, Director of the Office of Educational Technology at the U.S. Department of Education, discussed a new framework for evaluating the effectiveness of educational technology projects. Software can monitor how students work their way through online problems, providing teachers with deeper insights into student learning, especially in terms of scientific thinking and problem-solving skills. Teachers can then focus on students' higher-level thinking skills and provide useful, real-time feedback that identifies strengths, progress, and areas needing help. We agree wholeheartedly and have been working on ways to capture student data in real time and provide feedback loops for teachers. Our next-generation Molecular Workbench will record what students do as they explore the models and make that information available to teachers and researchers.
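As a rough idea of what "recording what students do" might involve, here is a sketch of a minimal event stream in TypeScript. The schema is hypothetical, not the actual MW logging format.

```typescript
// A hypothetical event record for student interactions with a model.

interface StudentEvent {
  timestamp: number;               // ms since the activity started
  student: string;                 // anonymized student ID
  action: string;                  // e.g. "slider-changed", "model-reset"
  detail: Record<string, unknown>; // action-specific payload
}

const log: StudentEvent[] = [];

function record(student: string, action: string, detail: Record<string, unknown>): void {
  log.push({ timestamp: Date.now(), student, action, detail });
}

// e.g. record("s042", "slider-changed", { variable: "temperature", value: 310 });
```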

Partnerships with creative teachers, publishers, and software developers will help us ignite large-scale improvements in teaching and learning through technology. That’s our mission and our goal for Molecular Workbench. Thanks to Google funding, we’re working to increase access to the incredibly powerful next-generation Molecular Workbench.