Posts Tagged ‘augmented reality’

Some thoughts on, and variations of, the Gas Frame (a natural user interface for learning the gas laws)

August 14th, 2013 by Charles Xie
A natural user interface (NUI) is a user interface based on natural elements or natural actions. Interacting with computer software through a NUI simulates everyday experiences, such as swiping a finger across a touch screen to move a photo on display or simply "asking" a computer to do something through voice commands. Because of this resemblance, a NUI is intuitive to use and requires little or no time to learn. NUIs such as touch screens and speech recognition have become commonplace on new computers.

As the sensing capability of computers becomes more powerful and versatile, new types of NUI emerge. The last three years have witnessed the birth and growth of sophisticated 3D motion sensors such as Microsoft Kinect and Leap Motion. These infrared-based sensors can detect the user's body language within a physical space near a computer with varying degrees of resolution. The remaining question is how to use the data to create meaningful interactions between the user and a piece of computer software.

Think about how STEM education can benefit from this wave of technological innovation. As scientists, we are especially interested in how these capabilities can be leveraged to improve learning experiences in science education. Thirty years of development, mostly funded by federal agencies such as the National Science Foundation, have produced a wealth of virtual laboratories (aka computational models or simulations) that are currently used by millions of students. These virtual labs, however, are often criticized for not being physically relevant and for not providing the hands-on experiences commonly viewed as necessary in practicing science. We now have an opportunity to partially remedy these problems by connecting virtual labs to physical reality through NUIs.

What would a future NUI for a science simulation look like? For example, if you teach the physical sciences, you may have seen many gas simulations that students interact with through some kind of graphical user interface (GUI). What would a NUI for a gas simulation look like? How would it transform learning? Our Gas Frame provides an example implementation that may give you something concrete to think about.

Figure 1: The Gas Frame (the default configuration).
In the default implementation (Figure 1), the Gas Frame uses three different kinds of "props" as the natural elements that control three independent variables of a gas: a warm or cold object to heat or cool the gas, a spring to exert force on a piston that contains the gas, and a syringe to add or remove gas molecules. I call these objects "props" because, as in filmmaking, they mostly serve as close stand-ins for the real things without necessarily performing the real functions (you don't want a prop gun to shoot real bullets, do you?).

The motions of the gas molecules are simulated using a molecular dynamics method and visualized on the computer screen. The volume of the gas is calculated in real time by the molecular dynamics method based on the three physical inputs. In addition to the physical controls provided by the three props, a set of virtual controls is available on the screen for students to interact with the simulation, such as viewing the trajectory or the kinetic energy of a molecule. These virtual controls support interactions that are impossible in reality (no, we cannot see the trajectory of a single molecule in the air).
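For the curious, here is a minimal sketch (in Java, and not the actual Gas Frame code) of the kind of bookkeeping behind those on-screen readouts: the kinetic energy of a single molecule and the instantaneous temperature of the whole gas, both computed from the molecular velocities that the molecular dynamics method updates at every time step.

```java
// A minimal sketch of MD diagnostics; not the Gas Frame's actual implementation.
public class GasDiagnostics {

    static final double KB = 1.380649e-23; // Boltzmann constant, J/K

    // Kinetic energy of one molecule: KE = 1/2 m (vx^2 + vy^2 + vz^2)
    static double kineticEnergy(double mass, double vx, double vy, double vz) {
        return 0.5 * mass * (vx * vx + vy * vy + vz * vz);
    }

    // Instantaneous temperature from the equipartition theorem:
    // <KE> = (3/2) kB T  =>  T = 2 <KE> / (3 kB)
    static double temperature(double[] masses, double[][] velocities) {
        double totalKE = 0;
        for (int i = 0; i < masses.length; i++) {
            totalKE += kineticEnergy(masses[i],
                    velocities[i][0], velocities[i][1], velocities[i][2]);
        }
        double meanKE = totalKE / masses.length;
        return 2.0 * meanKE / (3.0 * KB);
    }
}
```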

The three props control the gas simulation through a temperature sensor, a force sensor, and a gas pressure sensor, respectively, which detect student interactions with them. The data from the sensors are then translated into inputs to the gas simulation, creating a virtual response to a real action (e.g., molecules are added or removed when the student pushes or pulls the syringe) and a molecular interpretation of the action (e.g., molecules move faster or slower when the temperature increases or decreases).
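The translation itself is conceptually simple. The sketch below illustrates it with hypothetical Sensor and GasSimulation interfaces (placeholders, not the Vernier API or our actual code): each sensor reading is converted to the units of one simulation variable and pushed into the running simulation many times per second. The linear pressure-to-molecule-count calibration is made up for illustration.

```java
// Simplified sensor-to-simulation loop; all types here are hypothetical placeholders.
public class GasFrameLoop {

    interface Sensor { double read(); }             // returns the latest reading

    interface GasSimulation {
        void setHeatBathTemperature(double kelvin); // molecules speed up or slow down
        void setPistonForce(double newtons);        // external force on the piston
        void setMoleculeCount(int n);               // molecules added or removed
    }

    static void run(Sensor temperature, Sensor force, Sensor pressure, GasSimulation sim) {
        while (true) {
            // Translate raw sensor readings into simulation inputs.
            sim.setHeatBathTemperature(273.15 + temperature.read()); // deg C -> K
            sim.setPistonForce(force.read());                        // spring push/pull
            // Map the syringe's pressure reading to a molecule count
            // (a hypothetical linear calibration, for illustration only).
            sim.setMoleculeCount((int) Math.round(pressure.read() * 2.0));
            try { Thread.sleep(50); } catch (InterruptedException e) { return; }
        }
    }
}
```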

As in almost all NUIs, the sensors and the data they collect are hidden from students: students do not need to know that sensors are involved in their interactions with the gas simulation, and they do not need to see the raw data. This is unlike many other activities in which sensors play a central role in inquiry and must be explicitly explained to students (and in which the data they collect must be visually presented to students as well). There are certainly advantages to using sensors as inquiry tools to teach students how to collect and analyze data. Sometimes we even go the extra mile and ask students to use a computer model to make sense of the data (like the simulation fitting idea I blogged about before). But that is not what the National Science Foundation funded innovators like us to do.

The NUIs for science simulations that we have developed in our NSF project all use sensors that have been widely used in schools, such as those from Vernier Software and Technology. This makes it possible for teachers to reuse existing sensors to run these NUI apps. This decision to build our NUI technology on existing probeware is essential for our NUI apps to run in a large number of classrooms in the future.

Figure 2: Variation I.
Considering that not all schools have all the types of sensors needed to run the basic version of the Gas Frame app, we have also developed a number of variations, each of which uses only one type of sensor.

Figure 2 shows a variation that uses two temperature sensors, each linked to the temperature of the virtual gas in one compartment. The two compartments are separated by a movable piston in the middle. Increasing or decreasing the temperature of the gas in the left or right compartment, by heating or cooling the thermal contact to which the corresponding sensor is attached, causes the virtual piston to move accordingly, allowing students to explore the relationships among pressure, temperature, and volume through two thermal interactions in the real world.

Figure 3: Variation II.
Figure 3 shows another variation that uses two gas pressure sensors, each linked, through an attached syringe, to the number of molecules of the virtual gas in one compartment. As in Variation I, the two compartments are separated by a movable piston in the middle. Pushing or pulling the real syringes causes molecules to be added to or removed from the virtual compartments, allowing students to explore the relationships among the number of molecules, pressure, and volume through two tactile interactions.
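Under the hood, both variations rest on the same piece of physics: the virtual piston settles where the pressures on its two sides balance. Assuming the virtual gas behaves ideally, the sketch below (not our actual code) shows how the equilibrium piston position follows from the temperatures and molecule counts on the two sides.

```java
// Back-of-the-envelope model of the movable piston, assuming ideal-gas behavior.
// At equilibrium n_L*T_L/V_L = n_R*T_R/V_R, so the left compartment occupies
// a fraction n_L*T_L / (n_L*T_L + n_R*T_R) of the total volume.
public class MovablePiston {

    // Equilibrium position of the piston as a fraction (0..1) of the
    // container length, measured from the left wall.
    static double equilibriumPosition(double nLeft, double tLeft,
                                      double nRight, double tRight) {
        double left = nLeft * tLeft;
        double right = nRight * tRight;
        return left / (left + right);
    }

    public static void main(String[] args) {
        // Variation I: equal molecule counts, left side heated to 360 K, right at 300 K.
        System.out.println(equilibriumPosition(100, 360, 100, 300)); // ~0.545
        // Variation II: equal temperatures, 150 molecules on the left, 100 on the right.
        System.out.println(equilibriumPosition(150, 300, 100, 300)); // 0.6
    }
}
```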

If you don't have that many sensors, don't worry -- both variations will still work if only one sensor is available.

I hear you asking: all this sounds fun, but so what? Will students learn more from these activities? If not, why bother with the extra trouble, compared with using an existing GUI version that requires nothing but a computer? I have to confess that I cannot answer this question at the moment. But in the next blog post, I will try to explain our plan for figuring it out.

A mixed-reality gas lab

February 12th, 2013 by Charles Xie
In his Critique of Pure Reason, the Enlightenment philosopher Immanuel Kant asserted that “conception without perception is empty, perception without conception is blind. The understanding can intuit nothing, the senses can think nothing. Only through their unison can knowledge arise.” More than 200 years later, his wisdom is still enlightening our NSF-funded Mixed-Reality Labs project.

Mixed reality (more commonly known as augmented reality) refers to the blending of real and virtual worlds to create new environments where physical and digital objects co-exist and interact in real time, providing user experiences that are impossible in either the real or the virtual world alone. Mixed reality is a perfect technology for promoting the unison of perception and conception: perception happens in the real world, whereas conception can be enhanced by the virtual world. By knitting the real and virtual worlds together, we can build a pathway that leads from perceptual experience to conceptual development.

We have developed and refined a prototype of mixed reality for teaching the Kinetic Molecular Theory and the gas laws using our Frame technology. This Gas Frame uses three types of sensors to translate user inputs into changes of variables in a molecular simulation on the computer: a temperature sensor detects thermal changes in the real world and changes the temperature of the gas molecules in the virtual world; a gas pressure sensor detects gas compression or decompression in the real world and changes the density of the gas molecules in the virtual world; and a force sensor detects force changes in the real world and changes the force on a piston in the virtual world. Because of this underlying linkage to the real world through the sensors, the simulation appears "smart" enough to detect user actions and react to them in meaningful ways.
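To make the first of these translations concrete, here is a minimal sketch of how a raw temperature reading might be conditioned before it is handed to the simulation. The smoothing factor, the calibration, and the clamping range are all hypothetical choices for illustration; the actual Gas Frame mapping may differ.

```java
// Hypothetical conditioning of a raw temperature reading; not the Gas Frame's actual mapping.
public class TemperatureMapping {

    private double smoothed = 20.0;          // last smoothed reading, deg C
    private static final double ALPHA = 0.2; // exponential smoothing factor

    // Map a raw Celsius reading to a simulation temperature in Kelvin,
    // smoothing out sensor noise and clamping to a range the model can handle.
    double toSimulationKelvin(double rawCelsius) {
        smoothed = ALPHA * rawCelsius + (1 - ALPHA) * smoothed; // low-pass filter
        double kelvin = smoothed + 273.15;
        return Math.max(150.0, Math.min(600.0, kelvin));        // clamp
    }
}
```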

Each sensor is attached to a physical object installed along the edge of the computer screen (see the illustration above). The temperature sensor is attached to a thermal contact area made of a highly conductive material, the gas pressure sensor is attached to a syringe, and the force sensor is attached to a spring that provides a degree of force feedback. These three physical objects provide the real-world contextualization of the interactions. In this way, the Gas Frame not only creates the illusion that students are directly manipulating tiny gas molecules, but also creates a natural association between microscopic concepts and macroscopic perception. By uniting the actions of students in the real world with the reactions of the molecules in the virtual world, the Gas Frame provides an unprecedented way of learning a set of important concepts in physical science.

Pilot tests of the Gas Frame will begin at Concord-Carlisle High School this week and, in collaboration with our project partners Drs. Jennie Chiu and Jie Chao at the University of Virginia, will soon extend to several middle schools in Virginia. Through this planned sequence of studies, we hope to understand the cognitive aspects of mixed reality, especially whether perceptual changes can lead to conceptual changes in this particular kind of setup.

Acknowledgements: My colleague Ed Hazzard made a beautiful wood prototype of the Frame (in which we can hide the messy wires and sensor parts). The current version of the Gas Frame uses Vernier's sensors and a Java API to their sensors developed primarily by Scott Cytacki. This work is made possible by the National Science Foundation.

Natural learning interfaces

August 21st, 2012 by Charles Xie
Natural user interfaces (NUIs) are the third generation of user interfaces for computers, after command-line interfaces and graphical user interfaces. A NUI uses natural elements or natural interactions (such as voice or gestures) to control a computer program. Being natural means that the user interface is built upon something most people are already familiar with, so the learning curve can be significantly shortened. This ease of use allows computer scientists to build more complex but richer user interfaces that simulate the ways people already interact with the real world.

Research on NUIs is currently one of the most active areas in computer science and engineering, and one of the most important research directions at Microsoft Research. In line with this trend, our NSF-funded Mixed-Reality Labs (MRL) project has proposed a novel concept called Natural Learning Interfaces (NLIs), which represents our latest ambition to realize the educational promise of cutting-edge technology. In the context of science education, an NLI provides a natural user interface for interacting with a scientific simulation on the computer: it maps a natural user action to the change of a variable in the simulation. For example, the user applies a hot or cold source to control a temperature variable in a thermal simulation, or exerts a force to control the pressure in a gas simulation. NLIs use sensors to acquire real-time data that then drive the simulation in real time. In most cases, multiple sensors (or multiple types of sensors) are combined to feed more comprehensive data to a simulation and to enrich the user interface.
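One way to picture an NLI in software terms is as a set of bindings, each connecting one real-time sensor stream to one simulation variable. The sketch below is purely conceptual (the names are hypothetical and this is not our actual code), but it captures the mapping idea described above.

```java
// Conceptual sketch of an NLI as sensor-to-variable bindings; hypothetical names throughout.
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleConsumer;
import java.util.function.DoubleSupplier;

public class NaturalLearningInterface {

    // One binding: a real-time data source driving one simulation variable.
    record Binding(DoubleSupplier sensor, DoubleConsumer simulationVariable) {}

    private final List<Binding> bindings = new ArrayList<>();

    void bind(DoubleSupplier sensor, DoubleConsumer variable) {
        bindings.add(new Binding(sensor, variable));
    }

    // Called once per frame: push the latest reading of every sensor
    // into the simulation variable it controls.
    void update() {
        for (Binding b : bindings) {
            b.simulationVariable().accept(b.sensor().getAsDouble());
        }
    }
}
```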

I have recently invented a technology called the Frame, which may provide a rough idea of what NLIs may look like as an emerging learning technology for science education. The Frame technology is based on the fact that the frame of a computer screen is the natural boundary between the virtual world and the physical world and is, therefore, an intuitive user interface for certain human-computer interactions. Compared with other interfaces such as touch screens or motion trackers, the Frame allows users to interact with the computer from the edges of the screen.

Collaborating with Jennie Chiu's group at the University of Virginia (UVA), we have been working on a few Frame prototypes that will be field-tested with several hundred Virginia students in the fall of 2012. These Frame prototypes will be manufactured using UVA's 3D printers. One of the prototypes, shown in this blog post, is a mixed-reality gas lab designed for eighth graders to learn about the particulate nature of the temperature and pressure of a gas. With this prototype, students can push or pull a spring to exert a force on a virtual piston, or use a cup of hot water or ice water to adjust the temperature of the virtual molecules. The responsive simulation immediately shows the effect of those natural actions on the state of the virtual system. Beyond the conventional gas law behavior, students may discover something interesting. For example, when they exert a large force, the gas molecules can be liquefied, simulating liquefaction under high pressure. When they apply a force rapidly, a high-density layer is created, simulating the initiation of a sound wave. I can imagine that science centers and museums may be very interested in using this Frame lab as a kiosk for visitors to explore gas molecules in a quick and fun way.

A mixed-reality gas lab (a Frame prototype)
As these actions can happen concurrently, two students can control the simulation using two different mechanisms: changing the temperature or changing the pressure. This makes it possible to design a student competition in which two students use these two mechanisms to push the piston as far as possible into each other's side. To the best of our knowledge, this is the first collaborative learning activity of this kind mediated by a scientific simulation.

NLIs are not just the result of programming fun; they are deeply rooted in cognitive science. Constructivism views learning as a process in which the learner actively constructs new ideas or concepts based on current and past knowledge or experience. In other words, learning involves constructing one's own knowledge from one's own experiences. NLIs are learning systems built on what learners already know or what feels natural to them. The key to an NLI is that it engineers natural interactions that connect prior experiences to what students are supposed to learn, building a bridge to stronger mental associations and deeper conceptual understanding.

Embedding Next-Generation Molecular Workbench

June 7th, 2012 by Dan Barstow

The next-generation Molecular Workbench has a fundamental feature that is both simple and profound: MW models will be embeddable directly in Web pages. This simple statement means that anyone will be able to integrate these scientifically accurate models into their own work—without having to launch a separate application. Teachers will embed MW models and activities into their own Web pages. Textbook publishers will embed them in new e-books.  There is much room for creativity and partnerships here.

The significance of this advance struck me at a recent conference on educational technology sponsored by the Software & Information Industry Association. Many creative people and companies attended, from large publishers to innovative startups. Throughout the presentations and conversations, I envisioned ways these potential partners might use MW to enhance their products and services.

Ron Dunn, CEO of Cengage, gave a keynote describing their new digital textbooks with aligned homework helpers and other digital resources. He pointed out that 35% of their sales are “digitally driven” and that technology is essential to their future. Other major publishers echoed those messages. When publishers embed Molecular Workbench models and activities throughout their e-books as a consistent modeling environment, students will be able to investigate fundamental principles of chemistry, physics, and biology more deeply than with the simple animations and videos now so typical of e-books.

SmartScience is a startup developing supplemental science education activities. Their idea of linking videos of science phenomena with corresponding graphing tools is clever. For example, in a time-lapse video of rising and falling tides, students mark the ocean height and automatically see their data in a graph, helping them understand both the scientific phenomenon and its graphical representation. Augmenting reality is great, and we love the idea of integrating videos of physical, chemical, and biological processes at the macroscopic scale with MW models that show what happens at the microscopic scale.

Karen Cator, Director of the Office of Educational Technology at the U.S. Department of Education, discussed a new framework for evaluating the effectiveness of educational technology projects. Software can monitor how students work their way through online problems, providing teachers with deeper insights into student learning, especially in terms of scientific thinking and problem-solving skills. Teachers can then focus on students’ higher-level thinking skills and provide useful, real-time feedback that identifies strengths, progress, and areas in need of help. We agree wholeheartedly and have been working on ways to capture student data in real time and provide feedback loops for teachers. Our next-generation Molecular Workbench will record what students do as they explore the models and make that information available to teachers and researchers.

Partnerships with creative teachers, publishers, and software developers will help us ignite large-scale improvements in teaching and learning through technology. That’s our mission and our goal for Molecular Workbench. Thanks to Google funding, we’re working to increase access to the incredibly powerful next-generation Molecular Workbench.

 

Project KTracker kicks off

May 9th, 2012 by Charles Xie
Watch a demo video
We have started to develop a high-quality three-dimensional motion tracking system for science education based on the Microsoft Kinect controller, which was released about 18 months ago. This development is part of the Mixed-Reality Labs project funded by the National Science Foundation.

KTracker will provide a versatile interface between the Kinect and many physics experiments commonly conducted in the classroom. It will also provide natural user interfaces for students to control the software for data collection, analysis, and task management. For example, the data collector will automatically pause when the Kinect detects that the experimenter is adjusting the apparatus to create a new experimental condition (during which data collection should be suspended). Or the user can "wave" to the Kinect to instruct the software to invoke a procedure. In this way, the user will not need to switch hands between the apparatus and the keyboard or mouse of the computer (does this "hand-switching" scene sound familiar to the experimentalists reading this post?). The Kinect sensor can recognize both the gestures of the experimenter and the motions of the subject, making it an ideal device for carrying out performance assessment based on motor skill analysis.
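To illustrate the first example above, here is a sketch of the kind of rule KTracker could use to decide when to pause data collection. The skeleton-frame type and its fields are hypothetical placeholders rather than an actual Kinect SDK; only the decision logic is the point.

```java
// Hypothetical "hands near the apparatus" rule for pausing data collection.
public class HandsOnApparatusDetector {

    // Hypothetical per-frame skeleton data (meters, in the camera's coordinate frame).
    record SkeletonFrame(double leftHandZ, double rightHandZ) {}

    private final double apparatusDepth;   // distance from camera to apparatus, meters
    private final double tolerance = 0.15; // how close a hand must be to count as "adjusting"

    HandsOnApparatusDetector(double apparatusDepth) {
        this.apparatusDepth = apparatusDepth;
    }

    // True if either hand is within the tolerance of the apparatus plane,
    // in which case the data collector should be suspended.
    boolean shouldPauseDataCollection(SkeletonFrame frame) {
        return Math.abs(frame.leftHandZ() - apparatusDepth) < tolerance
            || Math.abs(frame.rightHandZ() - apparatusDepth) < tolerance;
    }
}
```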

KTracker is not a post-processing tool, and it is not based on video analysis. Thanks to the high-performance infrared-based depth camera built into the Kinect, KTracker can perform motion tracking and kinematic analysis in real time. This is important because it accelerates data analysis and enhances the interactivity of laboratory experiments.

KTracker will also integrate a popular physics engine, Box2D, to support simulation fitting. For example, the user can design a computer model of the pendulum shown in the above video and adjust its parameters so that its motion fits what the camera is showing--all in real time. Like the graph demonstrated in the above video, the entire Box2D view will be placed in a translucent pane on top of the camera view, making it easy for the user to align the simulation view with the experiment view.
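To show what "simulation fitting" means in code, here is a sketch that adjusts one model parameter (the pendulum length) until the simulated motion best matches the motion tracked by the camera. For brevity, a small-angle analytic pendulum stands in for the Box2D model; KTracker itself will do this with the physics engine and in real time.

```java
// Sketch of simulation fitting: grid-search a model parameter to match tracked data.
public class PendulumFit {

    // Simulated angle (radians) of an ideal pendulum of length L at time t,
    // released from angle theta0 (small-angle approximation).
    static double simulatedAngle(double length, double theta0, double t) {
        double omega = Math.sqrt(9.81 / length);
        return theta0 * Math.cos(omega * t);
    }

    // Find the pendulum length whose simulated motion best fits the angles
    // tracked by the camera (least-squares error over the sampled times).
    static double fitLength(double[] times, double[] trackedAngles, double theta0) {
        double bestLength = 0.1, bestError = Double.MAX_VALUE;
        for (double length = 0.1; length <= 2.0; length += 0.01) {
            double error = 0;
            for (int i = 0; i < times.length; i++) {
                double diff = simulatedAngle(length, theta0, times[i]) - trackedAngles[i];
                error += diff * diff;
            }
            if (error < bestError) { bestError = error; bestLength = length; }
        }
        return bestLength;
    }
}
```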

KTracker will soon be available for download on our websites. We will keep you posted.

Kinect-based motion tracking and analysis

May 3rd, 2012 by Charles Xie
Click here to watch a video.
Microsoft's Kinect controller offers the first affordable 3D camera that can be used to detect complex three-dimensional motions such as body language and gestures. It provides a compelling solution for motion tracking, which--up to this point--has often been based on analyzing conventional RGB data from one or more video cameras.

Conventional motion tracking based on RGB data requires complicated algorithms to process a large amount of video data, making it hard to build real-time applications. The Kinect adds a depth camera that detects the distance between the subject and the sensor based on the difference between the infrared pattern it emits and the reflection it receives. This gives us a way to dynamically construct a 3D model of whatever is in front of the Kinect at a rate of about 10-30 frames per second, fast enough to build interactive applications (see the video linked under the above image). For as little as $100, we now have a revolutionary tool for tracking the 3D motion of almost anything.

The demo video in this post shows an example of using the Kinect sensor to track and analyze the motion of a pendulum. The left part of the above image shows the trajectory and velocity vector overlaid on the RGB image of the pendulum, whereas the right part shows the slice of the depth data relevant to analyzing the pendulum.
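For readers wondering how a "slice" of depth data turns into a trajectory and a velocity vector, here is a minimal sketch. The depth-frame format (a row-major array of millimeter readings) is an assumption for illustration; real Kinect APIs differ in detail, but the threshold-and-centroid idea is the same.

```java
// Sketch of locating the pendulum bob in a depth frame; the frame format is assumed.
public class DepthSliceTracker {

    // Keep only pixels whose depth falls inside the slice containing the pendulum,
    // then return the centroid (x, y) of those pixels, or null if none are found.
    static double[] bobCentroid(int[] depthMm, int width, int nearMm, int farMm) {
        long sumX = 0, sumY = 0, count = 0;
        for (int i = 0; i < depthMm.length; i++) {
            int d = depthMm[i];
            if (d >= nearMm && d <= farMm) {
                sumX += i % width;
                sumY += i / width;
                count++;
            }
        }
        return count == 0 ? null
                : new double[] { (double) sumX / count, (double) sumY / count };
    }

    // Velocity estimate from two consecutive centroids and the frame interval (seconds).
    static double[] velocity(double[] previous, double[] current, double dt) {
        return new double[] { (current[0] - previous[0]) / dt,
                              (current[1] - previous[1]) / dt };
    }
}
```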

The National Science Foundation provides funding for this work.

Augmented reality thermal imaging

March 26th, 2012 by Charles Xie
IR: Watch the YouTube video
Augmented reality (AR) presents a live view of the real world whose elements are augmented by computer-generated data such as sound or graphics. The technology promises to enhance the user's perception of reality. AR is considered an extension of virtual reality (VR), but unlike VR, which replaces the real world with a simulated one, AR bridges the real and simulated worlds and takes advantage of both.

Augmentation is conventionally performed in real time and in semantic context with elements of the environment. With the help of AR technology, information about the user's surrounding real world becomes digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world to achieve seamless effects and user experiences.

Our NSF-funded Mixed-Reality (MR) Labs Project has set out to explore how AR/MR technologies can support "augmented inquiry" to help students learn abstract concepts that cannot be directly seen or felt in purely hands-on lab activities.

AR: Watch the YouTube video
One of the first classes of prototypes we have built is what we call "Augmented Reality Thermal Imaging." The concepts related to heat and temperature are difficult for some students because thermal energy is invisible to the naked eye. Thermal energy can now be visualized using infrared (IR) imaging, but we have also developed AR technology that provides another means of "seeing" thermal energy and its flow.

The first image in this post shows an IR image of a poster board heated by a hair dryer. The second image shows a demo of AR thermal imaging: when a hair dryer blows hot air at a liquid crystal display (LCD), the AR system reacts as if the hot air could flow into the screen and leave a trace of heat on it, just like what we see in the IR image above. You may click the links below the images to watch the recorded videos.

The tricky part of MR Labs is that, in order to justify augmenting a physical activity with a computer simulation, the simulation should be a good approximation of what happens in the real world. We used our computational fluid dynamics (CFD) program, Energy2D, to accomplish this. Many more demos of MR Labs using Energy2D can be viewed at this website.
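To give a flavor of the kind of computation such a simulation performs, here is a minimal explicit finite-difference step for two-dimensional heat diffusion. This is not Energy2D's code or API, just the textbook scheme that solvers of this kind build on.

```java
// Minimal FTCS step for the 2D heat equation; illustrative only, not Energy2D's solver.
public class HeatDiffusionStep {

    // Advance the temperature field one time step: dT/dt = alpha * (d2T/dx2 + d2T/dy2).
    // For stability, dt must satisfy dt <= dx*dx / (4 * alpha).
    static double[][] step(double[][] t, double alpha, double dx, double dt) {
        int nx = t.length, ny = t[0].length;
        double[][] next = new double[nx][ny];
        double c = alpha * dt / (dx * dx);
        for (int i = 1; i < nx - 1; i++) {
            for (int j = 1; j < ny - 1; j++) {
                next[i][j] = t[i][j]
                        + c * (t[i + 1][j] + t[i - 1][j] + t[i][j + 1] + t[i][j - 1] - 4 * t[i][j]);
            }
        }
        return next; // boundary cells stay at 0 here; a real solver applies boundary conditions
    }
}
```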


Surface computing on human skin and walls? Microsoft LightSpace brings augmented reality to a new dimension

October 14th, 2010 by Chad Dorsey

Microsoft is moving beyond one surface onto multiple surfaces. With their LightSpace research project, they are tracking virtual objects as they move off a surface and onto users’ hands to be carried around the room. Projectors keep the virtual objects in sync with the real-world objects, so you can write a virtual note, carry it around, and “drop” it onto a wall. This is apparently made possible through the use of “depth cameras,” which are also important for Microsoft’s Kinect gaming platform:

“Depth cameras (such as those from PrimeSense, 3DV, and Canesta) are able to directly sense range to the nearest physical surface at each pixel location. They are unique in that they enable inexpensive real time 3D modeling of surface geometry, making some traditionally difficult computer vision problems easier. For example, with a depth camera it is trivial to composite a false background in a video conferencing application.”

A white paper on LightSpace describes more.

(via ZDNet.)