Author Archives: Chad Dorsey

Learning Everywhere: taking inspiration from two partners, At-Bristol and Exploradôme

Innovative applications of technology are found virtually everywhere, transforming all kinds of spaces into opportunities for STEM learning that move beyond the walls of classrooms and past schooltime hours. Persistent engagement and interest in meaningful learning activities and practices can spur an enduring pursuit of science.

Our Learning Everywhere initiative is exploring, prototyping, and creating new learning experiences—including exhibits, mobile apps, and user tracking technologies—that connect and coordinate learning across museums and bridge in-school and out-of-school time. To survey new learning spaces and interactive technologies, we visited two of our Learning Everywhere partners, At-Bristol and Exploradôme, as well as other science centers in the London and Paris areas, including the Science Museum of London and the City of Science and Industry at La Villette.

Chad Dorsey and Sherry Hsi at the entrance of At-Bristol Science Center.

Donning our bracelets printed with unique barcode IDs at the entrance, we explored the many At-Bristol exhibits, scanning our bracelets to collect and compare our data with data from other visitors. At some stations, we learned how the creators of Wallace and Gromit, from Aardman Animations’ studios, also in Bristol, made their great movies before creating our own stop-motion animations. A quick scan of our wrists saved these animations to a website where we could access them later. Other parts of our experience, from scatterplots of our heights compared to other visitors’ to videos of ourselves on slow-motion “startle-cam,” added themselves to our electronic portfolios during the visit. We even found ourselves wearing bee wings and performing a waggle dance to mimic bee behaviors in an exhibit about the mysterious lives of bees! These and other digital artifacts from our visit served as opportunities for further conversation and inquiry back home, and as a source of fun for our families. (Needless to say, the bee dance video was a source of great enjoyment, but it will not be showing up publicly on Instagram any time soon!)

At-Bristol Science Center’s animation exhibits area.

Our visit to London coincided with the grand opening of Wonderlab at the Science Museum of London. Our guide, Dave Patten, Head of New Media there, showed us the spacious, colorful interactive gallery designed to encourage visitors to collaborate, play, and learn from conversation. In another exhibition, Engineer Your Future, teens and young adults use their personal mobile devices in public gallery spaces to design vehicles, then launch and control them on a huge public screen! Other large-screen and combined physical-digital exhibits featured design-oriented and competitive games on energy, vehicle design, and engineering careers.

Science Museum of London’s Wonderlab the evening before its grand opening.

The many heads of Dave Patten from the Science Museum of London in a Wonderlab exhibit.

Moving farther south, we visited the Cité des Sciences et de l’Industrie in Paris, where an immense, airy space houses multiple galleries of permanent and temporary exhibitions. Among them, designed areas invite reflection and discussion among school groups and individuals. In a highlight of the visit, François Vescia, Senior International Project Manager at the museum, gave us a tour of their fabrication laboratory, Carrefour Numérique. This public space is a wonderland of design and making, custom created to invite design collaboration and discussions that merge seamlessly into the design and construction of physical prototypes and objects. Visitors access materials and machinery, from e-textile design tools to milling machines, 3D printers, and laser and vinyl cutters, to turn their visions into reality. Drop-in and scheduled programs, workshops, and in-person support are available, and visitors can begin designing projects digitally in the multimedia lab, then move next door to fabricate them.

Chad Dorsey, François Vescia, and Sherry Hsi at Parc de la Villette, an area in Paris known for the Cité des Sciences et de l’Industrie science museum.

Entrance to the Fab Lab at the City of Sciences and Industry in Paris.

Taking the train to the southern suburbs of Paris, we visited the Exploradôme, where we met Goery Delacôte, its founder and a longstanding member of the Concord Consortium Board of Trustees. Goery toured us through the great exhibits packed into the floor of this small museum, where the motto is “Not touching is not allowed!” Playing like kids (and some of us were!), we explored visual perception phenomena, dug holes for water in a version of the AR Sandbox Sherry helped create, and worked together to launch six-foot smoke rings that rose to the ceiling.

Goery Delacôte, Sherry Hsi, and Chad Dorsey at the entrance of the Exploradôme in Vitry-sur-Seine, southeast of Paris. The building’s colors were selected from colors found in the local neighborhood.

The thoughtful curation and orchestration of interactive exhibits throughout our Learning Everywhere tour was inspiring, as was the innovative use of technology to engage visitors and extend museum experiences beyond the visit. As we collate and catalog these experiences and technologies as part of the project work, we look forward to working further with museums and other out-of-school institutions to bridge and extend learning everywhere.

Making smoke rings collaboratively at the Exploradôme with Goery Delacôte and Sherry Hsi.

Making virtual lakes by digging in the Augmented Reality Sandbox exhibit at the Exploradôme.

Exploring optical illusions and visualization puzzles at the Exploradôme with Goery Delacôte.

Launching a new interdisciplinary field of study in spoken language technology for education

A grant from the National Science Foundation will help launch a new interdisciplinary field of study in spoken language technology for education. The one-year “Building Partnerships for Education and Speech Research” project will bring together the extensive education research and educational technology backgrounds of the Concord Consortium and SRI International’s Center for Technology in Learning (CTL) with two of the strongest groups in spoken language technology research, the Speech Technology and Research (STAR) Laboratory at SRI and the Center for Robust Speech Systems (CRSS) at the University of Texas at Dallas.

The sophistication of technologies for processing and understanding spoken language—such as speech recognition, detection of individual speakers, and natural language processing—has radically improved in recent years, though most people’s image of modern spoken language technology is colored by often-finicky interactions with Siri or Google products. In fact, many lesser-known technologies can now automatically detect many features of speech with high accuracy, including question asking, dialog interchanges, word counts, indications of emotion or stress, and specific spoken keywords.
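To make the idea of “features of speech” concrete, here is a purely illustrative sketch (not from the project, and working on a text transcript rather than raw audio, which is where the real research difficulty lies): even simple text processing can surface features like word counts, question asking, and dialog turns.

```python
# Hypothetical illustration: compute simple discourse features from a
# transcript of (speaker, utterance) pairs. Real spoken language
# technologies detect such features directly from audio.
from collections import Counter

def transcript_features(turns):
    """turns: list of (speaker, utterance) pairs, in spoken order."""
    word_counts = Counter()   # words spoken per speaker
    questions = 0             # utterances phrased as questions
    speaker_changes = 0       # dialog interchanges (turn-taking)
    prev_speaker = None
    for speaker, utterance in turns:
        word_counts[speaker] += len(utterance.split())
        if utterance.rstrip().endswith("?"):
            questions += 1
        if prev_speaker is not None and speaker != prev_speaker:
            speaker_changes += 1
        prev_speaker = speaker
    return {"word_counts": dict(word_counts),
            "questions": questions,
            "speaker_changes": speaker_changes}

demo = [
    ("teacher", "What do you predict will happen?"),
    ("student", "I think the plant will grow faster."),
    ("teacher", "Why do you think that?"),
]
print(transcript_features(demo))
# → {'word_counts': {'teacher': 11, 'student': 7}, 'questions': 2, 'speaker_changes': 2}
```

Even this toy version hints at why such features interest education researchers: question frequency and turn-taking patterns are proxies for the classroom discourse described above.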

However, educational research has barely begun exploring their potential to provide insight into, and eventually revolutionize, research areas as diverse as collaboration, argumentation, discourse analysis, emotion, and engagement. And capturing the most critical and substantive interactions during the teaching and learning process—the discourse and conversation among students, teachers, and mentors—remains elusive.

The central goal of this new project is to generate interest in and momentum toward the use of spoken language technologies in education research. The potential for such applied technologies is vast, and the broader impacts could be significant. As these technologies become established for use in improved education research and development, researchers will be able to better understand and target interventions, educators will be able to monitor and adjust their interactions with learners, and learners will be better informed of their learning progress.

The National Science Foundation funds grant to pair intelligent tutoring system and Geniverse

Games, modeling, and simulation technologies hold great potential for helping students learn science concepts and engage with the practices of science, and these environments often capture meaningful data about student interactions. At the same time, intelligent tutoring systems (ITS) have undergone important advancements in providing support for individual student learning. Their complex statistical user models can identify student difficulties effectively and apply real-time probabilistic approaches to select options for assistance.

The Concord Consortium is proud to announce a four-year $1.5 million grant from the National Science Foundation that will pair Geniverse with robust intelligent tutoring systems to provide real-time classroom support. The new GeniGUIDE—Guiding Understanding via Information from Digital Environments—project will combine a deeply digital environment with an ITS core.

Geniverse is our free, web-based software for high school biology that engages students in exploring heredity and genetics by breeding and studying virtual dragons. Interactive models, powered by real genes, enable students to do simulated experiments that generate realistic and meaningful genetic data, all within an engaging, game-like context.

Geniverse Breeding

Students are introduced to drake traits and inheritance patterns, do experiments, look at data, draw tentative conclusions, and then test these conclusions with more experimentation. (Drakes are a model species that can help solve genetic mysteries in dragons, in much the same way as the mouse is a model species for human genetic disease.)

The GeniGUIDE project will improve student learning of genetics content by using student data from Geniverse. The software will continually monitor individual student actions, taking advantage of ITS capabilities to sense and guide students automatically through problems that have common, easily rectified issues. At the classroom level, it will make use of this same capability to help learners by connecting them to each other. When it identifies a student in need of assistance that transcends basic feedback, the system will connect the student with other peers in the classroom who have recently completed similar challenges, thus cultivating a supportive environment.
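As a purely illustrative sketch (the function name and data structures here are hypothetical, not from the actual GeniGUIDE design), the classroom-level peer-matching idea amounts to: given a struggling student and a record of which challenges classmates have completed, suggest peers who have finished the same challenge.

```python
# Hypothetical sketch of the peer-matching idea described above.
# Names and structures are illustrative, not the GeniGUIDE implementation.

def suggest_peer_helpers(struggling_student, challenge, completions, max_peers=2):
    """Return up to max_peers classmates who completed the given challenge.

    completions: dict mapping student name -> list of completed challenge IDs.
    """
    return [
        name
        for name, done in completions.items()
        if name != struggling_student and challenge in done
    ][:max_peers]

completions = {
    "Ana":   ["intro", "monohybrid"],
    "Ben":   ["intro"],
    "Chris": ["intro", "monohybrid", "dihybrid"],
}
# Ben is stuck on the monohybrid-cross challenge; who could help?
print(suggest_peer_helpers("Ben", "monohybrid", completions))
# → ['Ana', 'Chris']
```

The real system would of course draw on its statistical learner models rather than a simple completion list, but the sketch shows the shape of the idea: routing help requests to peers with relevant recent experience.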

At the highest level, the software will leverage the rich data being collected about student actions and the system’s evolving models of student learning to form a valuable real-time resource for teachers. GeniGUIDE will identify students most in need of help at any given time and provide alerts to the teacher. The alerts will include contextual guidance about students’ past difficulties and most recent attempts as well as suggestions for pedagogical strategies most likely to aid individual students as they move forward.

The Concord Consortium and North Carolina State University will research this layered learner guidance system that aids students and informs interactions between student peers and between students and teachers. The project’s theoretical and practical advances promise to offer a deeper understanding of how diagnostic formative data can be used in technology-rich K-12 classrooms. As adaptive student learning environments find broad application in education, GeniGUIDE technologies will serve as an important foundation for the next generation of teacher support systems.

Apple’s textbooks and deeply digital learning

I was on the plane returning from Wednesday’s great Cyberlearning Summit when Apple went live with its announcement about iBooks 2 and its foray into the textbook game. This is particularly relevant, as it applies directly to the concerns about digital textbooks and innovation we’ve been addressing in our calls for deeply digital learning. I’m sure I’ll have more to come, but here are some initial thoughts about this announcement and its implications.

Innovation? In many ways, the announcement was an example of the many things there are to be concerned about regarding shallow innovations in digital learning. The main features touted about digital textbooks were the obvious ones. They weigh less. They don’t fray at the edges. They can include images and videos. You can highlight. You can jump to individual sections, pages, or chapters. These are all good features of digital books, but they do very little to move us past the transmissionist pedagogy that textbooks so strongly represent today.

Openness? A second large concern, raised by many in the ensuing blogosphere echoes, relates to the lack of openness these textbooks permit. Creation occurs principally or solely (for now) on a Mac, via Apple’s iBooks Author application, and books created with it are for use on the iPad only, not even on Mac computers. All somewhat understandable, since Apple is all about ecosystems, and the iPad is certainly a good tool for use in the classroom. However, the strictures extend further in ways that seem relatively unpalatable in the long run. According to the iBooks Author EULA, as Dan Wineman identifies, the mere act of creating books via this application is supposed to legally restrict where they can be sold or distributed. This ranges from surprising to shocking, depending upon your views, and the viability of such a model remains to be seen. Further, the standard used for iBooks, while a thin wrapper over ePub3, is apparently a closed standard, and the application is unlikely to output in formats that permit content to be used and distributed as widely as should be possible for educational materials.

However, there is a slight silver (gray?) lining involved, as the EULA does make clear that textbooks created with iBooks Author can be distributed for free at will, seemingly across platforms as well. As long as you don’t ever want to attach a price to the materials, this may provide an out. “May” is the operative word, however, seeing as Apple has certainly been known to change its terms on a whim in the past.

Deeply Digital possibilities? This is where things get a bit interesting. Taking all the former concerns in stride (which may well be too difficult for many), the most intriguing and underreported innovation may yet lie within this announcement. The possibility of creating custom widgets for iBooks using HTML5 and JavaScript holds intriguing ramifications. Depending upon the potential and limitations of these widgets, it may be possible to begin opening up aspects of learning that transcend the mundane and push toward deeply digital learning. It’s as yet unclear, and it will require some cracks from programmers (in our camp as well as others) at stretching the possibilities of these Dashcode widgets for the iPad to see what they can enable. True computational models and simulations, rather than basic interactive images or animations? Access to probeware and sensors? Outside access to tools and data streams? Potential for real-time formative assessment and reporting on student progress?

It’s likely that some, but not all, of these will indeed be possible, and the iPad is a beautiful platform to create for, with creation tools that are usually equally elegant. Whether these push the possibilities of technology toward capabilities that can truly make a difference for teaching and learning, or whether Apple’s format and strictures will limit these examples to another small stride or shallow cut at innovative educational technology, remains to be seen.

Freak Control: On computing without keyboards

There have been some interesting posts recently demonstrating and discussing control of devices beyond the keyboard. First, every casual gamer’s dream has now come true: you can play Angry Birds using your brain as a controller. The implications for reaching an even higher vegetative state (er, state of flow) are simply staggering.

Second, one story that illustrates Apple’s genius in this arena and a second that questions it. If you missed All Things D’s story about the moment when Apple’s and Microsoft’s touch interface dreams diverged, chuck it into your Instapaper queue right now – it’s a great reminder of how far we’ve come in such a short time, and of how Microsoft continued a strange fumble with their Surface platform while Apple managed the transition from practice with the iPhone to full-on victory with the iPad. (I touched on the consumer side of this in my Perspective piece a year or so ago: how practicing with the iPhone’s interface readied the public for the concept of the iPad.)

Third, an interesting rant from Matt Honan at Gizmodo claims that Siri’s hands-off interface falls short of the nuanced user experience we have come to expect from Apple. Gruber agrees, and I have to say I do much of the time as well.

And finally, a group in Tokyo is turning everyday objects into interactive devices using projectors and cameras. I particularly like their turning a banana into a functioning telephone through the use of object detection and focused sound beams.

Happy snacking – maybe you can read this whole post without touching your computer. Just think “scroll up” really hard…

Reflections on a single-device world

We put the last clock radio in our house in the Goodwill pile last week. Seeing it sitting on the pile to go downstairs was a surprising revelation for me. Somehow it felt wrong for a reason I couldn’t place. Then it hit me: a clock radio was my first real gadget purchase.

For those who don’t recall, there was a time when clock radios were quite a novel invention. The ability to wake to the radio instead of some raucous bell was an entirely new concept. And to a budding radio-phile like me, it seemed like the newest of frontiers. I remember looking across the counter at our local Sterling Drug on many a visit, and piling up birthday money and allowance until the mound was enough to purchase this coolest of things. The red glow of the lights and the late-night sessions listening to AM talk radio or trying to pull the strains of Dr. Demento out of the static seem as close now as they did then.

Clock radio destined for the dustbin

This was a first – a multi-function gadget. And the mere concept of combining the functions was mesmerizing. Now, it’s entirely replaced by a single, thoroughly multi-function gadget. I use my iPhone both to listen to the radio as I’m falling asleep and to wake me up. Of course, our family point-and-shoot camera and car GPS device are also starting to gather dust at a surprising rate.

This is no new revelation, of course, but the fundamental nature of my feeling at this loss was interesting to note. What other fundamental weirdness will we be in for as technology continues to contract our everyday world and transform the world of education? The first time a teacher enters a classroom without a board he or she can write on? The first time a mom realizes she doesn’t need to buy any spiral-bound notebooks at the back-to-school sales? The first time a principal realizes that she can find out about the misconceptions all of her students hold on a given day about the science concepts they are studying, even the students who transferred into the school that morning?

Time will tell with all of these. For now, I need a nap – Siri, can you set a timer to wake me up…?

How old media dissolved the essence of Joi Ito’s NYT story

I recently fawned over Joi Ito’s NY Times story about how openness and the Internet change the way we approach innovation and daily life. However, the unabridged version he posted to his blog is actually much better. It’s interesting to think for a moment about this episode.

First, the simple fact that this had to be shortened is reflective of old/new media constraints. Clearly, the costs and space of paper itself drive this need at least partially. Electrons are cheap, and Joi has no problem posting something of any length he wants on his blog. New media don’t live by these constraints. And the collision of the two is befuddling many in the industry right now. The NY Times online version of this story is the same as the print version as far as I can tell, though it does include a link to the MIT Media Lab. And, in fact, the Times’ forward-thinking hyperlink system is what enabled me to link directly to a highlighted sentence in Joi’s story online.

Now, I understand the necessity of editing things down. The vote-with-your-mouse Internet has made that all too clear. And I benefit significantly from the editing of others. But I think this piece suffered at the hands of old media constraints. Let’s look into it a bit.

First, the description of the creation of X.25 in the NYT article has it as a “standard that seemed to anticipate every possible problem and application.” When I first read that, I questioned why we ended up going with IP after all. Reading Joi’s phrasing, however, tells a different story. He states, “The X.25 people were trying to plan and anticipate every possible problem and application. They developed complex and extremely well-thought-out standards that the largest and most established research labs and companies would render into software and hardware.” The subtleties here describe quite a different proposition. “Trying to plan and anticipate every problem” and developing “complex” standards is not exactly the same thing as “seeming to anticipate every…problem.” From an Agile development point of view, one might in fact become increasingly skeptical of any solution that strives to anticipate every problem. This is part of the point I think Joi is trying to make here, and it is blurred in a subtle, but important, way by this edit.

A second edit that removes important concepts comes in the loss of the reference to the RFC process. To the NY Times’ credit, they have weighed in admirably on this in the past, and it is certainly a bit obscure, so I understand the change. However, for anyone familiar with the story, this nuance shows some of the depth behind how openness triumphed in this case. Not by pure magic, but as a product of carefully managed process and group dedication in equal measure.

Though the Times article captures many of the nuances of the argument that the Maker movement parallels much about the early days of the Internet, a subtle change loses meaning here, too. Joi describes 3D printers and the related ecosystem as “cheaper, standardized and connected via the Internet,” three elements essential to the core of the innovation happening here. While the Times’ description tightens this up nicely and gets the basics right, it is interesting to note the nuances that are missed.

New platforms and new media permit new messages and new opportunities. That is some of what the story of the Internet’s birth tells us. In the same way, it’s what we also see playing out in educational technology today. Let’s look closely as we go forward, and try not to miss the nuances.

The Joi of Openness

I just finished reading Joi Ito’s great New York Times essay about the Internet and openness. This is clearly a piece that resonates with many of us at the Concord Consortium as well as in the creative technology community at large. Joi does an excellent job explaining and characterizing what it is about the Internet’s birth and durability over time, capturing that often ineffable quality of open interconnectedness that is responsible for many of the aspects of networked life we take for granted every day.

One of the most exciting aspects of this all is something that Joi’s piece captures well – the freewheeling and wide-ranging freedom that this openness provides everyone who takes part. The fact that anyone with a good idea is in theory equally close to any other (though this apparently leaves the many people in this country without good network connections out in the cold) is what makes the fabled guys-in-a-garage notion able to spring forth as the next Facebook or Google. It’s also what is fueling the burgeoning Maker movement (which the Economist captured somewhat well in a recent article highlighting the Maker Faire that our colleagues at the New York Hall of Science now hold each year).

Fortune cookie says: To succeed, you must share.

Plying this sensibility for the development of educational technology is our stock in trade at the Concord Consortium. Capturing this sensibility in everything we do, on the other hand, seems to be in the genes of most all of us. Listening to the table conversations over one of our staff potlucks, I’m always amazed at how one central thread runs through such a diversity of discourse: a shared interest in creation and the discovery of new things. This overriding interest is probably what makes most of us here science geeks. The idea of applying it in order to make education better for students across the world is what draws us to the Concord Consortium.

It’s in the pursuit of this interest that we all do our work here, but it’s in the enactment of openness that the work is able to thrive. Our code is on GitHub, our content is shared, and our ideas are in the open air.

I understand intrinsically what Joi describes with his use of the wonderful word “neoteny” to describe the childlike wonder of discovery retained as an adult. It’s why our friends at the Media Lab’s Lifelong Kindergarten Group chose their moniker. And now that we have been introduced to the word, I’m fairly certain it’s one we’ll hear included in the vivacious conversations across the table at our next group potluck. Thanks, Joi, for helping the Times’ readers make the connection between this concept and the openness that is so important to innovation and the Internet.

Update: 12/6 at 10:10 AM: Corrected the idiom to the proper “stock in trade,” and (reluctantly!) corrected the typo “monkier” to read “moniker.”