Author Archives: Chad Dorsey

Chad’s Great American Eclipse Chase: Part 1 — Prologue

This series details the eclipse-chasing exploits of our President and CEO, Chad Dorsey, as he heads down to Tennessee on a quest for the total solar eclipse. See the whole series.

I. Packing—and recalling.

As the suitcases start to fill and the lists of remaining to-dos become gradually shorter, the actual fact seems increasingly hard to believe—2017 is actually here, and the chase is about to begin once again.

When you’ve been targeting a specific date for more than 25 years, anticipation takes on a more subtle dimension. You see, this date has been in our sights since the afternoon of July 11, 1991. That was the day when our family, together with our close friends the Stewarts, experienced the 1991 eclipse in Baja, Mexico. There are more tales to be told of that magnificent eclipse later in this series, but only two facts are of real importance for the time being. First, that was the moment when I (and thousands of others standing there that day) officially caught eclipse fever, and second, that was the moment when, in the growing light of the receding partial phases, we all hunched over the battered copy of Astronomy magazine to ask the same question—where and when could we find totality again?

Astronomy magazine from July 1991

In July 1991, Astronomy magazine featured the “eclipse of the century.”

While there were many upcoming eclipses to be chased (and my father, an 8th grade science teacher, has sought out at least one other in the intervening years—Aruba in 1998), the most attractive candidate was obvious at a glance. As we scanned the graphic of future eclipse tracks, we saw many that fell across the ocean, hit land in Siberia in the dead of winter, or presented themselves in other impractical or hard-to-reach areas. But one opportunity popped immediately off the page—a beautiful crimson stripe straight across the United States. And in the middle of August. It was enough to make any eclipse fanatic’s heart beat a little faster, if not send them running to the phone to book a hotel room right then and there.

The only catch? That enticing track marked a moment in time that lay a full 26 years in the future, practically an eternity for someone at my stage of life then. Nonetheless, we made a pact at that moment that we would do whatever it took to reconvene our group in the center of the moon’s shadow on that faraway date. I speculated with the Stewarts’ daughter, my friend since childhood, about what we might be doing in life then, imagining where we might be, but it all seemed far too far away to ever be a reality.

Map of future eclipse tracks

The future, as shown in 1991

Fast-forward those 26 years, and that daydream now seems both immediately close and unfathomably distant. Preparations for August 21 seem infused by a similar duality. While all the intrigue and mystery remain, the buildup seems prosaic in a way, and the hype and excitement in the media dull somewhat on impact. But I do know that I’m finally heading to see that actual eclipse, and that we actually are all reconvening once again, this time with a raft of intervening life experiences and three additional children (and one dog) in tow among us.

II. Realizing again, with new eyes.

Group with eclipse glasses in Baja Mexico

The Dorseys and Stewarts prepare for the 1991 eclipse in Baja, Mexico

Now and then, though, the jolts come, and instants of full-blown excitement wash in, sweeping away any accumulated feelings that this might be an ordinary trip. Seeing an article try to describe totality to those who have never experienced it. Hearing the recorded gasps of eclipse watchers in Manitoba, Canada, in 1979—the eclipse where my father first caught the bug—at the moment the shadow first engulfs them. In those instants, anything timeworn drops away, and I’m instantly a kid again. Because in the moment of totality, we’re all transformed into kids again. Screaming, laughing, gasping, staring. Truly—it’s just like that. Those few minutes are somehow transcendent, childlike wonder played out on a grand scale in a fleeting experience that binds all those who watch together. Yeah, I know—when you read them, such descriptions are florid. Hyperbolic, even. But to anyone who’s experienced totality, they barely even begin to capture the essence of the thing.

The most exciting part about this one is that the people it stands to bind together include my own children. In a wonderful twist of fate, my son happens to be barely a year older than I was when I saw my father pack for Manitoba.

Manitoba eclipse T-shirt

A t-shirt depicts the 1979 Manitoba, Canada, eclipse

I still remember seeing him lay out the filters, tripods, and camera equipment in a huge spread on the living room floor and observing as he rehearsed his moves over and over—removing the filters, training the camera, snapping photos—a carefully choreographed dance timed to the transition into totality. This time, my son and daughter get to come along, and I get the honor of passing the passion for eclipse chasing on to the next generation. That is, as long as the weather cooperates…

Data Science Education Meetup at NSTA 2017

Thanks to all the great folks who attended our NSTA 2017 Data Science Education Meetup at BottleRock LA last night. We had a great crowd attend, complete with representatives from CASIS, the education arm of the International Space Station (ISS), the folks from MiniPCR, LAUSD, Lodi USD, the CREATE for STEM Institute, Educational Passages, and more! We’re still processing the yummy food and hours of great conversation about all things data.

Happy attendees at the Data Science Education Meetup at NSTA 2017

A few highlights of the evening included:

  • Hearing about streaming atmospheric data from successful rocket and balloon launches at Lodi schools and others all across California, and about ExoLab experiments on the ISS
  • Debating the grand challenges and barriers to data science education with teachers and district staff
  • Hearing about how using data, modeling and evidence has transformed teaching for those using the Interactions curriculum from the CREATE for STEM Institute and the Concord Consortium
  • Seeing Liam Kennedy’s amazing ISS-ABOVE device that changes your TV into a portal to the International Space Station, complete with live alerts when the ISS passes overhead
  • Learning about how MiniPCR’s Genes in Space competition is bringing high school students’ experiments to the space station and how a new USB-stick-sized gene sequencing device is speeding up microbe research on the ISS by orders of magnitude
  • Hearing stories of how data from GPS-tracked drifter boats from Educational Passages has connected students across continents

The place was packed and the evening was full of great geekery, all the more evidence that the time has come for data science education. We thank everyone who attended and we look forward to building networks, collaborations and new modes of teaching and learning together.

Missed us last night? No problem. Join us for the next one, at an upcoming conference near you! See all our meetups and RSVP at

Learning Everywhere: taking inspiration from two partners, At-Bristol and Exploradôme

Innovative applications of technology are found virtually everywhere, transforming all kinds of spaces into opportunities for STEM learning that move beyond the walls of classrooms and past schooltime hours. Persistent engagement and interest in meaningful learning activities and practices can spur an enduring pursuit of science.

Our Learning Everywhere initiative is exploring, prototyping, and creating new learning experiences—including exhibits, mobile apps, and user tracking technologies—that connect and coordinate learning across museums and bridge in-school and out-of-school time. To survey new learning spaces and interactive technologies, we visited two of our Learning Everywhere partners, At-Bristol and Exploradôme, as well as other science centers in the London and Paris areas, including the Science Museum of London and the City of Science and Industry at La Villette.

Chad Dorsey and Sherry Hsi at the entrance of At-Bristol Science Center.

Donning our bracelets printed with unique barcode IDs at the entrance, we explored the many At-Bristol exhibits, scanning our bracelets to collect and compare our data with data from other visitors. At some stations, we learned how the creators of Wallace and Gromit at Aardman Animations’ studios, also in Bristol, made their great movies before creating our own stop-motion animations. A quick scan of our wrists saved these animations to a website where we could access them later. Other parts of our experience, from scatterplots of our height compared to other visitors to videos of ourselves on the slow-motion “startle-cam,” were added to our electronic portfolio during the visit. We even found ourselves wearing bee wings and performing a waggle dance to mimic bee behaviors in an exhibit about the mysterious lives of bees! These and other digital artifacts from our visit served as opportunities for further conversation and inquiry back home, and as a source of fun for our families. (Needless to say, the bee dance video was a source of great enjoyment, but it will not be showing up publicly on Instagram any time soon!)

At-Bristol Science Center’s animation exhibits area.

Our visit to London coincided with the grand opening of Wonder Lab at the Science Museum of London. Our guide, Dave Patten, Head of New Media there, showed us the spacious, colorful interactive gallery designed to encourage visitors to collaborate, play, and learn from conversation. In another exhibition, Engineer Your Future, teens and young adults use their personal mobile devices in public gallery spaces to design vehicles, then launch and control them on a huge public screen! Other large-screen and combined physical-digital exhibits featured different design-oriented and competitive games on energy, vehicle design, and different engineering careers.

Science Museum of London’s Wonder Lab the evening before its grand opening.

The many heads of Dave Patten from the Science Museum of London in a Wonder Lab exhibit.

Moving farther south, we visited the Cité des Sciences et de l’Industrie in Paris, where an immense, airy space houses multiple galleries of permanent and temporary exhibitions. Among them, designed areas invite reflection and discussion among school groups or individuals. In a highlight of the visit, François Vescia, Senior International Project Manager at the museum, gave us a tour of their fabrication laboratory, Carrefour Numérique. This public space is a wonderland of design and making, custom-created to invite design collaboration and discussions that merge seamlessly into the design and construction of physical prototypes and objects. Visitors access materials and machinery ranging from e-textile design tools to milling machines, 3D printers, and laser and vinyl cutters to turn their visions into reality. Drop-in and scheduled programs, workshops, and in-person support are available, and visitors can begin designing projects digitally in the multimedia lab, then move next door to fabricate them.

Chad Dorsey, François Vescia, and Sherry Hsi at Parc de la Villette, an area of Paris known for the Cité des Sciences et de l’Industrie science museum.

Entrance to the Fab Lab at the City of Sciences and Industry in Paris.

Taking the train to the southern suburbs of Paris, we visited the Exploradôme, where we met Goery Delacôte, its founder and a longstanding member of the Concord Consortium Board of Trustees. Goery toured us around the great exhibits packed into the floor of this small museum, where the motto is “Not touching is not allowed!” Playing like kids (and some of us were!), we explored visual perception phenomena, dug holes for water in a version of the AR Sandbox Sherry helped create, and worked together to launch six-foot smoke rings that rose to the ceiling.

Goery Delacôte, Sherry Hsi, and Chad Dorsey at the entrance of the Exploradôme in Vitry-sur-Seine, southeast of Paris. Colors for the building were selected from colors found around the local neighborhood.

The thoughtful curation and orchestration of interactive exhibits throughout our Learning Everywhere tour was inspiring, as was the innovative use of technology to engage visitors and extend museum experiences beyond the visit. As we collate and catalog these experiences and technologies as part of the project work, we look forward to working further with museums and other out-of-school institutions to bridge and extend learning everywhere.

Making smoke rings collaboratively at the Exploradome with Goery Delacôte and Sherry Hsi.

Making virtual lakes by digging in the Augmented Reality Sandbox exhibit at the Exploradome.

Exploring optical illusions and visualization puzzles at the Exploradome with Goery Delacôte.

Launching a new interdisciplinary field of study in spoken language technology for education

A grant from the National Science Foundation will help launch a new interdisciplinary field of study in spoken language technology for education. The one-year “Building Partnerships for Education and Speech Research” project will bring together the extensive education research and educational technology backgrounds of the Concord Consortium and SRI International’s Center for Technology in Learning (CTL) with two of the strongest groups in spoken language technology research: the Speech Technology and Research (STAR) Laboratory at SRI and the Center for Robust Speech Systems (CRSS) at the University of Texas at Dallas.

The sophistication of technologies for processing and understanding spoken language—such as speech recognition, detection of individual speakers, and natural language processing—has radically improved in recent years, though most people’s image of modern spoken language technology is colored by often-finicky interactions with Siri or Google products. In fact, many lesser-known technologies can now automatically detect many features of speech with high accuracy, including question asking, dialog interchanges, word counts, indications of emotion or stress, and specific spoken keywords.
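To make the kinds of features above concrete, here is a toy sketch. It uses simple text heuristics over a written transcript rather than real acoustic or language models, and the function name, dialog, and keyword list are invented for illustration:

```python
# Illustrative heuristics only: real spoken language technologies use
# trained acoustic and language models. This sketch shows the *kinds*
# of features mentioned above (word counts, question detection,
# keyword spotting) computed over a text transcript.
import re

def transcript_features(utterances, keywords):
    """Compute simple per-utterance features from (speaker, text) pairs."""
    features = []
    for speaker, text in utterances:
        words = re.findall(r"[a-zA-Z']+", text.lower())
        features.append({
            "speaker": speaker,
            "word_count": len(words),
            # Crude proxy for question asking in written transcripts
            "is_question": text.strip().endswith("?"),
            # Crude keyword spotting: which target words were spoken
            "keywords_hit": sorted(set(words) & set(keywords)),
        })
    return features

dialog = [
    ("teacher", "What do you predict will happen to the gas?"),
    ("student", "I think the pressure goes up when we heat it."),
]
for f in transcript_features(dialog, {"pressure", "gas", "energy"}):
    print(f)
```

Real systems operate on audio and handle disfluencies, overlapping speakers, and recognition errors, which is exactly why the specialized research groups named above are part of the project.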

However, educational research has barely begun exploring these technologies’ potential to provide insight into, and eventually revolutionize, research areas as diverse as collaboration, argumentation, discourse analysis, emotion, and engagement. And capturing the most critical and substantive interactions during the teaching and learning process—the discourse and conversation among students, teachers, and mentors—remains elusive.

The central goal of this new project is to generate interest in and momentum toward the use of spoken language technologies in education research. The potential for such applied technologies is vast, and the broader impacts could be significant. As these technologies become established for use in improved education research and development, researchers will be able to better understand and target interventions, educators will be able to monitor and adjust their interactions with learners, and learners will be better informed of their learning progress.

The National Science Foundation funds grant to pair intelligent tutoring systems with Geniverse

Games, modeling, and simulation technologies hold great potential for helping students learn science concepts and engage with the practices of science, and these environments often capture meaningful data about student interactions. At the same time, intelligent tutoring systems (ITS) have undergone important advancements in providing support for individual student learning. Their complex statistical user models can identify student difficulties effectively and apply real-time probabilistic approaches to select options for assistance.
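As one concrete illustration of such probabilistic approaches, Bayesian knowledge tracing (BKT) is a classic student model used in many ITS implementations. The sketch below is generic, not drawn from GeniGUIDE or any particular system, and the parameter values are arbitrary examples:

```python
# Illustrative sketch of Bayesian Knowledge Tracing (BKT), a classic
# probabilistic student model used by many intelligent tutoring
# systems. Parameter values here are arbitrary examples.

def bkt_update(p_known, correct, p_learn=0.2, p_slip=0.1, p_guess=0.25):
    """Update the probability that a student knows a skill after one answer."""
    if correct:
        # P(known | correct answer), via Bayes' rule
        num = p_known * (1 - p_slip)
        den = num + (1 - p_known) * p_guess
    else:
        # P(known | incorrect answer)
        num = p_known * p_slip
        den = num + (1 - p_known) * (1 - p_guess)
    conditioned = num / den
    # Account for the chance the student learned the skill on this step
    return conditioned + (1 - conditioned) * p_learn

# A student who answers mostly correctly drifts toward estimated mastery
p = 0.3  # prior probability the skill is known
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

A running estimate like this is what lets a tutoring system decide, answer by answer, whether a student is struggling enough to warrant intervention.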

The Concord Consortium is proud to announce a four-year $1.5 million grant from the National Science Foundation that will pair Geniverse with robust intelligent tutoring systems to provide real-time classroom support. The new GeniGUIDE—Guiding Understanding via Information from Digital Environments—project will combine a deeply digital environment with an ITS core.

Geniverse is our free, web-based software for high school biology that engages students in exploring heredity and genetics by breeding and studying virtual dragons. Interactive models, powered by real genes, enable students to do simulated experiments that generate realistic and meaningful genetic data, all within an engaging, game-like context.

Geniverse Breeding

Students are introduced to drake traits and inheritance patterns, do experiments, look at data, draw tentative conclusions, and then test these conclusions with more experimentation. (Drakes are a model species that can help solve genetic mysteries in dragons, in much the same way as the mouse is a model species for human genetic disease.)

The GeniGUIDE project will improve student learning of genetics content by using student data from Geniverse. The software will continually monitor individual student actions, taking advantage of ITS capabilities to sense and guide students automatically through problems that have common, easily rectified issues. At the classroom level, it will make use of this same capability to help learners by connecting them to each other. When it identifies a student in need of assistance that transcends basic feedback, the system will connect the student with other peers in the classroom who have recently completed similar challenges, thus cultivating a supportive environment.

At the highest level, the software will leverage the rich data being collected about student actions and the system’s evolving models of student learning to form a valuable real-time resource for teachers. GeniGUIDE will identify students most in need of help at any given time and provide alerts to the teacher. The alerts will include contextual guidance about students’ past difficulties and most recent attempts as well as suggestions for pedagogical strategies most likely to aid individual students as they move forward.
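As a purely hypothetical sketch of this kind of real-time triage (the field names, weights, and data are invented for illustration, not taken from GeniGUIDE):

```python
# Hypothetical sketch of a "who needs help most" ranking of the sort a
# real-time teacher dashboard might compute. Field names and weights
# are invented for this illustration.

def help_priority(student):
    # More recent failed attempts and more time stuck raise priority
    return 2.0 * student["recent_failures"] + student["minutes_stuck"] / 5.0

students = [
    {"name": "A", "recent_failures": 0, "minutes_stuck": 3},
    {"name": "B", "recent_failures": 3, "minutes_stuck": 12},
    {"name": "C", "recent_failures": 1, "minutes_stuck": 20},
]
alerts = sorted(students, key=help_priority, reverse=True)
print([s["name"] for s in alerts])  # prints ['B', 'C', 'A']
```

A real system would of course draw on a far richer learner model than two counters, but the idea of ordering teacher alerts by an estimated need-for-help score is the same.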

The Concord Consortium and North Carolina State University will research this layered learner guidance system that aids students and informs interactions between student peers and between students and teachers. The project’s theoretical and practical advances promise to offer a deeper understanding of how diagnostic formative data can be used in technology-rich K-12 classrooms. As adaptive student learning environments find broad application in education, GeniGUIDE technologies will serve as an important foundation for the next generation of teacher support systems.

Apple’s textbooks and deeply digital learning

I was on the plane returning from Wednesday’s great Cyberlearning Summit when Apple went live with its announcement about iBooks 2 and its foray into the textbook game. This is particularly relevant, as it applies directly to the concerns about digital textbooks and innovation we’ve been addressing in our calls for deeply digital learning. I’m sure I’ll have more to come, but here are some initial thoughts about this announcement and its implications.

Innovation? In many ways, the announcement was an example of the many things there are to be concerned about regarding shallow innovations in digital learning. The main features touted about digital textbooks were the obvious ones. They weigh less. They don’t fray at the edges. They can include images and videos. You can highlight. You can jump to individual sections, pages, or chapters. These are all good features of digital books, but do very little to move us past the transmissionist pedagogy that textbooks represent so strongly today.

Openness? A second large concern, raised by many in the blogosphere since the announcement, relates to the lack of openness of these textbooks. Creation occurs principally or solely (for now) on a Mac, via Apple’s iBooks Author application, and books created with it are for use on the iPad only, not even for use on Mac computers. All of this is somewhat understandable, since Apple is all about ecosystems, and the iPad is certainly an imaginably good tool for use in the classroom. However, the strictures extend further in ways that seem relatively unpalatable in the long run. According to the iBooks Author EULA, as Dan Wineman identifies, the mere act of creating books via this application is supposed to legally restrict where they can be sold or distributed. This ranges from surprising to shocking, depending upon your views, and the viability of such a model remains to be seen. Further, the standard used for iBooks, while a thin wrapper over ePub3, is apparently a closed standard, and the application is unlikely to output in formats that permit content to be used and distributed as widely as should be possible for educational materials.

However, there is a slight silver (gray?) lining involved, as the EULA does make it clear that textbooks created with iBooks Author can be distributed for free at will, seemingly across platforms as well. As long as you don’t ever want to attach a price to the materials, this may provide an out. May is the operative term, however, seeing as Apple has certainly been known to change its terms on various whims in the past.

Deeply Digital possibilities? This is where things get a bit interesting. Taking all the former concerns in stride (which may well be too difficult to do for many), the most intriguing and underreported innovation may be yet to be discovered within this. The possibility of creating custom widgets for iBooks using HTML5 and JavaScript holds intriguing ramifications. Depending upon the potential and limitations of these widgets, it may be possible to begin opening up aspects of learning that transcend the mundane and push toward deeply digital learning. It’s yet unclear, and will require some cracks from programmers (in our camp as well as others) to try to stretch the possibilities of these Dashcode widgets for the iPad to see what they can enable. True computational models and simulations, rather than basic interactive images or animations? Access to probeware and sensors? Outside access to tools and data streams? Potential for real-time formative assessment and reporting on student progress?

It’s likely that some, but not all, of these will indeed be possible, and the iPad is a beautiful platform to create for, with creation tools that are usually equally elegant. Whether these push the possibilities of technology toward capabilities that can truly make a difference for teaching and learning, or whether Apple’s format and strictures will limit these examples to another small stride or shallow cut at innovative educational technology, remains to be seen.

Freak Control: On computing without keyboards

There have been some interesting posts recently demonstrating and discussing control of devices beyond the keyboard. First, every casual gamer’s dream has now come true: you can play Angry Birds using your brain as a controller. The implications for reaching an even higher vegetative state of flow are simply staggering.

Second, one story that illustrates Apple’s genius in this arena and a second that questions it. If you missed All Things D’s story about the moment that Apple and Microsoft’s touch interface dreams diverged, chuck it into your Instapaper queue right now – it’s a great reminder of how far we’ve come in such a short time, and about how Microsoft continued a strange fumble with their Surface platform while Apple managed this transition from practice with the iPhone to full-on victory with the iPad. (I touched on the consumer side of the success of practicing with the iPhone’s interface as readying the public for the concept of the iPad in my Perspective piece a year or so ago.)

Third, an interesting rant from Matt Honan at Gizmodo argues that Siri’s hands-off interface falls short of the nuanced user experience we have come to expect from Apple. Gruber agrees, and I have to say I do much of the time as well.

And finally, a group in Tokyo is turning everyday objects into interactive devices using projectors and cameras. I particularly like their turning a banana into a functioning telephone through the use of object detection and focused sound beams.

Happy snacking – maybe you can read this whole post without touching your computer. Just think “scroll up” really hard…

Reflections on a single-device world

We put the last clock radio in our house in the Goodwill pile last week. Seeing it sitting on the pile to go downstairs was a surprising revelation for me. Somehow it felt wrong for a reason I couldn’t place. Then it hit me: a clock radio was my first real gadget purchase.

For those who don’t recall, there was a time when clock radios were quite a novel invention. The ability to wake to the radio instead of some raucous bell was an entirely new concept. And to a budding radio-phile like me, it seemed like the newest of frontiers. I remember looking across the counter at our local Sterling Drug for many a visit, and piling up birthday money and allowance until the mound was enough to purchase this coolest of things. The red glow of the lights and the late-night sessions listening to AM talk radio or trying to pull the strains of Dr. Demento out of the static seem as close now as they did then.

Clock radio destined for the dustbin

This was a first – a multi-function gadget. And the mere concept of combining the functions was mesmerizing. Now it’s been entirely replaced by a single, far more multi-function gadget. I use my iPhone both to listen to the radio as I’m falling asleep and to wake me up. Of course, our family point-and-shoot camera and car GPS device are also starting to gather dust at a surprising rate.

This is no new revelation, of course, but the fundamental nature of my feeling at this loss was interesting to note. What other fundamental weirdness will we be in for as technology continues to contract our everyday world and transform the world of education? The first time a teacher enters a classroom without a board he or she can write on? The first time a mom realizes she doesn’t need to buy any spiral-bound notebooks at the back-to-school sales? The first time a principal realizes that she can find out about the misconceptions all of her students hold on a given day about the science concepts they are studying, even the students who transferred into school that morning?

Time will tell with all of these. For now, I need a nap – Siri, can you set a timer to wake me up…?