Archive for December 2011

Building Learn.Ember.js, part 1: I say App, you say Document

December 18th, 2011 by Richard Klancer

Summary: I created a prototype of Learn.Ember.js, an interactive tutorial application for web developers who want to learn about Ember.js. Along the way I was reminded that one of the most useful things about HTML5 is that it helps us to blur the app vs. document distinction in useful ways.

Oh, and by the way, we’re hiring!

Here at the Concord Consortium we believe that interactive computational simulations are powerful tools for learning about the world in ways that were not previously practical, or even possible. Google seems to agree; their philanthropic arm Google.org recently gave us a substantial grant to make an HTML5 version of our Molecular Workbench molecular simulation environment.

Changing the world

But Google didn’t approach us just because they agree that simulations of molecular behavior are a great way to learn about science. They approached us because we have spent 10 years writing well-regarded content for Molecular Workbench. We don’t just make simulations. We embed them in documents that introduce topics gently, encourage you to play with the simulation in productive ways, and in general encourage you to think.

It turns out there are many other domains that can benefit from open-ended tools embedded in structured “learning activities” available via browser. In particular, web development itself can benefit.

Inspiration from learn.knockoutjs.com

Here at the Concord Consortium I mostly do client-side web app development, and so recently I found myself surveying the new crop of client-side MVC libraries. I was looking for a lighter-weight alternative to SproutCore (which we have used for a few projects) while we waited to see what would come of the greatly slimmed-down, SproutCore-inspired library that was then supposed to become SproutCore 2.0, and is now a separate project called Ember.js.

But there are a lot of “maybe” development tools out there — tools which might be useful someday, but which I don’t need urgently, and which aren’t such breakthroughs that they need to be understood for their own sake. One of the “maybe” libraries I came across was Steve Sanderson’s impressive Knockout.js.

Since I wasn’t doing this survey “for real”, there was a chance that I would have read through the Knockout documentation in detail, downloaded the library, and made sample pages to play with its features. A small chance. There are only so many hours in a day.

But Knockout.js has a secret weapon: its companion tutorial site, learn.knockoutjs.com. Without quite intending to, within a few minutes of stumbling onto the tutorials I built and ran working examples that felt like plausible components of a Knockout-powered app, right in the tutorial page itself. After I finished the first tutorial I had a much better idea of what kind of problems Knockout solves, and how it solves them, than I would have gotten from the usual desultory flip through the Knockout homepage. (You should try the tutorials yourself!)

Prototyping Learn.Ember.js

As it turns out, Ember.js (née SproutCore 2.0) is shaping up to be a cleanly designed and powerful library with a solid team behind it, and I am enthusiastic about its future.

And as Scott has previously blogged, we at Concord would like to create more value for the open source ecosystem. So I’ve begun work on a side project I call Learn.Ember.js. You can see the first public prototype here. (Warning: this does not work in some browsers, notably older versions of Firefox and — wait for it — IE.)

Once I had the most basic functionality working — two Ace editors for the Javascript code and the view template, and an embedded iframe for the results — I wanted to focus on establishing a clean visual design. That meant I had to stop writing code and stop dreaming up potential features long enough to focus on design. Fortunately I was saved from withdrawal symptoms by all the opportunities that opened up for obsessive font fiddling and CSS tweaking.
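(For the curious, here is a rough sketch of how that basic wiring can be done. This is not the prototype’s actual code; the element ids, the script paths, and the page-assembly details are all assumptions.)

    // A minimal sketch of the editor-plus-iframe wiring (not the actual
    // Learn.Ember.js code; ids and paths are made up for illustration).
    var jsEditor       = ace.edit("js-editor");        // Ace editor for the Javascript code
    var templateEditor = ace.edit("template-editor");  // Ace editor for the view template
    jsEditor.getSession().setMode("ace/mode/javascript");
    templateEditor.getSession().setMode("ace/mode/html");

    function runExample() {
      // Build a complete page from the two editor buffers and load it into
      // the results iframe by rewriting the iframe's document.
      var doc = document.getElementById("results-frame").contentWindow.document;
      doc.open();
      doc.write(
        "<!DOCTYPE html><html><head>" +
        "<script src='lib/jquery.js'><\/script>" +
        "<script src='lib/ember.js'><\/script>" +
        "<script type='text/x-handlebars'>" + templateEditor.getValue() + "<\/script>" +
        "<script>" + jsEditor.getValue() + "<\/script>" +
        "</head><body></body></html>"
      );
      doc.close();
    }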

The challenge here was not so much the design of the text content — though I tried to borrow the best from well-designed, readable sites I like, such as the new Boston Globe website, the Nieman Foundation’s Nieman Labs blog, arc90’s Readability tool, and Mark Pilgrim’s Dive Into HTML5. Rather, the challenge proved to be finding a way to keep all the buttons and assorted interactive knobby bits from interfering with the text.

My first attempts weren’t very promising. I couldn’t put my finger on why until I realized that the 4-box layout of learn.knockoutjs.com just wasn’t working for me. Somehow I got the idea that in order to make the tutorial readable, I would have to find a way to “unbox” the design and make it look something like a page of a good technical book that just happened to be able to run code. But that introduced its own problems. Where to put the results of the program the user writes (which is an interactive web app unto itself)? Put that in a box and, together with the Javascript and Handlebars/HTML inputs, which seem to need to be in boxes, you de facto have four little boxes again!

Gradually, it occurred to me that the program output could be in flow with the text, right below whichever paragraph prompted you to try running the updated program. Then, with just a little position: fixed and fluid-layout magic, it would be perfectly reasonable to have the whole page scroll, and the tutorial content with it. That is to say, I rediscovered the basic design of every web page ever.

You say app, I say document. Let’s call the whole thing off.

I mention this particular, uh, discovery because for some reason it seems to be common to design news and learning interactives to have little snippets of text written in large type and stuffed into little boxes. I confess to having cargo-culted this particular design idea not long ago; last year I even fired up an ancient Multimedia Beethoven CD-ROM made some time in the last century to confirm that, yup, instructional text is supposed to be really short and go into a little box on the left!

Microsoft Multimedia Beethoven, circa 1992. Via http://www.uah.edu/music/technology/cai/programs/msbeethoven.html

I wonder if this design habit is an artifact of the days of Flash and native applications built using layout manager APIs and visual UI builders. I get the impression that it’s both difficult and out of the ordinary to try to get text and interactive elements to flow together using those technologies. After all, the designer usually doesn’t know what the text is going to be in advance, and you, the developer, would have to come up with a way to keep track of where in the text the widgets go, then create the appropriate widget objects and break up the text string at the appropriate spots, so that you can feed it all to a layout manager that you would probably have to tweak and fiddle for your somewhat unusual use case. Which suggests a great idea — perhaps we could invent tokens that mean “a widget goes here” and have the author use those to mark up the text somehow…

I kid. But in a serious way, because one of the things I liked least about SproutCore is the way it seems to want to pretend that the web hasn’t been invented yet. It provides widgets that are really meant to be a particular size and at a particular, absolutely-positioned offset specified in Javascript. Until the oddly named StaticContentView was invented, the standard UI widget for displaying text was called a LabelView and wanted, again, to be a particular size (regardless of the size of its content) and at a particular location (regardless of the size of the content surrounding it).

The theory was that SproutCore is for designing “apps” rather than “documents”. But as you might guess, I don’t find that distinction very compelling in late 2011. Yes, clearly, there will always be some apps whose UI is legitimately just a box of buttons or a glorified data entry form. And “everything in its right place, and just where it was last time” is exactly the right motto for such apps.

But much of the interesting stuff in your life happens in some kind of stream of context. Facebook and Gmail (especially the new look) are containers for what are basically documents relevant to your life, yet their designers are not shy about inserting app widgets — stuff that does stuff — right into the middle of that “document”-like flow.

Educational apps likewise should include plenty of text that helps you understand the things they help you to do. At Concord, we’ve been calling for a “deeply digital” curriculum that weaves (among other interactive elements) sensors and simulations tightly into the fabric of textbooks and other media.

You occasionally hear “technology X is for app builders, and web technology Y is really for documents” – but that ignores an important category of innovation that is going on right now: apps that are documents. Or, wait, is that documents that are apps…?

What’s next for Learn.Ember.js

But, back to Learn.Ember.js and what’s next. The single page of tutorial text and the trivial example code I have so far are somewhat lazily inspired by the first page of the Knockout.js tutorial; I just needed some text that isn’t plain lorem ipsum. So I need to write more content. But it’s of equal importance to make it trivial for anyone to clone the Learn.Ember.js repo and submit pull requests with new content — or to simply host their own version, modified as they see fit.

For the time being the tutorial text itself is written as a Handlebars template with embedded expressions that indicate where to put the buttons, and the initial example code is a string-valued property of a Javascript object. So far, it’s been pretty painless to edit the tutorial text in Handlebars form, but the need to include view class names in the text is an obvious mixing of unrelated concerns — and, worse, the tutorial text is transported to clients as a compiled Handlebars template that is completely invisible to search engines. (Until the Javascript gets to work, the index.html file consists of a blank page.)
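(Concretely, the source for that arrangement looks roughly like the sketch below. The view class and property names here, Learn.RunButtonView, Learn.OutputView, initialCode, and so on, are illustrative guesses rather than the prototype’s actual identifiers.)

    {{!-- Handlebars source for the tutorial text (shipped to the browser in
          compiled form). The view expressions below name the Ember view classes
          that render the buttons, mixing view classes into the prose. --}}
    <p>Change the greeting, then press the button to rerun the example.</p>
    {{view Learn.RunButtonView}}
    {{view Learn.OutputView}}

    // ...while the initial example code rides along as a string-valued property
    // of a Javascript object (names here are made up):
    Learn.tutorialPage = Ember.Object.create({
      initialCode: "MyApp = Ember.Application.create();\n" +
                   "MyApp.greeting = 'Hello, world';"
    });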

I think the solution is to put the actual tutorial content, written in clean, semantic HTML5, into the body of the index.html file. Then we can agree as a convention to identify the “run” buttons by applying a particular CSS class, and to represent the location of the output by inserting an empty div with a particular CSS class. The Learn application can then easily use jQuery to scan the DOM as needed, inserting Ember.js views into the right places using Ember.View’s appendTo method and a little bit of DOM manipulation magic.
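(A minimal sketch of that convention might look like the following; the CSS class names and the Learn view and helper names are made up for illustration, not a settled API.)

    <!-- Tutorial content: plain, semantic, crawlable HTML in index.html -->
    <p>Change the greeting, then run the example:</p>
    <button class="run-code">Run it</button>
    <div class="example-output"></div>

    // The Learn application scans the document on load and grafts Ember views
    // into the marked spots (class and view names here are hypothetical):
    $(function () {
      $("div.example-output").each(function () {
        Learn.OutputView.create().appendTo(this);   // Ember.View's appendTo
      });
      $("button.run-code").click(function () {
        Learn.runCurrentExample();                  // hypothetical helper
      });
    });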

A remaining question would be whether and how to specify the initial code and the working “help me” code inside the HTML document. Putting the code in script tags with a fake MIME type (text/x-example-javascript) would make it easy to insert the code without having to HTML-escape it and without it running on page load, but then the code isn’t visible to user agents — like search engines — that don’t execute Javascript. Perhaps that is enough, or perhaps the code should go, properly escaped, into hidden <div> elements.
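(For illustration, the script-tag variant might look something like this; the id, the reading code, and the jsEditor variable, meant to be the Ace editor from the earlier sketch, are all assumptions.)

    <!-- Browsers ignore script elements with unrecognized types, so this code
         neither runs on page load nor needs HTML-escaping; it just sits in the
         page as inert text. -->
    <script type="text/x-example-javascript" id="example-1-initial">
      MyApp = Ember.Application.create();
      MyApp.greeting = 'Hello, world';
    </script>

    // The Learn application can later pull the text back out and hand it to the
    // editor (the id and editor variable are hypothetical):
    var initialCode = $("#example-1-initial").text();
    jsEditor.setValue(initialCode);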

If that were done, then anyone could write their own interactive Ember tutorial by writing an appropriately-marked up HTML file and inserting a few lines into the head of the document to include the Javascript code of the Learn application, which would take care of translating the tutorial document into a working app. And if they were to publish the HTML file to a server, it would be fully searchable.

Before I get that far, of course, I’ll have to tackle navigation between tutorials and pages of a tutorial — a bit of design I left for later. As fodder for a new blog post, of course!

Updated 1am Monday, December 19 with better information about browser compatibility after I made a quick fix to the Learn.Ember.js prototype itself to make it work in Safari, and with a link to all of our open positions rather than just the developer position.

Using Dynamic Models to Discover the Past (and the Future?)

December 16th, 2011 by Sarah Pryputniewicz

What was Earth like 2.8 billion years ago?  The first life was emerging on the planet.  The Sun was weaker than it is today, but geologic evidence shows that the climate was as warm as (or warmer than) it is today.  Was Earth colder because of the weak Sun, or warmer, as geologic evidence suggests?  How did this affect how life arose?

A new 3-D model of early Earth suggests that the planet underwent significant changes–from very warm to very cold.  Past models were one-dimensional–holding constant the amount of cloud cover or sea ice–to make the calculations easier.  But with more advanced computing, researchers at the University of Colorado Boulder were able to make better models of the planet’s climate.

“The inclusion of dynamic sea ice makes it harder to keep the early Earth warm in our 3-D model,” Eric Wolf, doctoral student at CU-Boulder’s atmospheric and oceanic sciences department, said. “Stable, global mean temperatures below 55 degrees Fahrenheit are not possible, as the system will slowly succumb to expanding sea ice and cooling temperatures. As sea ice expands, the planet surface becomes highly reflective and less solar energy is absorbed, temperatures cool, and sea ice continues to expand.”

The scientists’ model shows that Earth was periodically covered by glaciers, but the geologic evidence suggests that it was much warmer than that.  The calculations show that an atmosphere that contained 6% carbon dioxide would have kept the temperature high enough for life to thrive, but the soil samples show that the carbon dioxide concentration was not that high. So what’s the warming mechanism?  Eric Wolf and Brian Toon are still searching for it.

Since the 3-D model takes so much computing time (up to three months for a single calculation), we’ll be waiting a while for the answer.

“The ultimate point of this study is to determine what Earth was like around the time that life arose and during the first half of the planet’s history,” said Toon. “It would have been shrouded by a reddish haze that would have been difficult to see through, and the ocean probably was a greenish color caused by dissolved iron in the oceans. It wasn’t a blue planet by any means.” By the end of the Archean Eon some 2.5 billion years ago, oxygen levels rose quickly, creating an explosion of new life on the planet, he said.

And along the way, better models of Earth’s climate will come out of this study, enhancing scientists’ ability to predict what Earth’s future might look like. Scientists will also learn more about the conditions of early Earth, which could help in assessing the habitability potential of other planets.

Explore the interactions of greenhouse gases and ice sheets in the High-Adventure Science climate investigation, and explore the search for extraterrestrial life in the High-Adventure Science space investigation.

http://www.sciencedaily.com/releases/2011/12/111205140521.htm

The Great Antarctic Glaciation

December 14th, 2011 by Sarah Pryputniewicz

About 33 million years ago, the Earth abruptly went from being warm and wet to having Antarctic ice cover.  Only 23 million years after the Paleocene-Eocene Thermal Maximum, a time of some of the warmest temperatures on Earth, ice covered the surface.  What happened?

According to a recent study by scientists at Yale and Purdue universities, the carbon dioxide level dropped. Carbon dioxide is a greenhouse gas that is contributing to the increased global temperatures on Earth today.

The scientists pinpointed the carbon dioxide threshold below which an ice sheet forms at the South Pole. Matthew Huber, a professor of earth and atmospheric sciences at Purdue, said roughly a 40 percent decrease in carbon dioxide occurred prior to and during the rapid formation of a mile-thick ice sheet over the Antarctic approximately 34 million years ago.

“The evidence falls in line with what we would expect if carbon dioxide is the main dial that governs global climate; if we crank it up or down there are dramatic changes,” Huber said. “We went from a warm world without ice to a cooler world with an ice sheet overnight, in geologic terms, because of fluctuations in carbon dioxide levels.”

Having an ice-covered South Pole appears to be the tipping point for cooling the rest of the planet.  The team found that the threshold level of carbon dioxide necessary for ice formation is about 600 parts per million.  For reference, today’s carbon dioxide level is approximately 390 parts per million.  This is why ice sheets still remain on Earth today.

With carbon dioxide levels forecast to rise to 550-1,000 parts per million in the next 100 years, when will the ice sheets completely melt away?  Because melting an ice sheet is a different process from starting one, and because the process is not linear, scientists can’t say for sure.  But it’s clear that once the carbon dioxide levels rise high enough, the Earth will have reached a tipping point in the warming direction and the ice sheets will melt away.

Huber next plans to investigate the impact of an ice sheet on climate.

“It seems that the polar ice sheet shaped our modern climate, but we don’t have much hard data on the specifics of how,” he said. “It is important to know by how much it cools the planet and how much warmer the planet would get without an ice sheet.”

So how warm will Earth be in the future?  What’s the cooling impact of the ice?  Will greenhouse gases continue to rise?  Will increased cloud cover compensate for the lack of ice?

Explore how greenhouse gases and ice affect Earth’s temperature and learn more about feedback and tipping points in the High-Adventure Science climate investigation.

http://www.sciencedaily.com/releases/2011/12/111201174225.htm

When in Drought…

December 12th, 2011 by Sarah Pryputniewicz

New groundwater and soil moisture drought indicator maps produced by NASA are available on the National Drought Mitigation Center’s website. They currently show unusually low groundwater storage levels in Texas. The maps use an 11-division scale, with blues showing wetter-than-normal conditions and a yellow-to-red spectrum showing drier-than-normal conditions. (Credit: NASA/National Drought Mitigation Center)

GRACE groundwater map of continental U.S.

The map (above) shows the change in stored groundwater in the contiguous United States.  Texas, which saw record heat and wildfires this summer, is experiencing a very severe drought.  The change in stored water should not be a surprise given the weather conditions of the past year.  (By contrast, New England has a surplus of water from a very wet summer and the remnants of Hurricane Irene.)

Drought maps offer farmers, ranchers, water resource managers and even individual homeowners a tool to monitor the health of critical groundwater resources. “People rely on groundwater for irrigation, for domestic water supply, and for industrial uses, but there’s little information available on regional to national scales on groundwater storage variability and how that has responded to a drought,” Matt Rodell, a hydrologist at NASA’s Goddard Space Flight Center, said. “Over a long-term dry period there will be an effect on groundwater storage and groundwater levels. It’s going to drop quite a bit, people’s wells could dry out, and it takes time to recover.”

The question is: how long will it take to replenish the water that has been removed from the aquifers in Texas? Matt Rodell estimates, “Texas groundwater will take months or longer to recharge.  Even if we have a major rainfall event, most of the water runs off. It takes a longer period of sustained greater-than-average precipitation to recharge aquifers significantly.”

Water is a resource that everyone needs.  In dry environments, such as southwestern Texas, water is especially precious.  Water is used for the usual personal purposes, for agricultural purposes, and in natural gas wells.  For example, accessing the natural gas in the Eagle Ford shale deposit, which runs from the Mexican border towards Houston and Austin, requires millions of gallons of water to fracture the shale and release the stored hydrocarbons.

The prolonged Texas drought is putting more pressure on local officials to decide how best to use the limited amount of groundwater.  What is the best way to use the water supply?  Who gets first dibs?  How much should different businesses pay for water?  These are highly important questions that can only be answered with a full understanding of how groundwater works.

You can explore how groundwater flows and propose solutions to water-supply issues in the High-Adventure Science water investigation.

http://www.nasa.gov/topics/earth/features/tx-drought.html

Drought spurring fracking concerns

Oil’s Growing Thirst for Water


Freak Control: On computing without keyboards

December 9th, 2011 by Chad Dorsey

There have been some interesting posts recently demonstrating and discussing control of devices beyond the keyboard. First, every casual gamer’s dream has now come true: you can play Angry Birds using your brain as a controller. The implications for reaching an even higher vegetative state, er, state of flow are simply staggering.

Second, one story that illustrates Apple’s genius in this arena and a second that questions it. If you missed All Things D’s story about the moment that Apple and Microsoft’s touch interface dreams diverged, chuck it into your Instapaper queue right now – it’s a great reminder of how far we’ve come in such a short time, and of how Microsoft continued a strange fumble with their Surface platform while Apple managed this transition from practice with the iPhone to full-on victory with the iPad. (I touched on the consumer side of this, how practice with the iPhone’s interface readied the public for the concept of the iPad, in my Perspective piece a year or so ago.)

Third, an interesting rant from Matt Honan at Gizmodo claims that Siri’s hands-off interface falls short of the nuanced user experience we have come to expect from Apple. Gruber agrees, and I have to say I do much of the time as well.

And finally, a group in Tokyo is turning everyday objects into interactive devices using projectors and cameras. I particularly like their turning a banana into a functioning telephone through the use of object detection and focused sound beams.

Happy snacking – maybe you can read this whole post without touching your computer. Just think “scroll up” really hard…

More planets!

December 9th, 2011 by Sarah Pryputniewicz

A team of astronomers led by scientists at the California Institute of Technology has found 18 planets orbiting stars more massive than our Sun.  Finding planets is becoming more and more routine with the Kepler telescope, but these planetary discoveries help to answer questions about planetary formation–and raise other questions about planetary orbits.

The scientists focused on stars more than 1.5 times more massive than our Sun.  To look for planets, they used the “wobble” method, which looks for shifts in the apparent wavelengths of light coming from the star (Doppler shifts caused by the gravitational tug of an orbiting planet).  The 18 planets that they found are all larger than Jupiter.

According to John Johnson, assistant professor of astronomy at Caltech, these discoveries support a theory of planet formation. There are two competing explanations for how planets form: a) tiny particles clump together to make a planet and b) large amounts of gas and dust spontaneously collapse into big dense clumps that become planets.

The discovery of these planets supports the first explanation.

If this is the true sequence of events, the characteristics of the resulting planetary system — such as the number and size of the planets, or their orbital shapes — will depend on the mass of the star. For instance, a more massive star would mean a bigger disk, which in turn would mean more material to produce a greater number of giant planets.

So far, as the number of discovered planets has grown, astronomers are finding that stellar mass does seem to be important in determining the prevalence of giant planets. The newly discovered planets further support this pattern — and are therefore consistent with the first theory, the one stating that planets are born from seed particles.

The larger the star, the larger the planets that orbit it.

“It’s nice to see all these converging lines of evidence pointing toward one class of formation mechanisms,” Johnson says.

But there’s another mystery that’s come out of this discovery.  The orbits of these 18 newly-discovered large planets are mainly circular.  Planets around other Sun-like stars have both circular and elliptical orbits.  Is there something about the larger stars that makes it more likely that planets will have a circular orbit?  Or is it just a phenomenon noticed because of the small sample size? Johnson says he’s now trying to find an explanation.

Stay tuned–not only may we find a planet that could harbor life, we could also learn something about the origin of our own solar system!

Learn more about finding planets and the search for extraterrestrial life in the High-Adventure Science investigation, Is there life in space?

http://www.sciencedaily.com/releases/2011/12/111202155801.htm

Reflections on a single-device world

December 8th, 2011 by Chad Dorsey

We put the last clock radio in our house in the Goodwill pile last week. Seeing it sitting on the pile to go downstairs was a surprising revelation for me. Somehow it felt wrong for a reason I couldn’t place. Then it hit me: a clock radio was my first real gadget purchase.

For those who don’t recall, there was a time when clock radios were quite a novel invention. The ability to wake to the radio instead of some raucous bell was an entirely new concept. And to a budding radio-phile like me, it seemed like the newest of frontiers. I remember looking across the counter at our local Sterling Drug for many a visit, and piling up birthday money and allowance until the mound was enough to purchase this coolest of things. The red glow of the lights and the late-night sessions listening to AM talk radio or trying to pull the strains of Dr. Demento out of the static seem as close now as they did then.

Clock radio destined for the dustbin

This was a first – a multi-function gadget. And the mere concept of combining the functions was mesmerizing. Now, it’s entirely replaced by one entirely multi-function gadget. I use my iPhone both to listen to radio as I’m falling asleep and to wake me up. Of course, our family point-and-shoot camera and car GPS device are also starting to gather dust at a surprising rate.

This is no new revelation, of course, but the fundamental nature of my feeling at this loss was interesting to note. What other fundamental weirdness will we be in for as technology continues to contract our world of life and transform the world of education? The first time a teacher enters a classroom without a board he or she can write on? The first time a mom realizes she doesn’t need to buy any spiral-bound notebooks at the back-to-school sales? The first time a principal realizes that she can find out about the misconceptions all of her students hold on a given day about science concepts they are studying, even the students who transferred into the school that morning?

Time will tell with all of these. For now, I need a nap – Siri, can you set a timer to wake me up…?

What caused the Paleocene-Eocene Thermal Maximum?

December 7th, 2011 by Sarah Pryputniewicz

What caused the Paleocene-Eocene Thermal Maximum (PETM)?

About 56 million years ago, Earth’s temperature was a lot warmer than it is today–as much as 21°F higher than today (see the graph).  Earth’s temperature is rising today, likely because of human emissions of greenhouse gases.  But 56 million years ago, there were no human emissions; there were no humans.  What caused the big increase in Earth’s temperature?  And could it happen again today?

Researchers at Rice University suggest that the temperature increase could well be due to releases of stored methane from the oceans.

Methane is a powerful greenhouse gas and a natural product of bacterial decomposition.  In the oceans, methane sinks into the sediments and freezes into a slushy gas hydrate, stabilized in a narrow band under the seafloor.

According to calculations done by the Rice University scientists, the warmer oceans resulted in more methane hydrate being stored.  At warmer temperatures, bacteria decompose organic materials faster, resulting in more methane in a shorter period of time.  They estimate that, just before the PETM, there was as much methane hydrate stored as there is today, in a smaller band than exists today.

If this band is disturbed, as by a meteor impact or earthquake, the methane can be rapidly released into the atmosphere.  More greenhouse gases in the atmosphere result in increased warming.  But there’s no evidence of there having been an impact.  So what happened to release the methane 56 million years ago?

Nobody really knows, but the significance is clear.

“I’ve always thought of (the hydrate layer) as being like a capacitor in a circuit. It charges slowly and can release fast — and warming is the trigger. It’s possible that’s happening right now,” said Gerald Dickens, a Rice professor of Earth science and an author of the study.

That makes it important to understand what occurred in the PETM, he said. “The amount of carbon released then is on the magnitude of what humans will add to the cycle by the end of, say, 2500. Compared to the geological timescale, that’s almost instant.”

“We run the risk of reproducing that big carbon-discharge event, but faster, by burning fossil fuel, and it may be severe if hydrate dissociation is triggered again,” Guangsheng Gu, lead author of the study, said, adding that methane hydrate also offers the potential to become a valuable source of clean energy, as burning methane emits much less carbon dioxide than other fossil fuels.

Learn more about the feedback loops involved in climate change in the High-Adventure Science climate investigation.

http://www.sciencedaily.com/releases/2011/11/111109111542.htm

How old media dissolved the essence of Joi Ito’s NYT story

December 7th, 2011 by Chad Dorsey

I recently fawned over Joi Ito’s NY Times story about how openness and the Internet change the way we approach innovation and daily life. However, the unabridged version he posted to his blog is actually much better. It’s interesting to think for a moment about this episode.

First, the simple fact that this had to be shortened is reflective of old/new media constraints. Clearly, the costs and space constraints of paper itself drive this need at least partially. Electrons are cheap, and Joi has no problem posting something of any length he wants on his blog. New media don’t live by these constraints. And the collision of the two is befuddling many in the industry right now. The NY Times online version of this story is the same as the print version as far as I can tell, though it does include a link to the MIT Media Lab. And, in fact, the Times’ forward-thinking hyperlink system is what enabled me to link directly to a highlighted sentence in Joi’s story online.

Now, I understand the necessity of editing things down. The vote-with-your-mouse Internet has made that all too clear. And I benefit significantly from the editing of others. But I think this piece suffered at the hands of old media constraints. Let’s look into it a bit.

First, the description of the creation of X.25 in the NYT article has it as a “standard that seemed to anticipate every possible problem and application.” When I first read that, I questioned why we ended up going with IP after all. Reading Joi’s phrasing, however, tells a different story. He states, “The X.25 people were trying to plan and anticipate every possible problem and application. They developed complex and extremely well-thought-out standards that the largest and most established research labs and companies would render into software and hardware.” The subtleties here describe quite a different proposition. “Trying to plan and anticipate every problem” and developing “complex” standards is not exactly the same thing as “seeming to anticipate every…problem.” From an Agile development point of view, one might in fact become increasingly skeptical of any solution that strives to anticipate every problem. This is part of the point I think Joi is trying to make here, and it is blurred in a subtle, but important, way by this edit.

A second edit that removes important concepts comes in the loss of the reference to the RFC process. To the NY Times’ credit, they have weighed in admirably on this in the past, and it is certainly a bit obscure, so I understand the change. However, for anyone familiar with the story, this nuance shows some of the depth behind how openness triumphed in this case. Not by pure magic, but as a product of carefully managed process and group dedication in equal measure.

Though the Times article captures many of the nuances of the argument that the Maker movement parallels much about the early days of the Internet, a subtle change loses meaning here, too. Joi describes 3D printers and the related ecosystem as “cheaper, standardized and connected via the Internet,” three elements essential to the core of the innovation happening here. While the Times’ description tightens this up nicely and gets the basics right, it is interesting to note the nuances that are missed.

New platforms and new media permit new messages and new opportunities. That is some of what the story of the Internet’s birth tells us. In the same way, it’s what we also see playing out in educational technology today. Let’s look closely as we go forward, and try not to miss the nuances.

The Joi of Openness

December 6th, 2011 by Chad Dorsey

I just finished reading Joi Ito’s great New York Times essay about the Internet and openness. This is clearly a piece that resonates with many of us at the Concord Consortium as well as in the creative technology community at large. Joi does an excellent job explaining and characterizing what it is about the Internet’s birth that has made it so durable over time, capturing that often ineffable quality of open interconnectedness that is responsible for many of the aspects of networked life we take for granted every day.

One of the most exciting aspects of this all is something that Joi’s piece captures well – the freewheeling and wide-ranging freedom that this openness provides everyone who takes part. The fact that anyone with a good idea is in theory equally close to any other (though this apparently leaves the many in this country without good network connections out in the cold) is what makes the fabled guys-in-a-garage notion able to spring forth as the next Facebook or Google. It’s also what is fueling the burgeoning Maker movement (which the Economist captured somewhat well in a recent article highlighting the Maker Faire that our colleagues at the New York Hall of Science now hold each year).

Fortune cookie says: To succeed, you must share.

Plying this sensibility for the development of educational technology is our stock in trade at the Concord Consortium. Capturing this sensibility in everything we do, on the other hand, seems to be in the genes of almost all of us. Listening to the table conversations over one of our staff potlucks, I’m always amazed at how one central emotion ties through such a diversity of discourse: a shared interest in creation and the discovery of new things. This overriding interest is probably what makes most of us here science geeks. The idea of applying it in order to make education better for students across the world is what draws us to the Concord Consortium.

It’s in the pursuit of this interest that we all do our work here, but it’s in the enactment of openness that the work is able to thrive. Our code is on Github, our content is shared, and our ideas are in the open air.

I understand intrinsically what Joi describes with his use of the wonderful word “neoteny” to describe the childlike wonder of discovery retained as an adult. It’s why our friends at the Media Lab’s Lifelong Kindergarten Group chose their moniker. And now that we have been introduced to the word, I’m fairly certain it’s one we’ll hear included in the vivacious conversations across the table at our next group potluck. Thanks, Joi, for helping the Times’ readers make the connection between this concept and the openness that is so important to innovation and the Internet.

Update: 12/6 at 10:10 AM: Corrected the idiom to the proper “stock in trade,” and (reluctantly!) corrected the typo “monkier” to read “moniker.”