Safari JavaScript bug

Tuesday, October 8th, 2013 by Scott Cytacki

Recently we found that our Lab framework caused JavaScriptCore in Safari 5.1 to crash. Safari 5.1 is the latest version available for OS X 10.6. If you have OS X 10.7 or 10.8, then you have Safari 6, which doesn’t have this problem.

Too long; didn’t read solution: do not name your getters the same as your closure variables.

After a lot of detective work I figured out what the problem was. A block of code like this was causing it:

function testObject(){
  var myProperty = "hello";
  return {
    get myProperty() {
      return myProperty;
    }
  }
}
var myObject = testObject();
console.log(myObject.myProperty);

Here is a fiddle of the same code. If you run that fiddle in Safari 5.1, you will see a reference error for myProperty. Note: you need to make sure script debugging is disabled; if script debugging is enabled, this problem doesn’t occur. That block of code is using a JavaScript getter. If you are not familiar with JavaScript getters and setters, this post by John Resig is a great place to start.

If the closure variable name is changed to _myProperty like this:

function testObject(){
  var _myProperty = "hello";
  return {
    get myProperty() {
      return _myProperty;
    }
  }
}
var myObject = testObject();
console.log(myObject.myProperty);

Then it works! So there is an issue with Safari when a getter has the same name as the variable in the outer closure. This didn’t make sense though. We use this pattern of a getter accessing a variable of the same name throughout much of our code. Those parts of the code worked fine. After more exploration I found the following block of code works fine in Safari 5.1:

function testObject(){
  var myProperty = "hello";
  return {
    get myProperty() {
      return myProperty;
    },
    methodUsingMyProperty: function () {
      return myProperty;
    }
  };
}
var myObject = testObject();
console.log(myObject.myProperty);
Here is a fiddle of the same code.

This code is using the closure variable in another method that is part of the returned object. Based on this, I believe the problem is that Safari is failing to keep the closure variable available when it has the same name as the getter and is only referenced by that getter. I assume it is common for browsers to try to optimize closures by discarding variables that aren’t used. So it seems the analysis Safari does to decide whether a variable is used fails to mark a variable as used when the only reference to it is inside a getter of the same name.

I have submitted a bug to Apple about this. Their bug tracker is private though, so there is no way for you to see that bug.

So to summarize, you should not name your getters the same as your closure variables.

Serious Performance Regression in Firefox 18 and newer

Wednesday, February 27th, 2013 by Stephen Bannasch

The Firefox performance regression introduced into the codebase on 2012-09-29 and present in FF v18, v19, and Nightly versions is much more serious than I previously thought.

Basically, FF versions after v16 are now almost unusable when running NextGen models of any complexity for longer than 30 seconds.

See: Firefox Performance Comparison 20131902 and Confirmation of FF slowdown over time.

There are several active Mozilla issues that people are working on. This is my original bug report:

And the other bug reports that resolving this issue depends on:

Server-side upload time tracking

Monday, January 28th, 2013 by Scott Cytacki

I wanted to see if we could roughly log how long users are spending waiting for learner data uploads. The more accurate way to do this is on the client side. However I wanted to try it on the server side so it could be applied in many cases without needing instrumented clients that send back data to the server.

I looked around for a while to see if this has been documented anywhere, but I didn’t find anything. So I decided to try something and test it to see if it would work.

The Conclusion

Yes, it is possible. At least, the ‘%t’ option, when added to the request headers, gives the time at which the request was started. This time is captured before most of the POST data is uploaded, so it can be used to get an estimate of upload times. The estimate looked very good in my testing, but it should be verified in the real world of computers in schools before relying on it for something important.

The Test

The idea for this test came from Aaron Unger.

In summary, it was tested with a simple Rack app running on an EC2 server that was identical to the servers we use to run our portals. Then, on the client side, I used curl and Charles (the personal proxy) to send it a large chunk of data and record the timing.

The server was running Apache 2.2.22 and it was configured with a Passenger web app. I won’t go into that setup here. Additionally I added this to the Apache configuration:

RequestHeader set X-Queue-Start "%t"

Then in the web app folder I added this config.ru file:

run lambda { |env|
  start_time = Time.now
  # Read (and discard) the request body so the whole upload has been received
  if env['rack.input']
    env['rack.input'].read
  end
  # Simulate 5 seconds of server-side processing
  sleep 5
  [200, {"Content-Type" => "text/plain"},
    ["Apache Start Time: #{env['HTTP_X_QUEUE_START']}\n" +
     "Start Time: #{start_time}\n" +
     "End Time: #{Time.now}\n"]]
}
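For what it’s worth, here is a minimal sketch (my own, not part of the test above) of a variant config.ru that turns the header into an upload-time estimate itself; it assumes the t=<microseconds since the epoch> format that %t produced in the output shown further down:

# config.ru -- hypothetical variant that computes the estimate itself.
# Assumes Apache's %t yields "t=<microseconds since the epoch>".
run lambda { |env|
  received_at = Time.now
  env['rack.input'].read if env['rack.input']   # drain the request body, as above

  estimate = nil
  if (queue_start = env['HTTP_X_QUEUE_START'])
    apache_start = Time.at(queue_start[/\d+/].to_f / 1_000_000)
    estimate = received_at - apache_start       # seconds before the app saw the request
  end

  [200, {"Content-Type" => "text/plain"},
   ["Estimated upload time: #{estimate ? ('%.2f s' % estimate) : 'unknown'}\n"]]
}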

Then, on my local machine, I ran Charles, the personal proxy, which starts a proxy on port 8888.

I made a large random data file with:

head -c 2000000 /dev/urandom > random_data

Then I sent that off to the server with curl:

% time curl -x localhost:8888 --data-urlencode something@random_data http://testserver-on-aws
Apache Start Time: t=1359399773413862
Start Time: 2013-01-28 19:02:55 +0000
End Time: 2013-01-28 19:03:00 +0000
.
real    0m8.229s
...

Converting the timestamp shows the Apache start time is 3 seconds before the Rack start time. The simple server always waits for 5 seconds, so together these make up the 8 seconds reported. Bingo!

I wasn’t convinced that the 3 seconds was actually the upload time. I thought perhaps it was some Apache processing time that happened after the upload. So I used the throttle option in Charles to slow down the upload. Doing this gave the expected result: the Apache start time was even earlier than before. And subtracting the Apache start time from the end time was very close to the total request time reported on the command line.

Notes

This server-side approach does not cover all the time the user is waiting for an upload to complete. I would guess there will be cases when it isn’t accurate. For example, some proxy or other network device might delay POST requests in some way, and in that case this approach would not record that time.

Google Summer of Code Development: Single Sign-On

Thursday, October 18th, 2012 by Vaibhav Ahlawat

[Editor's note:  Vaibhav Ahlawat was a Google Summer of Code 2012 student at the Concord Consortium.]

At any time, the Concord Consortium runs a number of small research projects and large scale-up projects, but in the past we built each system separately and each required a separate login. Want to teach your fourth graders about evolution? Great. Log in at the Evolution Readiness portal. Wait, you also teach your students about the cloud cycle? That requires logging in at the Universal Design for Learning (UDL) portal.

Clearly, some students and educators find value across different projects, and my goal is to make it a little easier for them to sign in just once and get access to the myriad great resources at the Concord Consortium for teaching science, math and engineering. As a Google Summer of Code student, I’m working under the guidance of Scott Cytacki, Senior Software Developer, to bring different projects under a single authentication system or, in the language of software development, a Single Sign-On.

Single Sign-On will allow both students and teachers to log in across different projects with a single username and password, doing away with the need to remember multiple usernames and passwords. They’ll be able to move seamlessly among projects and find the resources they need to teach and learn. I’m also working on code that will allow students and teachers to sign up and log in to Concord Consortium’s portals with their existing Google+ or Facebook accounts.

For those who want technical details, read on.

I’m working on moving from Restful Authentication to Devise, both of which are authentication solutions for Rails. These days, Devise is the preferred one among the Rails community, and it makes things like password resetting and confirmation emails pretty easy. Once we are done with this conversion, adding support for signup and login using Facebook and Google+ accounts should be simple. For example, to add support for the Google OAuth2 authorization protocol, all we have to do is add a gem named OmniAuth with an OAuth2 strategy, which works brilliantly with Devise, and then write a couple of functions.
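As a sketch of what that setup looks like (the gem name, placeholder credentials, and file locations follow the common Devise + OmniAuth recipe and are illustrative rather than our exact portal configuration):

# Gemfile
gem 'devise'
gem 'omniauth-google-oauth2'   # OmniAuth strategy for Google OAuth2

# config/initializers/devise.rb
Devise.setup do |config|
  # Placeholder credentials -- real values come from the Google API console.
  config.omniauth :google_oauth2, 'GOOGLE_CLIENT_ID', 'GOOGLE_CLIENT_SECRET'
end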

Here’s a snippet of my code, which adds Google OAuth2 support to Devise:

class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def google_oauth2
    # The User.find_for_google_oauth2 method also needs to be implemented.
    # It looks for an existing user by e-mail, or creates one with a random password.
    @user = User.find_for_google_oauth2(request.env["omniauth.auth"], current_user)

    if @user.persisted?
      flash[:notice] = I18n.t "devise.omniauth_callbacks.success", :kind => "Google"
      sign_in_and_redirect @user, :event => :authentication
    else
      session["devise.google_data"] = request.env["omniauth.auth"]
      redirect_to new_user_registration_url
    end
  end
end
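The comment in that controller mentions that User.find_for_google_oauth2 also needs to be implemented. A rough sketch of what it could look like, following the common Devise + OmniAuth pattern (the model attributes here are assumptions, not necessarily our portal’s):

# app/models/user.rb (illustrative sketch)
class User < ActiveRecord::Base
  devise :database_authenticatable, :registerable, :omniauthable,
         :omniauth_providers => [:google_oauth2]

  # Look for an existing user by e-mail, or create one with a random password.
  def self.find_for_google_oauth2(access_token, signed_in_resource = nil)
    data = access_token.info
    user = User.where(:email => data["email"]).first
    user || User.create(:email    => data["email"],
                        :password => Devise.friendly_token[0, 20])
  end
end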
Including support for authentication using the Facebook API can be done just as simply. Support for OAuth, which is used by many learning management systems, is also provided, making integration far easier than it was before.

I’m happy to help make it easier for Concord Consortium’s resources to be used by many more people.

– By Vaibhav Ahlawat

Video: Under the Hood of Molecular Workbench

Friday, June 15th, 2012 by The Concord Consortium

It takes a lot of computation to model the atomic and molecular world! Fortunately, modern Web browsers have 10 times the computational capacity and speed compared with just 18 months ago. (That’s even faster than Moore’s Law!) We’re now taking advantage of HTML5 plus JavaScript to rebuild Molecular Workbench models to run on anything with a modern Web browser, including tablets and smartphones.

Director of Technology Stephen Bannasch describes the complex algorithms that he’s been programming behind the scenes to get virtual atoms to behave like real atoms, forming gases, liquids and solids while you manipulate temperature and the attractive forces between atoms. See salt crystallize and explore how the intermolecular attractions affect melting and boiling points. Imagine what chemistry class would have been like (or could be like today) if the foundation of your chemical knowledge started here.

Technology and Curriculum Developer Dan Damelin goes on to describe how open source programming opens up possibilities. For instance, Jmol is a Java-based 3D viewer for chemical structures that we were able to incorporate into Molecular Workbench to allow people to easily build activities around manipulation of large and small molecules, and to make connections between static 3D representations and the dynamic models of how molecules interact. We’re planning to build a chemical structure viewer that won’t require Java and will extend another open source project based on JavaScript and WebGL to visualize molecules in a browser.

Interested in this innovative programming? Great! We’re looking for software developers.

A Datasheet for NextGen MW

Monday, June 4th, 2012 by Richard Klancer

The opposite of Thomas Dolby

I was terrible at the first four weeks of organic chemistry. I just couldn’t get the right pictures into my head.

The depictions of the chemical reaction mechanisms I was supposed to memorize seemed like just so many Cs (and Hs and Os and, alarmingly, Fs) laid out randomly as if I were playing Scrabble. And I swear the letters rearranged themselves every time I looked away, like a scene out of a movie about an art student’s science-class nightmares (minus the extended fantasy sequence in which the letters grow fangs and leap off the page to menace the poor protagonist – unless I’ve blocked that part out).

Fortunately, I knew exactly what to do: I had to start picturing molecules in 3D, and in motion, as soon as possible. That ability seemed to take its own sweet time to develop. But once things “clicked” and I could visualize molecules in motion, the reactions finally made sense, as did all the associated talk of electronegativity, nucleophilic attack, and inside-out umbrellas. I aced the final.

Now, our Molecular Workbench software isn’t specifically designed to help undergraduates get through organic chemistry. It is designed to help students at many levels by letting them interact with simulations of the molecular world so they get the right pictures into their heads, sooner. It’s here to help that future art student and movie director beginning to nurse a complex about the 10th grade science class he’s stuck in right now.

The weight of history

But the “Classic” Molecular Workbench we have now was built for a different world. It runs in desktop Java, for one thing, meaning (among other things) that it’ll never run on iPads. More fundamentally, it was built to be “Microsoft Word for molecules” in a time when Microsoft Word was the dominant model for thinking about how to use a computer:

“Hello, blank page! Let’s see, today I’ll make a diffusion simulation. I should write something about it … Let’s make that 12-point Comic Sans. No, my graphic designer brother-in-law keeps telling me not to use that so much, so Verdana it is, then. Now how do I add that model again? Oh yeah, Tools -> Insert -> Molecular Model…”

This model is constraining even though it’s always been possible to download and open Molecular Workbench via the Web, and even though MW saves simulation-containing activities to special URLs.

We have somewhat different expectations these days because of the Web, social media, mobile apps, and casual games. If I build a great in-class “activity” based on a series of molecular models, then I should be able to share that activity with the world with minimum difficulty. And if you find one of the simulations I created particularly illustrative, you should be able to put that model in a blog you control, or include the model as part of your answer to a question on http://physicsforums.com/.

Moreover you ought to be able to perturb the running simulation by reaching out and touching it with your fingers, or simply by shaking your tablet to see what effect that has on the simulation “inside” it. You shouldn’t be required to operate the simulation at one remove, via a mouse and keyboard, when it’s not necessary.

That’s why we’re excited about the Google-funded, next-generation Molecular Workbench we have started to build. The HTML5 + JavaScript technology we’re using to build the next generation of our MW software (hereafter called NextGen MW for short) will make it much more practical to enable these kinds of uses.

Boldly doing that thing you should never do

But designing NextGen MW to be a native of the real-time Web of 2012 rather than a visitor from the land of 1990s desktop computing means that we’re committed to rebuilding the capabilities of “Classic” Molecular Workbench from scratch. That is, we’re doing the very thing Joel Spolsky says you must never do! But ignoring platforms which run Java badly or not at all isn’t an option, and neither is trying to run Classic MW in a Google Web Toolkit-style compatibility layer that compiles Java to JavaScript. (With the latter option, we would almost surely be unable to optimize either the computational speed or the overall user experience well enough to make it practical to use NextGen MW on phones, inexpensive tablets, or even expensive tablets. But even that misses the point. We’re not a consumer products company trying to optimize the return on our past investment. We’re an R&D lab. We try new things.)

But writing things from scratch poses a challenge. We want the molecular dynamics simulations run by NextGen MW to run “the same” as the equivalent simulations run in Classic MW. But “the same” is a slippery concept. In traditional software development, asking two different implementations of a function or method to produce the “same” result often means simply that they return identical data given identical input, modulo a few unimportant differences.

It would be nice to extend this idea to the two-dimensional molecular dynamics simulations we are now implementing in NextGen MW. Classic MW doesn’t have a test suite that we can simply adapt and reuse. But, still, we might think to set up identical initial conditions in NextGen MW and Classic MW, let the simulations run for the same length of simulated time, and then check back to make sure that the atoms and molecules end up in approximately the same places, and the measurements (temperature, pressure, etc.) are sufficiently close. And, voilà, proof that at least this NextGen MW model works “the same” as the Classic MW model. (Or that it doesn’t, and NextGen MW needs to be fixed.)

Never the same thing twice?

Unfortunately, this won’t work. Not even a little bit, and the reason is kind of deep. The trajectories of the particles in a molecular dynamics simulation (and in reality) exhibit a phenomenon known as sensitive dependence on initial conditions. Think of two simulations with exactly the same initial conditions except for a tiny difference. Now, pick a favorite particle and watch “the same” particle in each simulation as you let the simulations run. (And assume the simulations run in lockstep.) For a very short time, the particle will appear to follow the same trajectory in simulation 1 as in simulation 2. But as you let the simulations run a little longer, the trajectories of the two particles will grow farther and farther apart, until, very quickly, looking at simulation 1 tells you nothing about where to find the particle in simulation 2.

Very well, you say: maybe simulation 1 and simulation 2 started a little too far apart. So let’s make the difference in the initial conditions a little smaller. Sure enough, the trajectories stay correlated a little bit longer. But a very little bit. Here’s the rub: if you want simulation 2 to match simulation 1 for twice as long, you need the initial conditions to be some number of times closer, let’s say 10 times. But if you need the simulations to match for one more multiple of that time, that is, three times as long, you need the initial conditions to be 10 times closer still, or 100 times closer. And if you want simulation 1 to make a meaningful prediction about simulation 2 for ten times as long? Now you need the initial conditions to be a billion (10⁹) times closer. In practice, this means that if there’s any difference at all between the two initial conditions, no matter how seemingly insignificant, then outside of a short window of time the two simulations will predict very different particle locations and velocities.
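The numbers in that thought experiment are schematic, but the effect itself is easy to reproduce. Here is a tiny JavaScript illustration using the chaotic logistic map as a stand-in for a molecular dynamics integrator (which it is not); a starting difference of 1e-10 grows until the two “simulations” are completely uncorrelated:

// Sensitive dependence on initial conditions, illustrated with the
// chaotic logistic map x -> 4x(1-x). A stand-in for MD, not MD itself.
function run(x0, steps) {
  var xs = [x0];
  for (var i = 0; i < steps; i++) xs.push(4 * xs[i] * (1 - xs[i]));
  return xs;
}

var a = run(0.3,         60),
    b = run(0.3 + 1e-10, 60);

[0, 10, 20, 30, 40, 50].forEach(function (n) {
  console.log("step " + n + ": |difference| = " + Math.abs(a[n] - b[n]).toExponential(2));
});
// The difference grows roughly exponentially until it is of order 1,
// after which trajectory a tells you nothing about trajectory b.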

Perhaps you think this is a contrived situation having nothing to do with comparing Classic MW and NextGen MW. Can’t we start them with, not just similar, but identical initial conditions? Unfortunately, this escape hatch is barred, too. The tiniest and most seemingly insignificant difference between the algorithms NextGen MW runs and the algorithms Classic MW runs immediately results in a small difference in the trajectories, and after that point, sensitive dependence on initial conditions takes over: the subsequent trajectories soon become totally different. Trying to run precisely the same algorithms in NextGen MW as in Classic, down to the exact order of operations, would not only intolerably constrain our ability to develop new capabilities in NextGen MW, but would be futile: the differing numerical approximations made by Java and JavaScript would result in yet another small difference which would in short order become a big difference.

Science!

So, wait a minute: You can’t test NextGen MW against Classic MW because even the tiniest difference between them makes them behave … totally differently? How do we trust either program, then? And how is this science again?

Well, notice that I didn’t quite say the two programs behave totally differently. Yes, the exact trajectories of the molecules will quickly diverge, but the properties we can actually measure in the real world — temperature, pressure, and the like — unfold according to laws we understand, and should be the same in each (not counting minor, and predictable, statistical fluctuations). After all, we can do beautifully repeatable experiments on “molecules in a box” in the real world without knowing the location of the molecules exactly. Indeed, when van der Waals improved on the ideal gas law by introducing his equation of state, which includes corrections for molecular volume and intermolecular attraction, the notion that molecules actually existed was not yet universally accepted.

So what we need are molecular models whose temperature, pressure, diffusion coefficient, heat capacity, or the like depend in some way on the correctness of the underlying physics. Ideally, we would like to be able to run a Classic MW model and have it reliably produce a single number which (whatever property it actually measures) is demonstrably different when the physics have been calculated incorrectly. Then we could really compare NextGen MW and Classic MW — and perhaps even find a few lingering errors in Classic MW!

Unfortunately for this dream, the models in our library created for Classic MW tend to be complex interactives which require user input and aim to get across the “gestalt” of molecular phenomena (e.g., one model encourages students to recognize that water molecules diffusing across a membrane aren’t actively “aiming for” the partition with a higher solute concentration but move randomly). The models are not intended to be part of numerical experiments designed carefully to produce estimates of otherwise-difficult-to-measure properties of the real world. They would require substantial rework to generate single numbers that are known to reliably test the physics calculations. For that matter, there aren’t many Classic models at all that conveniently limit themselves to just the features we have working right now in NextGen MW, and we can’t just wait until we develop all the features before we begin testing.

Charts and graphs that should finally make it clear

Therefore, we have turned to making new Classic MW models that demonstrate the physics we want NextGen MW to calculate, and comparing the numbers generated in Classic MW to the numbers generated when the equivalent model is run in NextGen MW. I’ve begun to think of this process as creating the “datasheet” for Classic and NextGen MW, after the datasheets which contain charts and graphs detailing the performance characteristics of an electronics part, and which an engineer using the part can expect it to obey.

So far, we’ve just gotten started creating the MW datasheet. I’ve written a few ugly scripts in half-remembered Python to create models and plot the results and so far, sure enough, it looks like an issue with the NextGen MW physics engine that I knew needed fixing, needs fixing! (The issue is an overly clever, ad hoc correction I introduced to smooth out some of the peculiar behavior of our pre-existing “Simple Atoms Model.” But that’s good fodder for a future blog post.)

But we have ambitions for these characterization tests. Using the applet form of Classic MW, we hope to make it possible to run each of these “characterization tests” by visiting a page with equivalent Classic and NextGen MW models side by side, with output going to an interactive graph. But with or without this interactive form of the test, once characterization tests have been done they will help us to find appropriate parameters for automated tests that will run whenever we update NextGen MW, so that we can be sure that the physics remain reliable.
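As a sketch of what such an automated check might eventually look like (the model-loading call, property names, and tolerances below are all made up; the real tests would take their parameters from the datasheet):

// Hypothetical characterization test: run a NextGen MW model and check that a
// statistical property stays inside the range established with Classic MW.
function meanTemperature(model, steps) {
  var sum = 0;
  for (var i = 0; i < steps; i++) {
    model.tick();                     // advance the simulation one step (assumed API)
    sum += model.get('temperature');  // read the instantaneous temperature (assumed API)
  }
  return sum / steps;
}

// These numbers would come from the Classic MW "datasheet", not from thin air.
var expected = 293, tolerance = 5;
var observed = meanTemperature(loadModel('simple-gas.json'), 10000); // loadModel is assumed

if (Math.abs(observed - expected) > tolerance) {
  throw new Error("Mean temperature " + observed + " is outside " +
                  expected + " ± " + tolerance);
}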

I’ll update you as we make progress.

Streaming Arduino Data to a Browser without Flash or Java

Tuesday, March 20th, 2012 by Sam Fentress

What if you were reading a blog or working through an online lesson and you could just plug in your Arduino and start taking data or interacting with models right in your browser?

Here at the Concord Consortium we are very interested in making sensors that are easy to use in the classroom or embedded directly into rich online curriculum. We’ve done some work in the past using applets as an intermediary to read data from commercial sensors and display them in lightweight graphs in the browser. When we think of fun, hackable, multi-probe sensors, though, we naturally think of Arduinos — we are open-source geeks after all.

In thinking of ways to display Arduino data in a browser with the minimum amount of fuss, we considered both our existing applet technique and using the new HID capabilities of the Arduino Unos. But while we will probably still find uses for both strategies, it occurred to Scott Cytacki, our Senior Developer, that we could simply use the common Ethernet Shields (or the new Arduino Ethernets) to send the data directly to the browser.

With this idea, it was quick work to hack the Arduino Server example to send JSON values of the analog pins and create a webpage that would rapidly poll the Arduino for data. So here is the first example, which I wrote in about 70 lines of code (including the Arduino sketch) and which is usable with any Ethernet-capable Arduino in any browser:

  1. Upload the tiny server sketch to your Arduino
  2. Plug in your ethernet shield, connect the Arduino to your computer with an ethernet cable and wait about 30 seconds for the Arduino server to boot up
  3. Optionally connect a sensor to pin A0. (The demo below is scaled for an LM35 temperature sensor, but you don’t need it — you might need to rescale the graph by dragging on the axis to see the plot, though)
  4. Click the “Start Reading” button below

You should see your Arduino data filling up the graph. If not, wait another 20 seconds to ensure the server is fully booted and click the “play” button at the top right to start it again.

Wow, that was actually pretty easy!

The slightly more complicated example below reads data from all six analog pins, applies an optional conversion, and graphs any one of the data streams. If you were already reading data above, you don’t need to do anything new, just hit the button:

Direct link to stand-alone version

We think this is really cool, and we can’t wait to come up with new ways to integrate this into online content. Why not feed the temperature data into the HTML5 version of Molecular Workbench we’re developing under our new grant from Google.org, for instance, and see the atoms speed up as the temperature increases? Or set up an online classroom where students across the globe can take environmental readings and easily upload and pool their data?

Even by itself, the example above (perhaps expanded further by an interested reader) makes a great workbench for developing on an Arduino — much better than watching the raw Serial Out panel. And of course all the programming can happen in your friendly JavaScript environment, instead of needing to keep recompiling code and uploading it to your Arduino as you iterate.

Technical details:

  • This Arduino Sketch creates a server running on http://169.254.1.1, a private link-local IP chosen so that it will automatically avoid conflicting with other servers, allowing for an easier connection without a DHCP server. The sketch then returns JSON data using the JSON-P technique of calling back a function, which allows us to make cross-domain requests (a minimal sketch of the browser side follows this list).
  • Click on the tabs at the tops of the embedded jsFiddle examples to see the source code for streaming data to the webpage, or fork and edit any of the examples yourself.
  • The graphs are created using D3.js, and make use of the simple-graph library created by Stephen Bannasch.
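Here is the minimal browser-side sketch promised above. The callback name, query parameter, and pin property names are assumptions for illustration; the actual fiddles may differ:

// Minimal JSON-P polling sketch for the Arduino server described above.
// The Arduino wraps its JSON in a function call, e.g. handleData({"A0": 512, ...}),
// so we repeatedly inject <script> tags pointing at it.
function handleData(pins) {
  console.log("A0 reading:", pins.A0);
}

function poll() {
  var script = document.createElement("script");
  script.src = "http://169.254.1.1/?callback=handleData";
  script.onload = function () { document.head.removeChild(script); };
  document.head.appendChild(script);
}

setInterval(poll, 200);  // poll roughly five times per second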

Building Learn.Ember.js, part 1: I say App, you say Document

Sunday, December 18th, 2011 by Richard Klancer

Summary: I created a prototype of Learn.Ember.js, an interactive tutorial application for web developers who want to learn about Ember.js. Along the way I was reminded that one of the most useful things about HTML5 is that it helps us to blur the app vs. document distinction in useful ways.

Oh, and by the way, we’re hiring!

Here at the Concord Consortium we believe that interactive computational simulations are powerful tools for learning about the world in ways that were not previously practical, or even possible. Google seems to agree; their philanthropic arm Google.org recently gave us a substantial grant to make an HTML5 version of our Molecular Workbench molecular simulation environment.

Changing the world

But Google didn’t approach us just because they agree that simulations of molecular behavior are a great way to learn about science. They approached us because we have spent 10 years writing well-regarded content for Molecular Workbench. We don’t just make simulations. We embed them in documents that introduce topics gently, encourage you to play with the simulation in productive ways, and in general encourage you to think.

It turns out there are many other domains that can benefit from open-ended tools embedded in structured “learning activities” available via browser. In particular, web development itself can benefit.

Inspiration from learn.knockoutjs.com

Here at Concord I mostly do client-side web app development, and so recently I found myself surveying the new crop of client-side MVC libraries. I was looking for a lighter-weight alternative to SproutCore (which we have used for a few projects) while we waited to see what would come of the greatly slimmed-down, SproutCore-inspired library that was then supposed to become SproutCore 2.0, and is now a separate project called Ember.js.

But there are a lot of “maybe” development tools out there — tools which might be useful someday, but which I don’t need urgently, and which aren’t such breakthroughs that they need to be understood for their own sake. One of the “maybe” libraries I came across was Steve Sanderson’s impressive Knockout.js.

Since I wasn’t doing this survey “for real”, there was a chance that I would read through the Knockout documentation in detail, download the library, and make sample pages to play with its features. A small chance. There are only so many hours in a day.

But Knockout.js has a secret weapon: its companion tutorial site, learn.knockoutjs.com. Without quite intending to, within a few minutes of stumbling onto the tutorials I built and ran working examples that felt like plausible components of a Knockout-powered app, right in the tutorial page itself. After I finished the first tutorial I had a much better idea of what kind of problems Knockout solves, and how it solves them, than I would have gotten from the usual desultory flip through the Knockout homepage. (You should try the tutorials yourself!)

Prototyping Learn.Ember.js

As it turns out, Ember.js (née SproutCore 2.0) is shaping up to be a cleanly designed and powerful library with a solid team behind it, and I am enthusiastic about its future.

And as Scott has previously blogged, we at Concord would like to create more value for the open source ecosystem. So I’ve begun work on a side project I call Learn.Ember.js. You can see the first public prototype here. (Warning: this does not work in some browsers, notably older versions of Firefox and — wait for it — IE.)

Once I had the most basic functionality working — 2 Ace editors for the Javascript code and the view template, and an embedded iframe for the results — I wanted to focus on establishing a clean visual design. That meant I had to stop writing code and stop dreaming up potential features long enough to focus on design. Fortunately I was saved from withdrawal symptoms by all the opportunities which that opened up for obsessive font fiddling and CSS tweaking.

The challenge here was not so much the design of the text content — though I tried to borrow the best from well-designed, readable sites I like, such as the new Boston Globe website, the Nieman Foundation’s Nieman Labs blog, arc90’s Readability tool, and Mark Pilgrim’s Dive Into HTML5. Rather, the challenge proved to be finding a way to keep all the buttons and assorted interactive knobby bits from interfering with the text.

My first attempts weren’t very promising. I couldn’t put my finger on why until I realized that the 4-box layout of learn.knockoutjs.com just wasn’t working for me. Somehow I got the idea that in order to make the tutorial readable, I would have to find a way to “unbox” the design and make it look something like a page of a good technical book that just happened to be able to run code. But that introduced its own problems. Where to put the results of the program the user writes (which is an interactive web app unto itself)? Put that in a box, and, together with the Javascript and Handlebars/HTML input, which seem to need to be in boxes — de facto, you have four little boxes again!

Gradually, it occurred to me that the program output could be in flow with the text, right below whichever paragraph prompted you to try running the updated program. Then, with just a little position: fixed and fluid-layout magic, it would be perfectly reasonable to have the whole page scroll, and the tutorial content with it. That is to say, I rediscovered the basic design of every web page ever.

You say app, I say document. Let’s call the whole thing off.

I mention this particular, uh, discovery because for some reason it seems to be common to design news and learning interactives to have little snippets of text written in large type and stuffed into little boxes. I confess to having cargo-culted this particular design idea not long ago; last year I even fired up an ancient Multimedia Beethoven CD-ROM made some time in the last century to confirm that, yup, instructional text is supposed to be really short and go into a little box on the left!

Microsoft Multimedia Beethoven, circa 1992. Via http://www.uah.edu/music/technology/cai/programs/msbeethoven.html

I wonder if this design habit is an artifact of the days of Flash and native applications built using layout manager APIs and visual UI builders. I get the impression that it’s both difficult and out of the ordinary to try to get text and interactive elements to flow together using those technologies. After all, the designer usually doesn’t know what the text is going to be in advance, and you, the developer, would have to come up with a way to keep track of where in the text the widgets go, then create the appropriate widget objects and break up the text string at the appropriate spots, so that you can feed it all to a layout manager that you would probably have to tweak and fiddle for your somewhat unusual use case. Which suggests a great idea — perhaps we could invent tokens that mean “a widget goes here” and have the author use those to mark up the text somehow…

I kid. But in a serious way, because one of the things I liked least about SproutCore is the way it seems to want to pretend that the web hasn’t been invented yet. It provides widgets that are really meant to be a particular size and at a particular, absolutely-positioned offset specified in Javascript. Until the oddly named StaticContentView was invented, the standard UI widget for displaying text was called a LabelView and wanted, again, to be a particular size (regardless of the size of its content) and at a particular location (regardless of the size of the content surrounding it).

The theory was that SproutCore is for designing “apps” rather than “documents”. But as you might guess, I don’t find that distinction very compelling in late 2011. Yes, clearly, there will always be some apps whose UI is legitimately just a box of buttons or a glorified data entry form. And “everything in its right place, and just where it was last time” is exactly the right motto for such apps.

But much of the interesting stuff in your life happens in some kind of stream of context. Facebook and Gmail (especially the new look) are containers for what are basically documents relevant to your life, yet their designers are not shy about inserting app widgets — stuff that does stuff — right into the middle of that “document”-like flow.

Educational apps likewise should include plenty of text that helps you understand the things they help you to do. At Concord, we’ve been calling for a “deeply digital” curriculum that weaves (among other interactive elements) sensors and simulations tightly into the fabric of textbooks and other media.

You occasionally hear “technology X is for app builders, and web technology Y is really for documents,” but that ignores an important category of innovation that is going on right now: apps that are documents. Or, wait, is that documents that are apps…?

What’s next for Learn.Ember.js

But, back to Learn.Ember.js and what’s next. The single page of tutorial text and the trivial example code I have so far are somewhat lazily inspired by the first page of the Knockout.js tutorial; I just needed some text that isn’t plain lorem ipsum. So I need to write more content. But it’s of equal importance to make it trivial for anyone to clone the Learn.Ember.js repo and submit pull requests with new content — or to simply host their own version, modified as they see fit.

For the time being the tutorial text itself is written as a Handlebars template with embedded expressions that tell where to put the buttons, and the initial example code is a string-valued property of a JavaScript object. So far, it’s been pretty painless to edit the tutorial text in Handlebars form, but the need to include view class names in the text is an obvious mixing of unrelated concerns — and, worse, the tutorial text is transported to clients as a compiled Handlebars template that is completely invisible to search engines. (Until the JavaScript gets to work, the index.html file consists of a blank page.)

I think the solution is to put the actual tutorial content, written in clean, semantic HTML5, into the body of the index.html file. Then we can agree as a convention to identify the “run” buttons by applying a particular CSS class, and to represent the location of the output by inserting an empty div with a particular CSS class. The Learn application can then easily use jQuery to scan the DOM as needed, inserting Ember.js views into the right places using Ember.View’s appendTo method and a little bit of DOM manipulation magic.

A remaining question would be whether and how to specify the initial code and the working “help me” code inside the HTML document. Putting the code in script tags with a fake MIME type (text/x-example-javascript) would make it easy to insert the code without having to HTML-escape it and without it running on page load, but then the code isn’t visible to user agents — like search engines — that don’t execute Javascript. Perhaps that is enough, or perhaps the code should go, properly escaped, into hidden <div> elements.
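To make the previous two paragraphs concrete, here is a rough sketch (the class names, data attribute, and helper function are hypothetical, not the names Learn.Ember.js actually uses):

<!-- Example code shipped inertly in the page; the unrecognized type keeps the browser from running it. -->
<script type="text/x-example-javascript" id="example-1">
MyApp = Ember.Application.create();
</script>
<div class="run-here" data-example="example-1"></div>

<script>
// The Learn application scans for output placeholders, pulls the inert example
// code out of the page, and wires up an editor plus an Ember view for each one.
$('div.run-here').each(function () {
  var code = $('#' + $(this).data('example')).text();
  // attachEditorAndOutput is a hypothetical helper that would create an Ace
  // editor pre-filled with `code` and append an Ember.View (via appendTo)
  // to render the running result.
  attachEditorAndOutput(this, code);
});
</script>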

If that were done, then anyone could write their own interactive Ember tutorial by writing an appropriately-marked up HTML file and inserting a few lines into the head of the document to include the Javascript code of the Learn application, which would take care of translating the tutorial document into a working app. And if they were to publish the HTML file to a server, it would be fully searchable.

Before I get that far, of course, I’ll have to tackle navigation between tutorials and pages of a tutorial — a bit of design I left for later. As fodder for a new blog post, of course!

Updated 1am Monday, December 19 with better information about browser compatibility after I made a quick fix to the Learn.Ember.js prototype itself to make it work in Safari, and with a link to all of our open positions rather than just the developer position.

Open source spin-off projects

Wednesday, November 23rd, 2011 by Scott Cytacki

Developers at the Concord Consortium work on a wide variety of grants, and in the process we create reusable pieces of code. With a little work some of these reusable bits of code can be turned into spin-off projects that have a life of their own. In my opinion these spin-off projects have the best potential for broad long-term impact.

Recently I was reminded about these types of spin-off projects when Richard Klancer relayed a conversation he had with Jeremy Ashkenas. Jeremy has been very successful in this area during his work on DocumentCloud.

We strive to make our individual projects successful, but often their technology is complex and not easy to re-use. The impact of the individual project is the research enabled by the technology, or demonstrating the usefulness of a new concept. However, the collection of technologies used in the project normally becomes a one-off: it is no longer used once the project reaches its 2-5 year end.

Alternatively, within these complex projects are reusable pieces of code that are simple, easy to maintain, and solve a common need. Because of this they have potential to be popular outside of our organization. We do have some partial successes with spin-offs like this.

  • MozSwing – mostly abandoned, though it was used in at least one commercial product
  • Java Sensor Library – collection of JAR files for communicating with a variety of sensors available in schools
  • RaphaelViews – SproutCore 1.x library for creating fully fledged SproutCore 1.x views with Raphael
  • SproutCore TestDriver – ruby gem for running SproutCore Jasmine and QUnit tests on a CI server

None of these has become a successful open source spin-off project. To be successful, such a project needs an active community that includes both developers and users. And the amount of work required to maintain it by Concord Consortium developers needs to be small enough that it doesn’t prevent us from reaching the goals of individual grant projects.

The MozSwing project would require too much maintenance. The Java sensor project is too intertwined with our other Java code. RaphaelViews and the SproutCore TestDriver don’t have the above problems, but they have not been polished and announced to the right audience. I don’t think the polishing would take a lot of effort, but making the time and finding the support to do so is hard. We are always working on the next big thing, so it takes discipline to really finish up what is already working internally.

There are more potential open source spin-off projects within the technology at the Concord Consortium that have wider audiences than the ones above. With luck, we can change our culture to encourage this work more and make more of this great stuff accessible.

Do you agree that we should be spinning off more projects? Do you have experience with spinning off projects like these? Any tips?

Idea for a concise syntax of nested models in cucumber

Friday, April 22nd, 2011 by Scott Cytacki

I started thinking about how to more easily specify some of the deeply nested structures we need during testing.

First, we already have a step for doing this. An example looks like this:

And the following investigations with multiple choices exist:
| investigation        | activity | section   | page   | multiple_choices | image_questions |
| first investigation  | act 1    | section 1 | page 1 | a, b             |                 |
| first investigation  | act 2    | section 2 | page 2 | c, d             |                 |
| second investigation | act 3    | section 3 | page 3 | b_a, b_b         | image_q         |
| second investigation | act 4    | section 4 | page 4 | b_c, b_d         |                 |

That is using some previously defined multiple_choice and image_question objects. I was trying to see if I could write it more like this:

And the following investigations with multiple choices exist:
      investigation "first investigation"
        activity "act 1"
          section "section 1"
            page "page 1"
              multiple_choice :prompt => "Prompt 1", :choices => "a,b,c,d", :correct_answer => "a"
              multiple_choice :prompt => "Prompt 2", :choices => "a,b,c,d", :correct_answer => "a"
            page "page 2"
              image_question :prompt => "Image Prompt"

I didn’t see an easy way to do that, but with a little method_missing, haml, and cucumber’s """ notation, the following looks pretty straightforward:

And the following investigations with multiple choices exist:
      """
      - investigation "first investigation" do
        - activity "act 1" do
          - section "section 1" do
            - page "page 1" do
              - multiple_choice :prompt => "Prompt 1", :choices => "a,b,c,d", :correct_answer => "a" do
              - multiple_choice :prompt => "Prompt 2", :choices => "a,b,c,d", :correct_answer => "a" do
            - page "page 2" do
              - image_question :prompt => "Image Prompt" do
      """

In a simple implementation, a map could be used to define the code to add a child object. So for investigation it would be {activities << child}, and for page it would be {add_element(child)}. The creation of the objects could use factory girl, so the names could be any of the existing factories.
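To make that concrete, here is a rough, untested sketch of what the step definition might look like (the model classes, association names, and the “bare string means name” convention are assumptions based on the tables above):

# Rough sketch: evaluate the haml block against a builder whose method_missing
# creates objects via Factory Girl and nests them using the map described above.
class NestedModelBuilder
  ADD_CHILD = {
    Investigation => lambda { |parent, child| parent.activities << child },
    Activity      => lambda { |parent, child| parent.sections << child },
    Section       => lambda { |parent, child| parent.pages << child },
    Page          => lambda { |parent, child| parent.add_element(child) }
  }

  def initialize
    @parents = []   # stack of enclosing objects
  end

  def method_missing(factory_name, *args, &block)
    # Treat a bare string argument as the object's name (an assumed convention).
    attributes = args.first.is_a?(String) ? { :name => args.first } : (args.first || {})
    child = Factory(factory_name, attributes)
    ADD_CHILD[@parents.last.class].call(@parents.last, child) if @parents.last
    if block
      @parents.push(child)
      block.call      # evaluates the nested haml lines
      @parents.pop
    end
    child
  end
end

Given /^the following investigations with multiple choices exist:$/ do |haml|
  # Haml's "- " lines are plain Ruby, so rendering the block against the builder
  # turns the indentation-based nesting into the method calls above.
  Haml::Engine.new(haml).render(NestedModelBuilder.new)
end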

I’m curious if anyone has seen something like this that is already written? And I’m wondering if there is something like haml that uses indentation for blocks but defaults to “ruby mode” so there wouldn’t be a need for the “- ” before each line.