Category Archives: Development Blog

Our blog about development.

New features in CODAP

Our Common Online Data Analysis Platform (CODAP) software provides an easy-to-use web-based data analysis tool, geared toward middle and high school students and aimed at teachers and curriculum developers. CODAP is already full of amazing features. We’re excited to announce several new features!

Open invitation to software developers

Our Common Online Data Analysis Platform (CODAP) offers easy-to-use web-based software that makes it possible for students in grades 6 through college to visualize, analyze, and ultimately learn from data. Whether the source of data is a game, a map, an experiment, or a simulation, CODAP provides an immersive, exploratory experience with dynamically linked data representations, including graphs, maps, and tables. CODAP is not dependent on specific content, so data analysis can be integrated into math, science, history, or economics classrooms.

CODAP is built with HTML5, making use of JavaScript, HTML, and CSS3. Various open source libraries are part of CODAP, including SproutCore, jQuery, Raphaël, Leaflet, and several other smaller libraries. CODAP uses SproutCore as an application framework. You can deploy CODAP as a static website with no server interaction. CODAP can be configured to store documents on your local device or integrated with an online server for cloud-based document management. It can also log user actions to a server specified in a configuration file.

Our goal is to create a community of curriculum and software developers committed to ensuring that students from middle school through college have the knowledge and skills to learn with data across disciplines. We need your help!

Get involved

Book review: "Simulation and Learning: A Model-Centered Approach" by Franco Landriscina

Interactive science (Image credit: Franco Landriscina)
If future historians were to write a book about the most important contributions of technology to improving science education, it would be hard for them to skip computer modeling and simulation.

Much of our intelligence as humans originates from our ability to run mental simulations or thought experiments in our minds to decide whether it would be a good idea to do something or not. We are able to do this because we have already acquired some basic ideas or mental models that can be applied to new situations. But how do we get those ideas in the first place? Sometimes we learn from our experiences. Sometimes we learn from listening to someone. Now we can also learn from computer simulations, carefully programmed by someone who knows the subject matter well and typically expressed through interactive visualizations driven by some sort of calculation. In cases where the subject matter is entirely alien to students, such as atoms and molecules, computer simulation is perhaps the most effective form of instruction. Given the importance of mental simulation in scientific reasoning, there is no doubt that computer simulation, which bears some similarity to mental simulation, should have great potential for fostering learning.

Constructive science (Image credit: Franco Landriscina)
Although enough ink has been spilled on this topic and many of these thoughts have existed in various forms for decades, I found the book "Simulation and Learning: A Model-Centered Approach" by Dr. Franco Landriscina, an experimental psychologist in Italy, to be a masterpiece that I must keep on my desk and chew over from time to time. What Dr. Landriscina has accomplished in a book of less than 250 pages is amazingly deep and wide. He starts with fundamental questions in cognition and learning that are related to simulation-based instruction. He then gradually builds a solid theoretical foundation for understanding why computer simulation can help people learn and think, by grounding cognition in the interplay between mental simulation (internal) and computer simulation (external). This intimate coupling of internalization and externalization leads to some insights into how the effectiveness of computer simulation as an instructional tool can be maximized in various cases. For example, Landriscina's two illustrations, embedded in this blog post, represent how two ways of using simulations in learning, which I have dubbed "Interactive Science" and "Constructive Science," differ in terms of the relationships among the foundational components of cognition and simulation.

This book is not only useful to researchers. Developers should benefit from reading it, too. Developers tend to create educational tools and materials based on the learning goals set by education standards, with less consideration of how complex learning actually happens through interaction and cognition. This succinct book provides a comprehensive, insightful, and intriguing guide for developers who would like to understand simulation-based learning more deeply in order to create more effective educational simulations.

Safari JavaScript bug

Recently we found that our Lab framework caused the JavaScriptCore engine of Safari 5.1 to crash. Safari 5.1 is the latest version available for OS X 10.6. If you have OS X 10.7 or 10.8, then you have Safari 6, which doesn’t have this problem.

Too long; didn’t read solution: do not name your getters the same as your closure variables.

After a lot of detective work, I figured out what the problem was. A block of code like this was causing it:

function testObject(){
  var myProperty = "hello";
  return {
    // a getter with the same name as the closure variable above
    get myProperty() {
      return myProperty;
    }
  };
}
var myObject = testObject();
console.log(myObject.myProperty);

Here is a fiddle of the same code. If you run that fiddle in Safari 5.1, you will see a ReferenceError for myProperty. Note: you need to make sure script debugging is disabled; if script debugging is enabled, this problem doesn’t occur. That block of code is using a JavaScript getter. If you are not familiar with JavaScript getters and setters, this post by John Resig is a great place to start.

If the closure variable name is changed to _myProperty like this:

function testObject(){
  var _myProperty = "hello"; // renamed so it no longer matches the getter name
  return {
    get myProperty() {
      return _myProperty;
    }
  };
}
var myObject = testObject();
console.log(myObject.myProperty);

Then it works! So there is an issue with Safari when a getter has the same name as the variable in the outer closure. This didn’t make sense, though: we use this pattern of a getter accessing a variable of the same name throughout much of our code, and those parts of the code worked fine. After more exploration I found that the following block of code works fine in Safari 5.1:

function testObject(){
  var myProperty = "hello";
  return {
    get myProperty() {
      return myProperty;
    },
    methodUsingMyProperty: function () {
      return myProperty;
    }
  };
}
var myObject = testObject();
console.log(myObject.myProperty);
Here is a fiddle of the same code.

This code uses the closure variable in another method that is part of the returned object. Based on this, I believe the problem is that Safari fails to keep the closure variable available when it has the same name as the getter and is only referenced by that getter. I assume it is common for browsers to try to optimize closures by discarding variables that aren’t used. So it seems the analysis Safari does to see whether a variable is used fails to mark a variable as used when it is only referenced inside a getter with the same name.

I have submitted a bug to Apple about this. Their bug tracker is private though, so there is no way for you to see that bug.

So to summarize, you should not name your getters the same as your closure variables.

Serious Performance Regression in Firefox 18 and newer

The Firefox performance regression introduced into the codebase on 2012-09-29 and present in FF v18, v19, and Nightly versions is much more serious than I previously thought.

Basically, FF versions after v16 are now almost unusable when running NextGen MW models of any complexity for longer than 30 seconds.

See: Firefox Performance Comparison 20131902 and Confirmation of FF slowdown over time.

There are several active Mozilla issues that people are working on. This is my original bug report:

And here are the other bug reports that resolving this issue depends on:

Server-side upload time tracking

I wanted to see if we could roughly log how long users are spending waiting for learner data uploads. The more accurate way to do this is on the client side. However, I wanted to try it on the server side so it could be applied in many cases without needing instrumented clients that send data back to the server.

I looked around for a while to see if this had been documented anywhere, but I didn’t find anything. So I decided to try something and test whether it would work.

The Conclusion

Yes, it is possible. At least, the ‘%t’ option, when added to the request headers, records the time at which the request started. This time is before most of the POST data is uploaded, so it can be used to get an estimate of upload times. This estimate seemed very good in my testing, but it should be verified in the real world of computers in schools before relying on it for something important.

The Test

The idea for this test came from Aaron Unger.

In summary, it was tested with a simple Rack app running on an EC2 server that was identical to the servers we use to run our portals. Then, on the client side, I used curl and Charles (the personal proxy) to send it a large chunk of data and record the timing.

The server was running Apache 2.2.22 and was configured with a Passenger web app. I won’t go into that setup here. Additionally, I added this to the Apache configuration:

RequestHeader set X-Queue-Start "%t"

Then in the web app folder I added this config.ru file:

run lambda { |env|
  start_time = Time.now
  # Read the entire POST body so the response can't be sent
  # until the upload has completed.
  if env['rack.input']
    env['rack.input'].read
  end
  sleep 5
  [200, {"Content-Type" => "text/plain"},
    ["Apache Start Time: #{env['HTTP_X_QUEUE_START']}\n" +
     "Start Time: #{start_time}\n" +
     "End Time: #{Time.now}\n"]]
}

Then, on my local machine, I ran Charles, the personal proxy, which starts a proxy on port 8888.

I made a large random data file with:

head -c 2000000 /dev/urandom > random_data

Then I sent that off to the server with curl:

% time curl -x localhost:8888 --data-urlencode something@random_data http://testserver-on-aws
Apache Start Time: t=1359399773413862
Start Time: 2013-01-28 19:02:55 +0000
End Time: 2013-01-28 19:03:00 +0000
real    0m8.229s
...

Converting the timestamp shows the Apache start time is 3 seconds before the Rack app's start time. The simple server always waits for 5 seconds, so together these make up the 8 seconds reported. Bingo!
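
As a sanity check, here is a minimal sketch (in JavaScript; not from the original post) of that conversion. Apache's %t value is "t=" followed by microseconds since the Unix epoch, and since the app's printed start time has only one-second resolution, the computed gap is approximate:

// Convert the X-Queue-Start header ("t=" + microseconds since the epoch)
// into a Date, then estimate upload time as the gap between it and the
// moment the Rack app began handling the request.
function parseQueueStart(header) {
  var micros = parseInt(header.replace("t=", ""), 10);
  return new Date(micros / 1000); // Date expects milliseconds
}

var apacheStart = parseQueueStart("t=1359399773413862"); // from the output above
var appStart = new Date("2013-01-28T19:02:55Z");         // the app's start time

console.log(((appStart - apacheStart) / 1000) + " seconds spent queued/uploading");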

I wasn’t convinced that the 3 seconds was actually upload time; I thought perhaps it was some Apache processing time that happened after the upload. So I used the throttle option in Charles to slow down the upload. Doing this gave the expected result: the Apache start time was even earlier than before, and subtracting the Apache start time from the end time came very close to the total request time reported on the command line.

Notes

This server-side approach does not cover all of the time a user spends waiting for an upload to complete, and I would guess there will be cases where it isn’t accurate. For example, a proxy or other network device might delay POST requests in some way, and in that case this approach would not record that time.

Google Summer of Code Development: Single Sign-On

[Editor’s note:  Vaibhav Ahlawat was a Google Summer of Code 2012 student at the Concord Consortium.]

At any given time, the Concord Consortium runs a number of small research projects and large scale-up projects, but in the past we built each system separately and each required a separate login. Want to teach your fourth graders about evolution? Great. Log in at the Evolution Readiness portal. Wait, you also teach your students about the cloud cycle? That requires logging in at the Universal Design for Learning (UDL) portal.

Clearly, some students and educators find value across different projects, and my goal is to make it a little easier for them to sign in just once and get access to the myriad great resources at the Concord Consortium for teaching science, math and engineering. As a Google Summer of Code student, I’m working under the guidance of Scott Cytacki, Senior Software Developer, to bring different projects under a single authentication system or, in the language of software development, a Single Sign-On.

Single Sign-On will allow both students and teachers to log in across different projects with a single username and password, doing away with the need to remember multiple usernames and passwords. They’ll be able to move seamlessly among projects and find the resources they need to teach and learn. I’m also working on code that will allow students and teachers to sign up and log in to Concord Consortium’s portals with their existing Google+ or Facebook accounts.

For those who want technical details, read on.

I’m working on moving from Restful Authentication to Devise, both of which are authentication solutions for Rails. These days, Devise is the preferred one in the Rails community, and it makes things like password resets and confirmation emails pretty easy. Once we are done with this conversion, adding support for signup and login using Facebook and Google+ accounts should be simple. For example, to add support for the Google OAuth2 authorization protocol, all we have to do is add a gem named omniauth with an OAuth2 strategy, which works brilliantly with Devise, and then write a couple of functions.

Here’s a snippet of my code, which adds Google OAuth2 support to Devise:

class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def google_oauth2
    # The User.find_for_google_oauth2 method also needs to be implemented.
    # It looks for an existing user by e-mail, or creates one with a random password.
    @user = User.find_for_google_oauth2(request.env["omniauth.auth"], current_user)

    if @user.persisted?
      flash[:notice] = I18n.t "devise.omniauth_callbacks.success", :kind => "Google"
      sign_in_and_redirect @user, :event => :authentication
    else
      session["devise.google_data"] = request.env["omniauth.auth"]
      redirect_to new_user_registration_url
    end
  end
end
Support for authentication using the Facebook API can be added just as simply. Support for OAuth, which is used by many learning management systems, is also provided, making integration far easier than it was before.

I’m happy to help make it easier for Concord Consortium’s resources to be used by many more people.

— By Vaibhav Ahlawat

Video: Under the Hood of Molecular Workbench

It takes a lot of computation to model the atomic and molecular world! Fortunately, modern Web browsers have 10 times the computational capacity and speed compared with just 18 months ago. (That’s even faster than Moore’s Law!) We’re now taking advantage of HTML5 plus JavaScript to rebuild Molecular Workbench models to run on anything with a modern Web browser, including tablets and smartphones.

Director of Technology Stephen Bannasch describes the complex algorithms that he’s been programming behind the scenes to get virtual atoms to behave like real atoms, forming gases, liquids and solids while you manipulate temperature and the attractive forces between atoms. See salt crystallize and explore how the intermolecular attractions affect melting and boiling points. Imagine what chemistry class would have been like (or could be like today) if the foundation of your chemical knowledge started here.

Technology and Curriculum Developer Dan Damelin goes on to describe how open source programming opens up possibilities. For instance, Jmol is a Java-based 3D viewer for chemical structures that we were able to incorporate into Molecular Workbench to allow people to easily build activities around manipulation of large and small molecules, and to make connections between static 3D representations and the dynamic models of how molecules interact. We’re planning to build a chemical structure viewer that won’t require Java and will extend another open source project based on JavaScript and WebGL to visualize molecules in a browser.

Interested in this innovative programming? Great! We’re looking for software developers.

A Datasheet for NextGen MW

The opposite of Thomas Dolby

I was terrible at the first four weeks of organic chemistry. I just couldn’t get the right pictures into my head.

The depictions of the chemical reaction mechanisms I was supposed to memorize seemed like just so many Cs (and Hs and Os and, alarmingly, Fs) laid out randomly as if I were playing Scrabble. And I swear the letters rearranged themselves every time I looked away, like a scene out of a movie about an art student’s science-class nightmares (minus the extended fantasy sequence in which the letters grow fangs and leap off the page to menace the poor protagonist – unless I’ve blocked that part out).

Fortunately, I knew exactly what to do: I had to start picturing molecules in 3D, and in motion, as soon as possible. That ability seemed to take its own sweet time to develop. But once things “clicked” and I could visualize molecules in motion, the reactions finally made sense, as did all the associated talk of electronegativity, nucleophilic attack, and inside-out umbrellas. I aced the final.

Now, our Molecular Workbench software isn’t specifically designed to help undergraduates get through organic chemistry. It is designed to help students at many levels by letting them interact with simulations of the molecular world so they get the right pictures into their heads, sooner. It’s here to help that future art student and movie director beginning to nurse a complex about the 10th grade science class he’s stuck in right now.

The weight of history

But the “Classic” Molecular Workbench we have now was built for a different world. It runs in desktop Java, for one thing, meaning (among other things) that it’ll never run on iPads. More fundamentally, it was built to be “Microsoft Word for molecules” in a time when Microsoft Word was the dominant model for thinking about how to use a computer:

“Hello, blank page! Let’s see, today I’ll make a diffusion simulation. I should write something about it … Let’s make that 12-point Comic Sans. No, my graphic designer brother-in-law keeps telling me not to use that so much, so Verdana it is, then. Now how do I add that model again? Oh yeah, Tools -> Insert -> Molecular Model…”

This model is constraining even though it’s always been possible to download and open Molecular Workbench via the Web, and even though MW saves simulation-containing activities to special URLs.

We have somewhat different expectations these days because of the Web, social media, mobile apps, and casual games. If I build a great in-class “activity” based on a series of molecular models, then I should be able to share that activity with the world with minimum difficulty. And if you find one of the simulations I created particularly illustrative, you should be able to put that model in a blog you control, or include the model as part of your answer to a question on http://physicsforums.com/.

Moreover you ought to be able to perturb the running simulation by reaching out and touching it with your fingers, or simply by shaking your tablet to see what effect that has on the simulation “inside” it. You shouldn’t be required to operate the simulation at one remove, via a mouse and keyboard, when it’s not necessary.

That’s why we’re excited about the Google-funded, next-generation Molecular Workbench we have started to build. The HTML5 + JavaScript technology we’re using to build the next generation of our MW software (hereafter called NextGen MW for short) will make it much more practical to enable these kinds of uses.

Boldly doing that thing you should never do

But designing NextGen MW to be a native of the real-time Web of 2012 rather than a visitor from the land of 1990s desktop computing means that we’re committed to rebuilding the capabilities of “Classic” Molecular Workbench from scratch. That is, we’re doing the very thing Joel Spolsky says you must never do! But ignoring platforms which run Java badly or not at all isn’t an option, and neither is trying to run Classic MW in a Google Web Toolkit-style compatibility layer that compiles Java to JavaScript. (With the latter option, we would almost surely be unable to optimize either the computational speed or the overall user experience well enough to make it practical to use NextGen MW on phones, inexpensive tablets, or even expensive tablets. But even that misses the point. We’re not a consumer products company trying to optimize the return on our past investment. We’re an R&D lab. We try new things.)

But writing things from scratch poses a challenge. We want the molecular dynamics simulations run by NextGen MW to run “the same” as the equivalent simulations run in Classic MW. But “the same” is a slippery concept. In traditional software development, asking two different implementations of a function or method to produce the “same” result often means simply that they return identical data given identical input, modulo a few unimportant differences.

It would be nice to extend this idea to the two-dimensional molecular dynamics simulations we are now implementing in NextGen MW. Classic MW doesn’t have a test suite that we can simply adapt and reuse. But, still, we might think to set up identical initial conditions in NextGen MW and Classic MW, let the simulations run for the same length of simulated time, and then check back to make sure that the atoms and molecules end up in approximately the same places, and the measurements (temperature, pressure, etc.) are sufficiently close. And, voilà, proof that at least this NextGen MW model works “the same” as the Classic MW model. (Or that it doesn’t, and NextGen MW needs to be fixed.)

Never the same thing twice?

Unfortunately, this won’t work. Not even a little bit, and the reason is kind of deep. The trajectories of the particles in a molecular dynamics simulation (and in reality) exhibit a phenomenon known as sensitive dependence on initial conditions. Think of two simulations whose initial conditions are exactly the same except for one tiny difference. Now, pick a favorite particle and watch “the same” particle in each simulation as you let the simulations run. (And assume the simulations run in lockstep.) For a very short time, the particle will appear to follow the same trajectory in simulation 1 as in simulation 2. But as you let the simulations run a little longer, the trajectories of the two particles will grow farther and farther apart, until, very quickly, looking at simulation 1 tells you nothing about where to find the particle in simulation 2.

Very well, you say: maybe simulation 1 and simulation 2 started a little too far apart. So let’s make the difference in the initial conditions a little smaller. Sure enough, the trajectories stay correlated a little bit longer. But only a very little bit. Here’s the rub: if you want simulation 2 to match simulation 1 for twice as long, you need the initial conditions to be some number of times closer, let’s say 10. But if you need the simulations to match for one more “time” as long, that is, 3 times as long, you need the initial conditions to be 10 times closer still, or 100 times closer. And if you want simulation 1 to make a meaningful prediction about simulation 2 for ten times as long? Now you need the initial conditions to be a billion (10^9) times closer. In practice, this means that if there’s any difference at all between the two initial conditions, no matter how seemingly insignificant, then outside of a short window of time the two simulations will predict very different particle locations and velocities.
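
In symbols (a standard way of putting it; the notation is mine, not part of the original post): if \(\delta_0\) is the initial separation between the two simulations and \(\lambda\) is the rate of divergence, then

\[
\delta(t) \;\approx\; \delta_0\, e^{\lambda t},
\qquad
t_{\text{match}} \;\approx\; \frac{1}{\lambda}\,\ln\frac{\epsilon}{\delta_0},
\]

where \(\epsilon\) is the separation at which the trajectories no longer usefully match. The matching time grows only logarithmically as \(\delta_0\) shrinks, so each extra unit of matching time costs a constant multiplicative factor in the initial separation: exactly the 10-times, 100-times, billion-times pattern above.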

Perhaps you think this is a contrived situation having nothing to do with comparing Classic MW and NextGen MW. Can’t we start them with not just similar but identical initial conditions? Unfortunately, this escape hatch is barred, too. The tiniest and most seemingly insignificant difference between the algorithms NextGen MW runs and the algorithms Classic MW runs right away results in a small difference in the trajectories, and after that point sensitive dependence on initial conditions takes over: the subsequent trajectories soon become totally different. Trying to run precisely the same algorithms in NextGen MW as in Classic, down to the exact order of operations, would not only intolerably constrain our ability to develop new capabilities in NextGen MW, but would be futile: the differing numerical approximations made by Java and JavaScript would result in yet another small difference, which would in short order become a big difference.

Science!

So, wait a minute: You can’t test NextGen MW against Classic MW because even the tiniest difference between them makes them behave … totally differently? How do we trust either program, then? And how is this science again?

Well, notice that I didn’t quite say the two programs behave totally differently. Yes, the exact trajectories of the molecules will quickly diverge, but the properties we can actually measure in the real world — temperature, pressure, and the like — unfold according to laws we understand, and should be the same in each (not counting minor, and predictable, statistical fluctuations). After all, we can do beautifully repeatable experiments on “molecules in a box” in the real world without knowing the exact locations of the molecules. Indeed, when van der Waals improved on the ideal gas law by introducing his equation of state, which includes corrections for molecular volume and intermolecular attraction, the notion that molecules actually existed was not yet universally accepted.

So what we need are molecular models whose temperature, pressure, diffusion coefficient, heat capacity, or the like depend in some way on the correctness of the underlying physics. Ideally, we would like to be able to run a Classic MW model and have it reliably produce a single number which (whatever property it actually measures) is demonstrably different when the physics have been calculated incorrectly. Then we could really compare NextGen MW and Classic MW — and perhaps even find a few lingering errors in Classic MW!

Unfortunately for this dream, our library of models created for Classic MW tends to consist of complex interactives which require user input and aim to get across the “gestalt” of molecular phenomena (e.g., one model encourages students to recognize that water molecules diffusing across a membrane aren’t actively “aiming for” the partition with a higher solute concentration but move randomly). The models are not intended to be part of numerical experiments designed carefully to produce estimates of otherwise difficult-to-measure properties of the real world. They would require substantial rework to generate single numbers that reliably test the physics calculations. For that matter, there aren’t many Classic models at all that conveniently limit themselves to just the features we have working right now in NextGen MW, and we can’t wait until we have developed all the features before we begin testing.

Charts and graphs that should finally make it clear

Therefore, we have turned to making new Classic MW models that demonstrate the physics we want NextGen MW to calculate, and comparing the numbers generated in Classic MW to the numbers generated when the equivalent model is run in NextGen MW. I’ve begun to think of this process as creating the “datasheet” for Classic and NextGen MW, after the datasheets that detail, in charts and graphs, the performance characteristics an engineer can expect an electronics part to obey.

So far, we’ve just gotten started creating the MW datasheet. I’ve written a few ugly scripts in half-remembered Python to create models and plot the results, and so far, sure enough, it looks like an issue with the NextGen MW physics engine that I knew needed fixing, needs fixing! (The issue is an overly clever, ad hoc correction I introduced to smooth out some of the peculiar behavior of our pre-existing “Simple Atoms Model.” But that’s good fodder for a future blog post.)

But we have ambitions for these characterization tests. Using the applet form of Classic MW, we hope to make it possible to run each characterization test by visiting a page with equivalent Classic and NextGen MW models side by side, with output going to an interactive graph. But with or without this interactive form of the test, once the characterization tests have been done, they will help us find appropriate parameters for automated tests that will run whenever we update NextGen MW, so that we can be sure the physics remain reliable.
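
To make that concrete, here is a minimal sketch (in JavaScript, with a hypothetical model API and made-up reference numbers; not our actual test code) of what such an automated characterization test might look like:

// Run a model for a fixed number of steps, then check that a measurable,
// physics-dependent property (here, mean pressure) stays within the band
// established by the Classic MW datasheet runs.
// The model API (tick, pressure) and the reference values are assumptions.
function checkMeanPressure(model, referenceMean, tolerance) {
  var sum = 0, steps = 10000;
  for (var i = 0; i < steps; i++) {
    model.tick();            // advance the simulation one time step
    sum += model.pressure(); // accumulate the instantaneous pressure
  }
  var mean = sum / steps;
  if (Math.abs(mean - referenceMean) > tolerance) {
    throw new Error("Physics regression: mean pressure was " + mean);
  }
}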

I’ll update you as we make progress.

Streaming Arduino Data to a Browser without Flash or Java

What if you were reading a blog or working through an online lesson and you could just plug in your Arduino and start taking data or interacting with models right in your browser?

Here at the Concord Consortium we are very interested in making sensors that are easy to use in the classroom or embedded directly into rich online curriculum. We’ve done some work in the past using applets as an intermediary to read data from commercial sensors and display it in lightweight graphs in the browser. When we think of fun, hackable, multi-probe sensors, though, we naturally think of Arduinos — we are open-source geeks after all.

In thinking of ways to display Arduino data in a browser with the minimum amount of fuss, we considered both our existing applet technique and the new HID capabilities of the Arduino Uno. But while we will probably still find uses for both strategies, it occurred to Scott Cytacki, our Senior Developer, that we could simply use the common Ethernet Shields (or the new Arduino Ethernets) to send the data directly to the browser.

With this idea, it was quick work to hack the Arduino Server example to send JSON values of the analog pins and create a webpage that would rapidly poll the Arduino for data. So here is the first example, which I wrote in about 70 lines of code (including the Arduino sketch), usable with any Ethernet-capable Arduino in any browser:

  1. Upload the tiny server sketch to your Arduino
  2. Plug in your ethernet shield, connect the Arduino to your computer with an ethernet cable and wait about 30 seconds for the Arduino server to boot up
  3. Optionally connect a sensor to pin A0. (The demo below is scaled for an LM35 temperature sensor, but you don’t need it — you might need to rescale the graph by dragging on the axis to see the plot, though)
  4. Click the “Start Reading” button below

You should see your Arduino data filling up the graph. If not, wait another 20 seconds to ensure the server is fully booted and click the “play” button at the top right to start it again.

Wow, that was actually pretty easy!

I created the slightly more complicated example below, which reads data from all six analog pins, applies an optional conversion, and graphs any one of the data streams. If you were already reading data above, you don’t need to do anything new; just hit the button:

Direct link to stand-alone version

We think this is really cool, and we can’t wait to come up with new ways to integrate this into online content. Why not feed the temperature data into the HTML5 version of Molecular Workbench we’re developing under our new grant from Google.org, for instance, and see the atoms speed up as the temperature increases? Or set up an online classroom where students across the globe can take environmental readings and easily upload and pool their data?

Even by itself, the example above (perhaps expanded further by an interested reader) makes a great workbench for developing on an Arduino — much better than watching the raw Serial Out panel. And of course all the programming can happen in your friendly JavaScript environment, instead of needing to keep recompiling code and uploading it to your Arduino as you iterate.

Technical details:

  • This Arduino Sketch creates a server running on http://169.254.1.1, a private local IP that automatically tries not to conflict with other servers, allowing for easier connection without a DHCP server. The sketch then returns JSON data using the JSON-P technique of calling back a function, which allows us to make cross-domain requests (a minimal sketch of this technique appears after this list).
  • Click on the tabs at the tops of the embedded jsFiddle examples to see the source code for streaming data to the webpage, or fork and edit any of the examples yourself.
  • The graphs are created using D3.js, and make use of the simple-graph library created by Stephen Bannasch.
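
For readers unfamiliar with JSON-P, here is a minimal sketch of the polling technique described above. It is an illustration, not the code from the examples: the query-parameter name and the shape of the returned data are assumptions.

// Poll the Arduino server by injecting a script tag; the server replies
// with JavaScript that calls the named callback with the pin readings,
// which is what lets a page on any domain read the data.
function pollArduino() {
  var script = document.createElement("script");
  script.src = "http://169.254.1.1/?callback=handleReadings"; // hypothetical query format
  script.onload = function () { document.head.removeChild(script); };
  document.head.appendChild(script);
}

function handleReadings(data) {
  // data might look like: { "A0": 512, "A1": 301, ... } (assumed shape)
  console.log(data);
  setTimeout(pollArduino, 100); // poll again in 100 ms
}

pollArduino();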