Monthly Archives: January 2013

Server-side upload time tracking

I wanted to see if we could roughly log how long users are spending waiting for learner data uploads. The more accurate way to do this is on the client side. However, I wanted to try it on the server side so it could be applied in many cases without needing instrumented clients that send timing data back to the server.

I looked around for a while to see if this had been documented anywhere, but I didn’t find anything. So I decided to try something and test whether it would work.

The Conclusion

Yes, it is possible. At least in Apache, the ‘%t’ value, when added to the request headers, is the time at which the request was started. This time is recorded before most of the POST data has been uploaded, so it can be used to estimate upload times. The estimate looked very good in my testing, but it should be verified in the real world of computers in schools before relying on it for anything important.

The Test

The idea for this test came from Aaron Unger.

In summary, it was tested with a simple Rack app running on an EC2 server identical to the servers we use to run our portals. On the client side I used curl and Charles (the personal proxy) to send it a large chunk of data and record the timing.

The server was running Apache 2.2.22, configured with a Passenger web app; I won’t go into that setup here. In addition, I added this to the Apache configuration:

RequestHeader set X-Queue-Start "%t"

Then in the web app folder I added this config.ru file:

run lambda { |env|
  # Record when the Rack app actually starts handling the request
  start_time = Time.now
  # Read the request body so the full upload has been consumed
  if env['rack.input']
    env['rack.input'].read
  end
  # Artificial 5-second delay so the timings are easy to pick apart
  sleep 5
  [200, {"Content-Type" => "text/plain"},
    ["Apache Start Time: #{env['HTTP_X_QUEUE_START']}\n" +
     "Start Time: #{start_time}\n" +
     "End Time: #{Time.now}\n"]]
}
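
For the real goal of logging how long users wait on uploads, the same arithmetic could live in a small piece of Rack middleware instead of being echoed back in the response. The sketch below is only an illustration of the idea, not code from our portals; the UploadTimeLogger name and the log format are made up here, and it assumes the X-Queue-Start header is set as above.

# upload_time_logger.rb -- a sketch, not production code
require 'logger'

class UploadTimeLogger
  def initialize(app, logger = Logger.new($stdout))
    @app = app
    @logger = logger
  end

  def call(env)
    header = env['HTTP_X_QUEUE_START'] # e.g. "t=1359399773413862"
    if header && env['REQUEST_METHOD'] == 'POST'
      # The value after "t=" is microseconds since the epoch
      apache_start = Time.at(header.sub('t=', '').to_f / 1_000_000)
      # In the test above, the upload time showed up between the Apache start
      # and the first line of the Rack app, so measuring here should roughly
      # cover it; reading env['rack.input'] first would be a more cautious option.
      estimate = Time.now - apache_start
      @logger.info("estimated upload wait: #{'%.2f' % estimate}s for #{env['PATH_INFO']}")
    end
    @app.call(env)
  end
end

It would be wired up in config.ru with a "use UploadTimeLogger" line ahead of the run statement.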

Then, on my local machine, I ran Charles, the personal proxy, which starts a proxy on port 8888.

I made a large random data file with:

head -c 2000000 /dev/urandom > random_data

Then I sent that off to the server with curl:

% time curl -x localhost:8888 --data-urlencode something@random_data http://testserver-on-aws
Apache Start Time: t=1359399773413862
Start Time: 2013-01-28 19:02:55 +0000
End Time: 2013-01-28 19:03:00 +0000
.
real    0m8.229s
...

Converting the timestamp shows the Apache start time is about 3 seconds before the Rack start time. The simple server always waits 5 seconds, so together this accounts for the 8 seconds reported. Bingo!
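
For reference, the value after "t=" is microseconds since the Unix epoch, so dividing by 1,000,000 turns it into something Ruby's Time.at understands. A quick check in irb on the value above looks like this:

Time.at("t=1359399773413862".sub('t=', '').to_f / 1_000_000).utc
# => 2013-01-28 19:02:53 UTC (plus a fraction of a second), shortly before the Rack start time above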

I wasn’t convinced that the 3 seconds was actually the upload time. I thought perhaps it was some Apache processing time that happened after the upload. So I used the throttle option in Charles to slow down the upload. Doing this gave the expected result: the Apache start time was even earlier than before, and subtracting the Apache start time from the end time came very close to the total request time reported on the command line.

Notes

This server-side approach does not cover all the time a user spends waiting for an upload to complete, and I would guess there will be cases where it isn’t accurate. For example, a proxy or other network device might delay POST requests in some way, and this approach would not record that time.

Constructive chemistry funded by the National Science Foundation

One of the most effective pedagogies in science education is to challenge students to design and construct something that performs a function, solves a problem, or proves a hypothesis. Learning by design is a compelling way of engaging students in learning science deeply. Given the extensive incorporation of and emphasis on engineering design across disciplines in the Next Generation Science Standards, design-based learning will only grow more important in US science education.

The problem, however, is that many science concepts are related to things that are too small, too big, too complex, too expensive, or too dangerous to be built realistically in the classroom. (If you are a LEGO fan, you may argue that LEGO can be used to build anything, but most LEGO models simulate appearance, not function -- a LEGO bike probably cannot roll and LEGO molecules probably do not assemble themselves. To scientists and engineers, function is what matters.)

[Figure: Three approaches to using science models.]
A good solution is to have students design computer models that work in cyberspace. This virtualization allows students to take on any design challenge without regard to the expense, hazard, and scale of the challenge. If the computer modeling environment is supported by computational science derived from fundamental laws, it will have the predictive power that permits anyone to design and test any model that falls within the range governed by the laws. Software systems that provide user interfaces for designing, constructing, testing, and evaluating solutions iteratively can potentially become powerful learning systems as they create an abundance of opportunities to motivate students to learn and apply the pertinent science concepts actively. This is the vision of "Constructive Science" that I had dreamed about almost four years ago. This constructive approach opens up a much larger learning space that can result in deeper and broader learning--beyond simply observing and interacting with existing science simulations that were created to assist teaching and learning.

This dream got a shot in the arm today from a small grant awarded by the National Science Foundation. This TUES Type-1 grant will support a collaboration with Bowling Green State University and Dakota County Technical College to pilot test the idea of "Constructive Chemistry" at the college level. Chemistry is a most appropriate test bed for exploring this Constructive Science approach, as it is all about atoms and molecules that are simply too small to make any design-based learning option other than computational modeling viable. Decades of research in computational chemistry have developed the computational power needed to get the science right. We believe that using these computational methods should yield chemistry simulations that are sufficiently authentic for teaching and learning.

9 Highlights of 2012

It was a great year for the Concord Consortium!

  1. We won a Smaller Business Association of New England (SBANE) Innovation Award!
  2. Next-Generation Molecular Workbench interactives starred in the MIT MOOC (Massive Open Online Course) “Introduction to Solid State Chemistry” through a new collaboration with edX.
  3. Chad Dorsey described our vision of deeply digital education at the national Cyberlearning Research Summit.
  4. Six new projects were funded by the National Science Foundation: InquirySpace, Understanding Sub-Microscopic Interactions, High-Adventure Science: Earth’s Systems and Sustainability, GeniVille, Graph Literacy, and Sensing Science.
  5. The What Works Clearinghouse (WWC), a federally funded organization that scans educational research for high-quality studies, recognized our Technology Enhanced Elementary and Middle School Science (TEEMSS) software and materials.
  6. The Concord Consortium Collection was accessioned into the National Science Digital Library (NSDL).
  7. Our debut webcast featured Chad Dorsey, speaking about the scientific and engineering practices of the Next Generation Science Standards and our free, technology-based activities.
  8. We had two fabulous Google Summer of Code students.
  9. Our staff population increased by 10%, thanks to our new Software Portfolio and Project Manager Jen Goree, Web Developer Parker Morse, and Software Developer Tom Dyer, who just started (technically in 2013, but we’re so excited, we’ve included him on this 2012 list)!

2013 promises to be another great year! Follow us on Facebook, Twitter, and Google+, and subscribe to our mailing list to receive print or email news updates.

NSTA Reports features the Engineering Energy Efficiency Project

Link to NSTA news
NSTA Reports is the National Science Teachers Association’s newspaper published nine times a year as a free member service. In January, our Engineering Energy Efficiency Project was one of the three projects featured in a report about "meaningfully integrating science and engineering."

The Engineering Energy Efficiency Project is funded by the National Science Foundation through a research grant.