Crowdsourcing photosynth

I wrote about Photosynth when it first came out as a plugin back in August 2007. Then, I wasn’t sure about it, and felt that it was a technology looking for a reason. Since then, Microsoft have done a few very, very cool things with it. The most important of these is that anyone can now create Photosynths (essentially, think image stitching, but in all dimensions…).

All you have to do is go to the Photosynth site, download the app and chuck some photos at it. It munges away for a while and then uploads them all to the Photosynth site and gives you a link. It helps very much when you’re taking the photos to remember that you want them to be connected: they obviously have to be of the same scene, and I’ve found that standing reasonably still and taking photos all around you tends to work reasonably well.

A “good synth” (the software tells you how “synthy” your selection is once it’s uploaded it – presumably a measure of how well it has managed to stitch stuff together) is pretty satisfying, although there are some obviously winning features which are missing. The single most obvious one of these is that you can’t add links or hotspots to the synth you create. For museums particularly, I think this’ll be a problem.

I did a synth a while back of the Boxkite at Bristol Museum. It’s a nice object to use (or so I thought) – it’s up in the rafters and you can walk all around it, taking photos from 360 degrees. As it happens, the result is pretty good, but not great. I’m wondering whether the software might have confused one side of the object with the other. Either way, it gives an insight into how museums could start using Photosynth to enhance collections online. More interestingly, perhaps (given the fair size of the Photosynth plugin), it could be used in-gallery (maybe with a Microsoft Surface…) to let audiences really engage with objects. Have a poke around the Photosynth site to get a feel for other museum stuff.

Extending Photosynth a bit further is what this post is all about, though.

When I saw the astonishing CNN Photosynth from Obama’s Inauguration I started thinking about how else you could use it to enhance online experiences. I had what I thought at the time was an original idea (looking now I realise that Nick Poole had commented on my original post suggesting exactly this!) – how about using Flickr as a source for building a Photosynth?

Apollo 10 Command Module

Apollo 10 Command Module - thanks to Gaetan Lee

I needed an iconic object that would have been Creative Commons licensed on Flickr. Apollo 10 turned out to be a good one – I ran a search on Flickr and found 40 CC photos I could use, all taken in the Making the Modern World gallery of the Science Museum, my old stomping ground.

There’s no API I’m aware of for Photosynth yet. This is another missing trick – imagine if you could step straight from Flickr to a 3D synthed view of any search… – so for my experiment I had to download the entire set of search results. For this, I used a cunning app called Downloadr, which lets you automatically download all Flickr pics which match a certain search. Then it was just a matter of re-uploading the images via Photosynth.
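For anyone wanting to script the Flickr side of this rather than use Downloadr, here’s a rough Python sketch. It only builds the REST search call and filters results by licence – the API key is a placeholder, and the Creative Commons licence codes are my assumption, so check them against Flickr’s API documentation:

```python
from urllib.parse import urlencode

# Flickr's CC licence codes (assumed here to be 1-6; 0 is "all rights reserved")
CC_LICENSES = "1,2,3,4,5,6"

def flickr_search_url(api_key, text, per_page=50):
    """Build a flickr.photos.search REST call restricted to CC-licensed images."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,          # placeholder - you need your own key
        "text": text,
        "license": CC_LICENSES,
        "extras": "url_o,license",
        "format": "json",
        "nojsoncallback": 1,
        "per_page": per_page,
    }
    return "https://api.flickr.com/services/rest/?" + urlencode(params)

def cc_photos(results):
    """From a decoded search response, keep only photos carrying a CC licence."""
    allowed = set(CC_LICENSES.split(","))
    return [p for p in results.get("photos", {}).get("photo", [])
            if str(p.get("license")) in allowed]
```

You’d then download each photo’s `url_o` and throw the lot at the Photosynth app by hand, since – as above – there’s no Photosynth API to finish the job.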

The result is here. Given that this is entirely made up of images taken at completely different times and by different people, I think it works pretty well. The crowd sourcing element adds a lot to Photosynth, I think. It’s still a shame that it isn’t possible to add links or otherwise play with the resulting synth – I think it’d add a lot.

Let me know if you think of other objects that could be synthed in this way and I’ll give it a go…

Bootstrapping the NAW

What seems like a looong time ago I came up with an idea for “bootstrapping” the Non API Web (NAW), particularly around extracting un-structured content from (museum) collections pages.

The idea of scraping pages when there’s a lack of data access API isn’t new: Dapper launched a couple of years ago with a model for mapping and extracting from ‘ordinary’ html into a more programmatically useful format like RSS, JSON or XML. Before that there have been numerous projects that did the same (PiggyBank, Solvent, etc); Dapper is about the friendliest web2y interface so far, but it still fails IMHO in a number of ways.
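To give a feel for what this kind of template-driven extraction involves, here’s a minimal Python sketch. The field names and regex patterns are entirely hypothetical – in practice each site template would get its own set, defined once and reused for every page built from that template:

```python
import re

# Hypothetical "data shape" for one site template: one pattern per field.
TEMPLATE = {
    "title": re.compile(r'<h1 class="object-title">(.*?)</h1>', re.S),
    "date":  re.compile(r'<span class="date">(.*?)</span>', re.S),
    "maker": re.compile(r'<span class="maker">(.*?)</span>', re.S),
}

def scrape(html, template=TEMPLATE):
    """Extract a dict of named fields from a templated collection page."""
    record = {}
    for field, pattern in template.items():
        match = pattern.search(html)
        if match:
            record[field] = match.group(1).strip()
    return record
```

The point is that the template is data, not code – which is what makes it possible to crowdsource or pre-store the mappings rather than write a bespoke scraper per site.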

Of course, there’s always the alternative approach, which Frankie Roberto outlined in his paper at Museums and the Web this year: don’t worry about the technology; instead approach the institution for data via an FOI request…

The original prototype I developed was based around a bookmarklet: the idea was that a user would navigate to an object page (although any templated “collection” or “catalogue” page is essentially treated the same). If they wanted to “collect” the object on that page they’d click the bookmarklet, a script would look for data “shapes” against a pre-defined store, and then extract the data. Here are some screen grabs of the process (click for bigger):

Science Museum object page An object page on the Science Museum website
Bookmarklet pop-up User clicks on the bookmarklet and a popup tells them that this page has been “collected” before. Data is separated by the template and “structured”
Bookmarklet pop-up Here, the object hasn’t been collected but the tech spots that the template is the same, so knows how to deal with the “data shape”
Defining fields in the interface The interface, showing how the fields are defined
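The “has this page’s template been collected before?” check from the screen grabs could be sketched something like this – the URL patterns and template ids here are made up purely for illustration:

```python
import re

# Hypothetical store mapping URL patterns to known "data shapes" (templates).
TEMPLATE_STORE = {
    re.compile(r"sciencemuseum\.org\.uk/objects/"): "science-museum-object",
}

def find_template(url, store=TEMPLATE_STORE):
    """Return the id of a previously-defined template matching this URL,
    or None if the page layout hasn't been "collected" before."""
    for pattern, template_id in store.items():
        if pattern.search(url):
            return template_id
    return None
```

If a template is found, the bookmarklet can extract straight away; if not, the user is walked through defining the fields, and every future page on that template benefits.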

I got talking to Dan Zambonini a while ago and showed him this first-pass prototype and he got excited about the potential straight away. Since then we’ve met a couple of times and exchanged ideas about what to do with the system, which we code-named “”.

One of the ideas we pushed about early on was the concept of building web spidering into the system: instead of primarily having end-users as the “data triggers”, it should – we reasoned – be reasonably straightforward to define templates and then send a spider off to do the scraping instead.

The spider

Dan has taken that idea and run with it. He built a spider in PHP, gave it a set of rules for templates and link-navigation and set it going. A couple of days ago he sent me a link to the data he’s collected – at time of writing, over 44,000 museum objects from 7 museums.
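I don’t know the internals of Dan’s PHP spider, but the basic shape – a queue of URLs, a rule saying which links to follow, and a page limit – can be sketched in a few lines of Python. The `fetch` and `follow` callables here stand in for real HTTP fetching and the per-site navigation rules:

```python
from collections import deque
from urllib.parse import urljoin
import re

LINK_RE = re.compile(r'href="([^"]+)"')

def spider(start_url, fetch, follow, limit=100):
    """Breadth-first crawl. `fetch(url)` returns a page's HTML;
    `follow(url)` says whether a link matches the site's rules."""
    seen, queue, pages = set(), deque([start_url]), {}
    while queue and len(pages) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        html = fetch(url)
        pages[url] = html
        for link in LINK_RE.findall(html):
            link = urljoin(url, link)   # resolve relative links
            if follow(link) and link not in seen:
                queue.append(link)
    return pages
```

Pair each crawled page with a template like the ones above and you get exactly the sort of bulk extraction Dan has been running – no end-user “data triggers” required.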

Dan has put together a REST-like querying method for getting at this data. Queries are passed in via URL and constructed in the form attribute/value – the query can be as long as you like, allowing fine-grained data access.

Data is returned as XML – there isn’t a schema right now, but that can follow in further prototypes. Dan has done quite a lot of munging to normalise dates and locations and then squeezed results into a simplified Dublin Core format.
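As an illustration of the kind of munging involved, here’s a sketch of date normalisation – pulling a usable year out of the free-text date fields museums tend to use. The heuristics here are my guesses at the sort of thing needed, not Dan’s actual code:

```python
import re

def normalise_date(text):
    """Pull an approximate year out of free-text museum date fields
    like 'c. 1855', 'made 1901-1910', or '19th century'."""
    century = re.search(r'(\d{1,2})(?:st|nd|rd|th)\s+century', text, re.I)
    if century:
        # '19th century' -> 1800, the start of that century
        return (int(century.group(1)) - 1) * 100
    year = re.search(r'\b(1[0-9]{3}|20[0-9]{2})\b', text)
    return int(year.group(1)) if year else None
```

Squashing the normalised values into Dublin Core fields (`dc.date`, `dc.subject` and so on) then gives you a lowest-common-denominator record that works across all seven museums.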

Here’s an example query (click to see results – opens new window):

So this means “show me everything where location.made=Japan”

Getting more fine-grained:

Yes, you guessed it – this is “things where location.made=Japan and dc.subject=weapons or entertainment”
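The attribute/value query format is simple enough to sketch: split the URL path into pairs, treat commas as OR within a field, and AND across fields. Here’s a rough Python version (my reading of the format, not Dan’s actual code):

```python
def parse_query(path):
    """Turn '/location.made/Japan/dc.subject/weapons,entertainment'
    into a dict of attribute -> list of accepted values."""
    parts = [p for p in path.split("/") if p]
    if len(parts) % 2:
        raise ValueError("query must be attribute/value pairs")
    return {attr: value.split(",")
            for attr, value in zip(parts[::2], parts[1::2])}

def matches(record, query):
    """True when the record satisfies every attribute (OR within values)."""
    return all(record.get(attr) in values for attr, values in query.items())
```

Because the path can carry as many pairs as you like, the same scheme scales from a one-field filter to very fine-grained queries without changing the URL structure.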

Dan has done some lovely first-pass displays of ways in which this data could be used:

Also, any query can be appended with “/format/html” to show a simple html rendering of the request:

What does this all mean?

The exposing of museum data in “machine-useful” form is a topic about which, you’ll have noticed, I’m pretty passionate. It’s a hard call, though (and one I’m working on with a number of other museum enthusiasts): getting museums to understand the value of exposing data in this way.

The method is a lovely workaround for those who don’t have, can’t afford or don’t understand why machine-accessible object data is important. On the one hand, it’s a hack – screenscraping is by definition a “dirty” method for getting at data. We’d all much prefer it if there was a better way – preferably, that all museums everywhere exposed their data properly anyway. But the reality is very, very different. Most museums are still in the land of the NAW. I should also add that some (including the initial 7 museums spidered for the purposes of this prototype) have APIs that they haven’t exposed. The system can help those who have already done the work of digitising but haven’t exposed the data in a machine-readable format.

Now that we’ve got this kind of data returned, we can of course parse it and deliver back… pretty much anything, from mobile-formatted results to ecards to kiosks to… well, use your imagination…

What next?

I’m running another mashed museum day the day before the annual Museums Computer Group conference in Leicester, and this data will be made available to anyone who wants to use it to build applications, visualisations or whatever around museum objects. Dan has got a bunch of ideas about how to extend the application, as have I – but I guess the main thing is that now it’s exposed, you can get into it and start playing!

How can I find out more?

We’re just in the process of putting together a simple series of wiki pages with some FAQs. Please use those, or the comments on this post, to get in touch. I look forward to hearing from you!

Launchball: we did it differently, and got it right…

Yesterday there was a flurry of excitement on Twitter (a “flutter of tweets”?) as the Science Museum’s Launchball was named SXSW “Best of Show”. This is an awesome achievement. SXSW is a hugely well-regarded conference, and for a museum to win not only the Games section but the coveted Best of Show as well is just enormous news.

I was still at the Science Museum as Head of Web for the first two thirds of the Launchball project, a fact of which I’m incredibly proud. As it happens, I got to do the fun bit without any of the hard work which always takes up the final push for the summit of any digital project…

Launchball is by pretty much any standards an enormous success. It received over 1.5 million page views in its first six months of life. After I posted it to Digg it took on its own virality, taking the Science Museum web server down because of the immense levels of traffic. It has a real following: users feel enthusiastic enough about it to create entire sites dedicated to possible solutions. You can see by the comments on this site, for example, how communities started to evolve around the game.

The success of Launchball is, in retrospect, fairly easy to explain. I thought it might be interesting to focus on the elements that I feel made up this success, given my (two-thirds) complete knowledge of the way that the project was driven. Fundamentally, these elements centred around freedom in the way the project was allowed to run, the flexibility and adaptability of the content and testing teams, the creativity of the people involved, and a certain element of luck that all the elements came together in the right order and at the right time.

We (the web team) pushed for – and were given – a huge amount of scope in helping to define the creative concept behind the game. This is relatively unusual in my experience – often the web team is seen by curators, education staff or other content teams as a service mechanism for delivering content. In this instance, I pushed very hard for recognition that – given the people involved – the web team’s creative input was absolutely key to delivering a successful experience for Launchpad Online.

Way back at the beginning of the project – looong before any creative agencies were involved – we sat down in a small group knowing only the budget and timescale, and braindumped what we thought we should aim to do. I’d had a tiny fledgling idea about a physics engine environment which encouraged users to play and “learn by stealth”. I’m a Heath Robinson fan (who isn’t?) and an inventor at heart, and the idea of having an environment in which you could play around with a bunch of gadgets, solve some fun problems and maybe learn something too was hugely compelling.

We started by running a brainstorm with the content team, and then honed this down with just the web team. We chose to have a defined output of 3 or 4 concepts. My brainstorms always begin with this: “We have infinite budget, infinite time. Now what do we want to do?”. I see little point in imposing constraints when what you’re trying to do is capture everything…

Out of this we came out with 4 key concepts – “Build it and share it”, “Ask an Explainer”, “Simulation” and “Real Experiments”. Each had social elements and interactivity, and each was designed to be built around a central Flash-based interactive.

We then presented these concepts to the various stakeholders – the content teams, sponsors and education experts – and used their feedback to focus and distill the final vision for the interactive and site. In the end we took the first concept but folded in popular elements from the others. The end result was the vision of a physics engine environment. We used mindmap software and Powerpoint to develop wireframes so we could convey our ideas to the stakeholders. Here’s a segment of one of the key documents:

[slideshare id=301581&doc=launchball-wireframes-1205233313984495-3&w=425]

You’ll notice that the whole concept of a “stage” upon which various gadgets are moved is already pretty well established – we still hadn’t taken on a creative or technical agency, wanting instead to be very sure we had a strong vision and brief to take to them at the right time.

One of the key things that we wanted to get right all the way through the process was to avoid a very obvious temptation: to try and re-create the Launchpad exhibits in a virtual medium. This would have been terribly easy, and completely wrong: Launchpad itself is a very physical experience, deliberately avoiding virtuality on-gallery. Instead, we wanted an environment which spoke to the essence of Launchpad: experimentation, fun, a strong element of self-guided learning, but without aping the physicality of the exhibits.

Once we had sign-off on the concept, we went through the briefing and pitching process, choosing the wonderful Preloaded as the design agency. Behind the scenes, we used Eduserv (more specifically, Stephen Pope, one of the best web developers I’ve ever had the pleasure to work with…) to hook in the Sitecore CMS to store levels and user preferences. Outside, of course, was a framework of project management, run by “I’m just good at nagging” Jane Audas.

Preloaded did an astonishing job with the concept, taking it from paper-based design and really running with it to make it something with enormous class and style. The addition of ambient washes of music came from nowhere, for example, and really adds hugely to the experience.

Round about this time I left the museum, so missed out on – as I say – the inevitable last minute tweaks, irritations, budget issues and timescales that always lurk around any project. From a distance, it all looked smooth, and maybe that’s all that counts 🙂

Either way, I’d just like to say a massive well done to everyone involved. I think Launchball really sets the bar (really, really high…) not just for museum interactive exhibits but for online gameplay as a whole. It’s just absolutely great that the world seems to have recognised this as well.