Photosynth – the emphasis is wrong

Microsoft have released another demo for Photosynth, this time of space shuttle Endeavour on the launchpad.

To date I’ve *almost* loved this technology – the way the software automatically patches together bits of images is ridiculously cool. The end result is very slick, but it feels as though the presentation matters more than the content, which makes for a slightly empty experience.

[Photosynth screengrab]

When I was playing with it earlier, I realised why. Although they’re positioned as the raison d’être, the images aren’t actually the exciting thing about this demo. The exciting thing is when you click the “fly around” icon and see the 3D markers that have been generated automatically from the 2D pictures. If the software could go a little further and generate wireframes from the spatial and colour information in the pictures, an entire browseable 3D view could be built up. Instead of just flying around the shuttle, you’d be able to walk straight into it, under it or fly over it.
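
To make the idea a bit more concrete, here is a minimal sketch of the general structure-from-motion recipe that produces markers like those: match features between two overlapping photos, recover the relative camera pose, and triangulate the matches into 3D points. It assumes OpenCV and NumPy, the filenames and camera matrix are invented for illustration, and it is emphatically not Photosynth’s actual pipeline.

```python
# Sketch only: a generic two-view structure-from-motion pass, not Photosynth's
# pipeline. Filenames and the intrinsic matrix K are made-up assumptions.
import cv2
import numpy as np

img1 = cv2.imread("shuttle_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("shuttle_2.jpg", cv2.IMREAD_GRAYSCALE)

# Find distinctive features in each photo and match them between the two
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

pts1 = np.array([kp1[m.queryIdx].pt for m in good], dtype=np.float64)
pts2 = np.array([kp2[m.trainIdx].pt for m in good], dtype=np.float64)

# Guessed pinhole camera: focal length ~1000 px, principal point at image centre
h, w = img1.shape
K = np.array([[1000.0, 0, w / 2], [0, 1000.0, h / 2], [0, 0, 1]])

# Recover the relative camera pose, then triangulate the matches into 3D points:
# essentially the markers you see when you hit "fly around"
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud, arbitrary scale
print(cloud.shape)
```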

Now take this and extend the concept again – imagine taking that generated 3D rendering and building it out into existing virtual worlds, for example Second Life or Multiverse.

Then you’d be able to take a bunch of pictures of the real world and have them rendered into a virtual world pretty much automatically. Result: the entire real world browseable online…
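
The hand-off to a virtual world needn’t be exotic either. Below is a hypothetical sketch that dumps a point cloud like the one from the example above into an ASCII PLY file, a plain point-cloud format that most 3D tools can read; the `cloud` array and the filename are assumptions carried over from that sketch.

```python
# Hypothetical follow-on: write an N x 3 array of XYZ points (e.g. `cloud` from
# the sketch above) out as an ASCII PLY file for other 3D tools to import.
def write_ply(path, cloud):
    """Save an N x 3 sequence of XYZ points as an ASCII PLY point cloud."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(cloud)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in cloud:
            f.write(f"{x} {y} {z}\n")

# write_ply("shuttle_cloud.ply", cloud)
```

Turning that into a textured, walkable mesh is the genuinely hard part, but at least the raw geometry would be in a portable form.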

4 thoughts on “Photosynth – the emphasis is wrong”

  1. Hi Mike,

    I kind of know what you mean with this one. Photosynth has the feel of tech that is looking for a home.

    Scraping flickr for content, and the associated implication that every node in the potentially infinite network is enriched by the associative semantic metadata of every other relevant node, is just brilliant. But at the same time, it is a sad truth that unless and until it is embedded in a no-brainer app, it is likely to remain an ephemeral piece of really cool technology.

    The bit of Blaise Aguera y Arcas’s presentation to TED2007 that really got the hairs on the back of my neck standing up was the ‘infinite zoom’ functionality he demonstrated by embedding microdot-sized technical specs into a car ad. I would *love* to be able to embed museum object metadata right into the object image and then disclose it to users in this really cool way.

  2. I am an engineer, and most of the work I do is retrofitting new stuff to old sites. I usually go out and take a bunch of pictures, so when we’re back in the office talking about the job, we can refer to exactly what’s on site. However, unless you’ve been to site yourself, the photos are just a big jumble of disconnected images. It’s also impossible to extract distances or geometries from the photos. If we need to build a 3D model for new construction — which happens quite frequently — then that represents hours of measuring and CAD drafting.

    So I’m in complete agreement that Photosynth is focusing on the wrong result!

    It would be incredibly powerful to be able to build a 3D model just by taking a bunch of photos. Building a full color solid model isn’t even necessary (although it would be very cool); even a rough point cloud that a user could tweak to map out features and measure distances would be a killer app for the type of work I do.

  3. Greg, thanks for the comment – I agree, and (knowing not much about it…) I don’t see that this would be far from feasible, given today’s processing power.

    The public beta of Photosynth at http://photosynth.net gives a better idea of what is possible, and I guess we’ll see that space developing a fair bit over time.

