Scraping, scripting, hacking

I just finished my talk at Mashed Library 2009 – an event for librarians wanting to mash and mix their data. Judging by the backchannel, my talk was probably a bit overwhelming, so I thought I’d bang out a quick blog post to help those I managed to confuse.

My talk was entitled “Scraping, Scripting and Hacking your way to API-less data”, and was intended to give a high-level overview of some of the techniques that can be used to “get at data” on the web when the “nice” options of feeds and APIs aren’t available to you.

The context of the talk was this: almost everything we’re talking about with regard to mashups, visualisations and so on relies on data being available to us. At the cutting edge of Web 2.0 apps, everything has an API, a feed, a developer community. In the world of museums, libraries and government, this just isn’t the case. Data is usually held on-page as HTML (XHTML if we’re lucky), and programmatic access is nowhere to be found. If we want to use that data, we need to find other ways to get at it.
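To make that concrete, here is a minimal sketch of scraping data held only as on-page HTML, using nothing but the Python standard library. The HTML snippet, the field names, and the table layout are all invented for illustration – a real catalogue page would differ, and in practice you would fetch the page over HTTP first.

```python
# Sketch: extracting tabular data from raw HTML when no API or feed exists.
# The sample markup and field names below are hypothetical.
from html.parser import HTMLParser

SAMPLE = """
<table>
  <tr><td>978-0-14-303943-3</td><td>The Road</td></tr>
  <tr><td>978-0-452-28423-4</td><td>1984</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collect the text of each <td> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])       # start a new row
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        # Only keep text that appears inside a cell, ignoring whitespace.
        if self._in_cell and data.strip():
            self.rows[-1].append(data.strip())

scraper = TableScraper()
scraper.feed(SAMPLE)
records = [dict(zip(("isbn", "title"), row)) for row in scraper.rows]
```

This is the fragile, last-resort end of the spectrum the talk covers: the scraper breaks the moment the page layout changes, which is exactly why feeds and APIs are preferable when they exist.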

My slides are here:

[slideshare id=1690990&doc=scrapingscriptinghacking-090707060418-phpapp02]

A few people asked that I provide the URLs I mentioned together with a bit of context. Many of the slides above have links to examples, but here’s a simple list for those who’d prefer that:

Phew. Now I can see why it was slightly overwhelming 🙂

5 thoughts on “Scraping, scripting, hacking”

  1. Talk made perfect sense to me. 🙂

    Useful stuff. Yahoo seems to come out with more and more good things (never quite sure what their business plan is). YQL looked useful.

    And I agree about regex: I find it nearly impossible to write my own – it’s hard to get my head into the right way of approaching a problem.

    With all mashups I think there is a split between the informal one-off data extracting/mashing and real Uni/Library services. To be of benefit to library users, mashups need to be part of formal services (boring, I know), and how we transform informal stuff with (e.g.) Yahoo Pipes into services which users can use is something I’m not always clear on.

    Chris

  2. Wasn’t there, but the general idea seemed clear to me from the slides. Some interesting ideas there – wasn’t aware of YQL, the Google doc trick, etc.

  3. Wonderful ideas – nice article. I have always felt that a web site needs to decide: is it serving data, user functionality, or both? Oftentimes, the designers and owners of web sites confuse the two.

    For example, a site serving stock market data will have tools that will show trends – but only the trends they think are important. If a user wants to check his own custom trend, he/she is out of luck.

    I always recommend biterscripting ( http://www.biterscripting.com ) for mining data from web sites. That way one gets the raw data and can process it as one sees fit.

    Libraries are servers of data (raw information). In terms of whether (or how) they should develop APIs, it needs to be done rather carefully – else they will end up like these stock market web sites, where, to get complete information on a particular stock in the format one desires, one has to go to several web sites.

    Sen

Comments are closed.