“I don’t really see the difference – I can query a SQL server to find header values and keys.”

OK, well assuming you (a) have a connection to the SQL endpoint (unusual for the ordinary individual wanting to get public data) and (b) are familiar with the particular brand of SQL syntax they are using, you could get a list of tables and fields. For one site.
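To put that concretely, the discovery step alone might look something like the sketch below. It assumes the server exposes the standard information_schema views, which not every product does, and even then the details differ from vendor to vendor; the schema name is a placeholder.

    -- Sketch: find out what tables and fields an unfamiliar database holds.
    -- Assumes the server provides the standard information_schema views;
    -- the schema name 'public' is a placeholder and varies by product.
    SELECT table_name, column_name, data_type
    FROM   information_schema.columns
    WHERE  table_schema = 'public'
    ORDER  BY table_name, ordinal_position;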

So if (for instance) every local authority were to publish its waste collection timetables via SQL, you would need a connection to each one and would have to work out the particular fields in each case. Then you would have to write a separate query for each one to return the data in the format you require and store that somewhere. Then you would have to run a further query over the stored data.

On the other hand, if they all had a SPARQL endpoint you could find out how many bins are collected on a Wednesday with one query.
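Purely as an illustration (the waste vocabulary here is invented, it assumes the authorities all describe their data with the same shared terms reachable from one endpoint, and the aggregate needs SPARQL 1.1 support), that query might look something like:

    # Sketch: Wednesday bin collections across every authority.
    # The ex: vocabulary is made up for illustration, not a real published schema.
    PREFIX ex: <http://example.org/waste#>

    SELECT ?authority (COUNT(?collection) AS ?wednesdayCollections)
    WHERE {
      ?collection ex:collectedBy   ?authority ;
                  ex:collectionDay "Wednesday" .
    }
    GROUP BY ?authority

The point is not the syntax but that the same query works everywhere the shared vocabulary is used, with no per-site reverse engineering.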

“I tried a ‘DESCRIBE’ query on the endpoint, in case it held the descriptive data on this URI, only for it to result in an empty result set.”

The truth is that at the moment it’s all a bit ropey, mainly because nobody looks at it. The momentum is growing, maybe not quickly enough, but remember that the ordinary web was the preserve of academia for quite a few years before “ordinary” people got the whole idea of it.
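For reference, the sort of thing being tried above looks like the first query below (the URI is a placeholder, not the one in question); when a DESCRIBE comes back empty it is often worth falling back to a plain SELECT over the same subject, as endpoint support for DESCRIBE is patchy.

    # Two separate queries; the URI is a placeholder only.

    # What DESCRIBE asks for: everything the store knows about the resource.
    DESCRIBE <http://example.org/id/some-resource>

    # Fallback: ask for the triples explicitly if DESCRIBE returns nothing.
    SELECT ?property ?value
    WHERE { <http://example.org/id/some-resource> ?property ?value }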

My own view is that once someone gets a real handle on what an RDF browser could do, and manages to make a good one, we’ll see people not only using such tools but demanding better access to data.