@Richard Watson

“This could be true, but this is the whole point of ontology. As long as someone uses an ontology correctly, or devises their own in a meaningful way, then the SPARQL endpoint is documented, in as much as it can be asked to describe all its own concepts.”

I don’t really see the difference – I can just as easily query a SQL server to find its column headers and keys.
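To make that concrete, here is roughly the kind of introspection I mean, sketched in Python with the built-in sqlite3 module (the database file and table name are made up for illustration):

    import sqlite3

    conn = sqlite3.connect("example.db")   # hypothetical database file
    cur = conn.cursor()

    # List the tables – the "types of things" the database knows about
    cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    print(cur.fetchall())

    # List the column names and types for one table – the "headers and keys"
    cur.execute("PRAGMA table_info(count_points)")   # hypothetical table name
    for column in cur.fetchall():
        print(column[1], column[2])

So a relational store can also be asked to describe its own structure; that alone isn’t what sets an ontology apart.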

The one advantage an ontology has over ordinary structured data is that it is typically published, dereferenceable, and has documentation associated with its URI outlining its usage, underlying models and structure.

Take for example data.gov.uk:

Listing all the types of things in the endpoints, I find generic ontologies for data (SCOVO), geo and so on, but the ontologies I am most interested in (the ones peculiar to data.gov.uk itself) look like ordinary HTTP ontology URIs, yet aren’t dereferenceable:

http://transport.data.gov.uk/0/ontology/traffic#CountPoint
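(For reference, the type listing above came from a query along these lines – a sketch in Python with SPARQLWrapper, where the endpoint URL is a stand-in rather than the exact one I used:)

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://transport.data.gov.uk/sparql")  # illustrative endpoint URL
    endpoint.setQuery("""
        SELECT DISTINCT ?type
        WHERE { ?s a ?type }
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["type"]["value"])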

What’s a ‘Count Point’? How is it gathered? What does it constitute?

I tried a DESCRIBE query on the endpoint, in case it held the descriptive data for this URI, only for it to return an empty result set.
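That attempt was essentially the following (again a sketch in Python with SPARQLWrapper, assuming rdflib is installed and using the same stand-in endpoint URL):

    from SPARQLWrapper import SPARQLWrapper

    endpoint = SPARQLWrapper("http://transport.data.gov.uk/sparql")  # stand-in endpoint URL
    endpoint.setQuery(
        "DESCRIBE <http://transport.data.gov.uk/0/ontology/traffic#CountPoint>"
    )
    # With the default XML return format, convert() hands back an rdflib Graph
    graph = endpoint.query().convert()
    print(len(graph))  # 0 – no triples describing the term come back

So the endpoint cannot, in practice, be asked to describe this concept.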

If you have seen the work that I do, or the things I advocate, you will know that I have already been won over by the arguments for Linked Data as a mechanism of publication.

However, it’s still very much a Curate’s Egg – the great cases of LoD publishing (the BBC, the interconnected sets of medical gene/drug data, and so on) are somewhat spoiled by the publishing attempts that don’t provide a good basis or springboard for those who are new to the area.