Planet RDF

It's triples all the way down

July 29

Sebastian Trueg: YouID Identity Claim

di:sha1;eCt+TB1Pj/vgY05nqB48sd1seqo=?http=trueg.selfhost.eu%3A8899


Posted at 13:54

Libby Miller: HP Cooltown notes

Last week, a tweet from

Posted at 12:42

Redlink: Linked Data track at the ApacheCon Europe 2014

ApacheCon brings together the open source community to learn about and collaborate on the technologies and projects driving the future of open source, big data and cloud computing. Apache projects have been, and continue to be, hugely influential in software innovation and development across a plethora of categories, from content, databases and servers to big data, cloud, mobile and virtual machines.

The developers, programmers, committers and users driving this innovation and utilising these tools will meet in Budapest on November 17-19, for collaboration, education and community building.

In recent years Linked Data has become an important topic in the Apache Software Foundation, with projects such as Jena, Marmotta, Stanbol, Clerezza and Any23. Redlink supports the event by co-chairing a dedicated track about Linked Data. The track aims to be a place where all these projects can meet to explore synergies across the different projects and developers. It is also particularly interesting for us to connect with other data-intensive projects to discuss their approaches to Semantic Web technologies.

Last week the Apache Software Foundation officially announced the schedule. The programme has many interesting technical talks. Here is where you can meet some Redlinkers presenting our technology:

Looking forward to meeting you in Budapest this coming November!

Posted at 07:57

July 24

Redlink: Redlink now a Supporter Member of the Open Data Institute (ODI)

Press release, Salzburg, Austria – July 24, 2014

logo redlink

Redlink is now a supporter member of the Open Data Institute. As an innovative startup in the enterprise linked data sector, Redlink brings the value of semantic processing and linked data services, built on free and open-source software and delivered as a platform-as-a-service, to a wider audience of developers, public institutions and IT integrators. For Redlink, this membership represents a major step in promoting an open data culture in Europe and an integral part of our ongoing work as technology enablers.

Founded by Sir Tim Berners-Lee and Professor Sir Nigel Shadbolt, and opened in December 2012, the ODI is an independent, non-profit, non-partisan company limited by guarantee. With a 5,000 sq ft convening space in the heart of London’s thriving Shoreditch area, and a global remit, the ODI works to catalyse an open data culture to create economic, environmental, and social value. The ODI helps unlock supply, generates demand, and creates and disseminates knowledge to address local and global issues.

Gavin Starks, ODI CEO: “In joining the ODI, Redlink is showing leadership in its sector, recognising the social, economic and environmental potential of open data. More than 70 pioneering member companies have now joined the ODI to deliver new products and services and create value for business, and society”.

Redlink was born in March 2013 from the core committers of Apache Marmotta and Apache Stanbol to democratise semantic technologies and to help organisations take full advantage of linked data made publicly available by governments for structuring any form of unstructured data.

John Pereira, Redlink CEO: “We have come a long way in understanding the importance of freeing data from legacy and proprietary formats; the results are clear with the many initiatives and available open datasets. Now we need to demonstrate the business value. At Redlink our contribution is to simplify the use of semantic processing and linked data technology to power the new generation of exciting linked data driven applications.”

About Redlink

Redlink GmbH  (http://redlink.co), headquartered in Austria, helps enterprises make sense of their data by semantically enriching, linking and searching the vast amounts of unstructured data. Redlink is the company behind the open source projects Apache Stanbol and Apache Marmotta, and is committed to the wide adoption of open source semantic technologies to support a broad set of mission-critical and real-time production uses.

Media Contacts

John Pereira
Redlink GmbH
+43 660 277 1228
john.pereira@redlink.co

Andrea Volpini
Redlink GmbH
+39 348 761 7242
andrea.volpini@redlink.co

Emma Thwaites
The ODI Communications Team
emma@theodi.org

Posted at 11:59

AKSW Group - University of Leipzig: AKSW Colloquium “Knowledge Extraction and Presentation” on Monday, July 28, 3.00 p.m. in Room P702

Knowledge Extraction and Presentation

On Monday, July 28, in room P702 at 3.00 p.m., Edgard Marx proposes a question answering system. He has a computer science background (BSc and MSc in Computer Science, PUC-Rio) and is a member of AKSW (Agile Knowledge Engineering and Semantic Web). Edgard has been engaging in Semantic Web technology research since 2010 and is mainly working on evangelization and the development of conversion and mapping tools.

Abstract

The use of Semantic Web technologies has led to an increasing amount of structured data being published on the Web. Despite advances in question answering systems, retrieving and presenting the desired information from RDF structured sources is still substantially challenging. In this talk we will present our proposal and working draft to address these challenges.

About the AKSW Colloquium

This event is part of a series of events about Semantic Web technology. Please see http://wiki.aksw.org/Colloquium for further information about previous and future events. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.

Posted at 09:55

Norm Walsh: DocBook and HTML 5(.x)

HTML 5(.x) today reminds me a lot of DocBook 1(.x) twenty years ago. That's neither criticism nor compliment, merely observation.

Posted at 03:34

July 23

Frederick Giasson: Big Structures: Where the Semantic Web Meets Artificial Intelligence

Mike Bergman just published the second part of his series of blog posts that summarize the evolution of the Semantic Web in the last decade, and how our experience of the last 7 years of research in that field has led to these observations.

The second part of that series is: Big Structure: At The Nexus of Knowledge Bases, the Semantic Web and Artificial Intelligence.

He continues to outline some issues with the Semantic Web, but more importantly how it fits into a much broader ecosystem, namely KBAI (Knowledge-Based AI). He explains the difference between data integration and data interoperability, and how these problems could benefit from leveraging a subset of the Artificial Intelligence domain related to data interoperability:


ai_data_interoperability
These two blog posts set the foundation and the direction for where Structured Dynamics is heading in the coming years: where we will focus our research projects, and how we will help our clients with their data integration and interoperability issues.

We welcome hearing from you!

Posted at 17:48

July 22

Dublin Core Metadata Initiative: Paul Walk of EDINA appointed Independent Member of the DCMI Governing Board

2014-07-22, DCMI is pleased to announce the appointment of Paul Walk to DCMI's Governing Board for a three-year term. In 2013, Paul joined EDINA, University of Edinburgh, as Head of Technology Strategy and Planning, an exciting role which has placed him back into a service development and delivery environment -- albeit one operating at a national as well as an institutional scale. Paul has been an active participant in DCMI work for a number of years, including during his seven years at UKOLN at the University of Bath, where he served the Joint Information Systems Committee (JISC) and then its successor, Jisc, as a strategic technical advisor, with a focus on information standards development, resource discovery and digital infrastructure. Paul brings a strong set of management and process skills that will be essential as we move forward with our new organizational structure, and will be a welcome asset to the Board.

Posted at 23:59

Dublin Core Metadata Initiative: DC-2014 post-conference workshop: "Training the Trainers of Linked Data"

2014-07-22, DCMI and the Texas Digital Library invite you to register for this day-long, hands-on "Training the Trainers of Linked Data" post-conference workshop to be held on Saturday, October 11, 2014. Linked Data has gained momentum, and practitioners are eager to use its principles to derive more value from metadata. Available handbooks and training materials focus on an audience with a computer science background. However, people with a non-technical education find it hard to understand what Linked Data can mean for them. This full-day, hands-on workshop will provide an overview of methods and case studies from the handbook "Linked Data for Libraries, Archives and Museums" (2014, ALA/Neal-Schuman). Using freely available tools and data, this workshop will teach you how to clean, reconcile, enrich, and publish your metadata. Participants will learn about concepts, methods, and tools that they can use on their own, or to teach others within their own institutions, to get more value from metadata. Space for this special event is limited, so register now for DC-2014 at http://purl.org/dcevents/dc-2014/register.

Posted at 23:59

Dublin Core Metadata Initiative: DC-2014 special session: "Fonds & Bonds" archival metadata workshop

2014-07-22, DCMI, the Texas Digital Library and the Harry Ransom Center are pleased to announce this DC-2014 special pre-conference event that will bring together experts and practitioners to explore archival description in the cultural heritage descriptive landscape and the emergence of authority files/identity description as an opportunity for cultural heritage cross-community collaboration. In addition, this day-long workshop to be held at the Harry Ransom Center on the University of Texas at Austin campus will provide attendees with the latest information on key metadata editing and management tools used by the working archivist. You will not want to miss "Fonds & Bonds: Archival Metadata, Tools, and Identity Management." Space for this special event is limited, so register now for DC-2014 at http://purl.org/dcevents/dc-2014/register.

Posted at 23:59

July 21

Frederick Giasson: New UMBEL Concept Noun Tagger Web Service & Other Improvements

Last week, we released the UMBEL Concept Plain Tagger web service endpoint. Today we are releasing the UMBEL Concept Noun Tagger.

This noun tagger uses UMBEL reference concepts to tag an input text, and is based on the plain tagger, except as noted below.

The noun tagger uses the plain labels of the reference concepts as matches against the nouns of the input text. With this tagger, no manipulations are performed on the reference concept labels or on the input text, unless you specify the use of the stemmer. Also, there is NO disambiguation performed by the tagger if multiple concepts are tagged for a given keyword.

Intended Users

This tool is intended for those who want to focus on UMBEL and do not care about more complicated matches. The output of the tagger can be used as-is, but it is intended to be the input to more sophisticated reference concept matching and disambiguation methods. Expect additional tagging methods to follow.

Stemming Option

This web service endpoint does have a stemming option. If the option is specified, then the input text will be stemmed and the matches will be made against an index where all the preferred and alternative labels have been stemmed as well. Then, once the matches occur, the tagger recomposes the text so that the unstemmed versions of the input text and the tagged reference concepts are presented to the user.

Depending on the use case, users may prefer to turn the stemming option on or off for this web service endpoint.

The Web Service Endpoint

The web service endpoint is freely available. It can return its resultset in JSON, Clojure code or EDN (Extensible Data Notation).

This endpoint will return a list of matches on the preferred and alternative labels of the UMBEL reference concepts that match the noun tokens of an input text. It will also return the number of matches and the position of the tokens that match the concepts.

The Online Tool

We also provide an online tagging tool that people can use to experience interacting with the web service.

The results are presented in two sections depending on whether the preferred or alternative label(s) were matched. Multiple matches, either by concept or label type, are coded by color. Source words with matches and multiple source occurrences are ranked first; thereafter, all source words are presented alphabetically.

The tagged concepts can be clicked to have access to their full description.

umbel_tagger_noun

Other UMBEL Website Improvements

We also did some more improvements to the UMBEL website.

Search Autocompletion Mode

First, we created a new autocomplete option on the UMBEL Search web service endpoint. Often people know the concept they want to look at, but they don’t want to go to a search results page to select that concept. What they want is to get concept suggestions instantly based on the letters they are typing in a search box.

Such a feature requires a special kind of search which we call an “autocompletion search”. We added that special mode to the existing UMBEL search web service endpoint. Such a search query takes about 30ms to process. Most of that time is due to the latency of the network, since the actual search function takes about 0.5 milliseconds to complete.

To use that new mode, you only have to append /autocomplete to the base search web service endpoint URL.
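
For illustration only, here is how such a call might look from Clojure using clj-http. Both the base endpoint URL and the name of the query parameter below are assumptions, not documented values; check the UMBEL web service documentation for the real ones.

(require '[clj-http.client :as http])

;; sketch only: the base endpoint and the "query" parameter name are assumptions
(def search-base "http://umbel.org/ws/search")

(defn autocomplete-concepts
  "Return autocompletion suggestions for a partial concept label (sketch)."
  [prefix]
  (:body (http/get (str search-base "/autocomplete")
                   {:query-params {"query" prefix}
                    :accept :json})))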

Search Autocompletion Widget

Now that we have this new autocomplete mode for the Search endpoint, we also leveraged it to add autocompletion behavior on the top navigation search box on the UMBEL website.

Now, when you start typing characters in the top search box, you will get a list of possible reference concept matches based on the preferred labels of the concepts. If you select one of them, you will be redirected to its description page.

concept_autocomplete

Tagged Concepts Within Concept Descriptions

Finally, we improved the quality of the concept description reading experience by linking concepts that were mentioned in the descriptions to their respective concept pages. You will now see hyperlinks in the concept descriptions that link to other concepts.

linked_concepts

Posted at 12:34

July 20

Bob DuCharme: When did linking begin?

Pointing somewhere with a dereferenceable address, in the twelfth (or maybe fifth) century.

Posted at 14:40

July 17

Ebiquity research group UMBC: Preprint: Interpreting Medical Tables as Linked Data to Generate Meta-Analysis Reports

clinicalTable3500

Varish Mulwad, Tim Finin and Anupam Joshi, Interpreting Medical Tables as Linked Data to Generate Meta-Analysis Reports, 15th IEEE Int. Conf. on Information Reuse and Integration, Aug 2014.

Evidence-based medicine is the application of current medical evidence to patient care and typically uses quantitative data from research studies. It is increasingly driven by data on the efficacy of drug dosages and the correlations between various medical factors that are assembled and integrated through meta–analyses (i.e., systematic reviews) of data in tables from publications and clinical trial studies. We describe an important component of a system to automatically produce evidence reports that performs two key functions: (i) understanding the meaning of data in medical tables and (ii) identifying and retrieving relevant tables given an input query. We present modifications to our existing framework for inferring the semantics of tables and an ontology developed to model and represent medical tables in RDF. Representing medical tables as RDF makes it easier to automatically extract, integrate and reuse data from multiple studies, which is essential for generating meta-analysis reports. We show how relevant tables can be identified by querying over their RDF representations and describe two evaluation experiments: one on mapping medical tables to linked data and another on identifying tables relevant to a retrieval query.

Posted at 10:38

July 16

Tetherless World Constellation group RPI: Notes on public talks

Massimo and I worked together on two posters about automatic provenance capturing for research publications and we won the ESIP FUNding Friday award. What remains unforgettable to me, however, is the great lesson I learnt from giving the two-minute pitch in front of the ESIP folks.

During the two-minute talk, I just could not help staring at the two posters we had printed and made the day before and that morning. Now I know the reason — it’s because I had only practiced my speech with one of the posters displayed on my laptop. For the other poster, I had no chance to practice talking about it at all. I became dependent on the presence of the posters in front of me and could not give the talk to the people instead of to the posters.

Possible solutions to make my eyes move away from the posters when talking? The best I thought of is to get REALLY familiar with the topic I’m going to present — at least so familiar that I don’t need to look at any auxiliary aid such as a poster to remind myself what to say, and better still if I can spare some attention for the audience — to receive their feedback and adjust accordingly in real time. The need to ignore the audience for a while to concentrate on “what should I say here?” indicates that I’m not familiar enough with the topic.

In addition to the content, presenters also need to get familiar with the way of presenting the content. This could include scrutinizing the practice talk sentence by sentence to make sure “I said what I meant and I meant what I said”. Not until such clarity and confidence are reached can one start thinking about all the fancy stuff like speaking pace, volume variation and eye contact with the audience. Well, those are fancy to me, not necessarily to good speakers.

So there is really a lot to work on for a public talk, especially if it is the first time the presenter talks about the idea. There is so much work that it cannot all be done the night before the talk. We need to work on the familiarity, clarity and confidence of our ideas on a daily basis. It helps to write down what we mean and talk about it often.


Posted at 17:35

July 15

Ebiquity research group UMBC: :BaseKB offered as a better Freebase version

:BaseKB

In The trouble with DBpedia, Paul Houle talks about the problems he sees in DBpedia, Freebase and Wikidata and offers up :BaseKB as a better “generic database” that models concepts that are in people’s shared consciousness.

:BaseKB is a purified version of Freebase which is compatible with industry-standard RDF tools. By removing hundreds of millions of duplicate, invalid, or unnecessary facts, :BaseKB users speed up their development cycles dramatically when compared to the source Freebase dumps.

:BaseKB is available for commercial and academic use under a CC-BY license. Weekly versions (:BaseKB Now) can be downloaded from Amazon S3 on a “requester-paid basis”, estimated at $3.00US per download. There are also BaseKB Gold releases which are periodic :BaseKB Now snapshots. These can be downloaded free via Bittorrent or purchased as a Blu Ray disc.

It looks like it’s worth checking out!

Posted at 19:49

Semantic Web Company (Austria): From Taxonomies over Ontologies to Knowledge Graphs

With the rise of linked data and the semantic web, concepts and terms like ‘ontology’, ‘vocabulary’, ‘thesaurus’ or ‘taxonomy’ are being picked up frequently by information managers, search engine specialists or data engineers to describe ‘knowledge models’ in general. In many cases the terms are used without any specific meaning which brings a lot of people to the basic question:

What are the differences between a taxonomy, a thesaurus, an ontology and a knowledge graph?

This article should shed some light on this discussion by guiding you through an example which starts from a taxonomy, introduces an ontology and finally exposes a knowledge graph (linked data graph) to be used as the basis for semantic applications.

1. Taxonomies and thesauri

Taxonomies and thesauri are closely related species of controlled vocabularies to describe relations between concepts and their labels including synonyms, most often in various languages. Such structures can be used as a basis for domain-specific entity extraction or text categorization services. Here is an example of a taxonomy created with PoolParty Thesaurus Server which is about the Apollo programme:

Apollo programme taxonomy

The nodes of a taxonomy represent various types of ‘things’ (so-called ‘resources’): the topmost level (orange) is the root node of the taxonomy, purple nodes are so-called ‘concept schemes’, followed by ‘top concepts’ (dark green) and ordinary ‘concepts’ (light green). In 2009 W3C introduced the Simple Knowledge Organization System (SKOS) as a standard for the creation and publication of taxonomies and thesauri. The SKOS ontology comprises only a few classes and properties. The most important types of resources are: Concept, ConceptScheme and Collection. Hierarchical relations between concepts are ‘broader’ and its inverse ‘narrower’. Thesauri most often also cover non-hierarchical relations between concepts, like the symmetric property ‘related’. Every concept has at least one ‘preferred label’ and can have numerous synonyms (‘alternative labels’). Whereas a taxonomy can be envisaged as a tree, thesauri most often have polyhierarchies: a concept can be the child node of more than one node. By including polyhierarchical and also non-hierarchical relations between concepts, a thesaurus should be envisaged as a network (graph) of nodes rather than as a simple tree.

2. Ontologies

Ontologies are perceived as being complex in contrast to the rather simple taxonomies and thesauri. Limitations of taxonomies and SKOS-based vocabularies in general become obvious as soon as one tries to describe a specific relation between two concepts: ‘Neil Armstrong’ is not only unspecifically ‘related’ to ‘Apollo 11’, he was ‘commander of’ this particular Apollo mission. Therefore we have to extend the SKOS ontology by two classes (‘Astronaut’ and ‘Mission’) and the property ‘commander of’, which is the inverse of ‘commanded by’.

Apollo ontology relations

The SKOS concept with the preferred label ‘Buzz Aldrin’ has to be classified as an ‘Astronaut’ in order to be described by specific relations and attributes like ‘is lunar module pilot of’ or ‘birthDate’. The introduction of additional ontologies in order to expand the expressivity of SKOS-based vocabularies follows the ‘pay-as-you-go’ strategy of the linked data community. The PoolParty knowledge modelling approach suggests starting with SKOS and then extending this simple knowledge model with other knowledge graphs, ontologies, annotated documents and legacy data. This paradigm could be memorized by a rule named ‘Start SKOS, grow big’.

3. Knowledge Graphs

Knowledge graphs are all around (e.g. DBpedia, Freebase, etc.). Based on W3C’s Semantic Web standards, such graphs can be used to further enrich your SKOS knowledge models. In combination with an ontology, specific knowledge about a certain resource can be obtained with a simple SPARQL query. As an example, the fact that Neil Armstrong was born on August 5th, 1930 can be retrieved from DBpedia. Watch this YouTube video, which demonstrates how ‘linked data harvesting’ works with PoolParty.
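
To make that lookup concrete, a single SPARQL query against DBpedia’s public endpoint is enough; the short sketch below uses Clojure with clj-http purely as a convenient HTTP client (any language would do), and dbo:birthDate and dbr:Neil_Armstrong are DBpedia’s own vocabulary and resource identifiers.

(require '[clj-http.client :as http])

;; ask DBpedia for Neil Armstrong's birth date
(def sparql-query
  "PREFIX dbo: <http://dbpedia.org/ontology/>
   PREFIX dbr: <http://dbpedia.org/resource/>
   SELECT ?birthDate WHERE { dbr:Neil_Armstrong dbo:birthDate ?birthDate }")

(:body (http/get "http://dbpedia.org/sparql"
                 {:query-params {"query" sparql-query
                                 "format" "application/sparql-results+json"}}))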

Knowledge graphs could be envisaged as a network of all kinds of things which are relevant to a specific domain or to an organization. They are not limited to abstract concepts and relations but can also contain instances of things like documents and datasets.

Why should I transform my content and data into a large knowledge graph?

The answer is simple: to be able to make complex queries over the entirety of all kinds of information. By breaking up the data silos, there is a high probability that query results become more valid.

With PoolParty Semantic Integrator, content and documents from SharePoint, Confluence, Drupal etc. can be transformed automatically to integrate them into enterprise knowledge graphs.

Taxonomies, thesauri, ontologies, linked data graphs including enterprise content and legacy data – all kinds of information could become part of an enterprise knowledge graph, which can be stored in a linked data warehouse. Based on technologies like Virtuoso, such data warehouses have the ability to serve as a complex question answering system with excellent performance and scalability.

4. Conclusion

In the early days of the semantic web, we’ve constantly discussed whether taxonomies, ontologies or linked data graphs will be part of the solution. Again and again, discussions like ‘Did the current data-driven world kill ontologies?’ are being led. My proposal is: try to combine all of them. Embrace every method which makes meaningful information out of data. Stop denouncing communities which don’t follow one or another aspect of the semantic web (e.g. reasoning or SKOS). Let’s put the pieces together – together!


Posted at 08:57

July 14

AKSW Group - University of Leipzig: [CfP] Semantic Web Journal: Special Issue on Question Answering over Linked Data

Dear all,
The Semantic Web Journal is launching a special issue on Question Answering over Linked Data, soliciting original papers that
* address the challenges involved in question answering over linked data,
* present resources and tools to support question answering over linked data, or
* describe question answering systems and applications.
Submission deadline is November 30th, 2014. For more detailed information please visit:
With kind regards,
Axel Ngonga and Christina Unger

Posted at 14:58

AKSW Group - University of Leipzig: New Version of FOX

Dear all,
We are very pleased to announce a new version of FOX [1]. Several improvements have been carried out:
(1) We have fixed minor issues in the code. In addition, we have updated several libraries.
(2) As a result, the FOX output parameters have changed minimally. An exact specification of the parameters with examples is available at the demo page. [2]
(3) Moreover, we now make bindings available for Java[3] and Python[4] to use FOX’s web service within your application.
Enjoy and cheers,
The FOX team

Posted at 14:57

Frederick Giasson: New UMBEL Concept Tagger Web Service

We just released a new UMBEL web service endpoint and online tool: the Concept Tagger Plain.

This plain tagger uses UMBEL reference concepts to tag an input text. The OBIE (Ontology-Based Information Extraction) method is used, driven by the UMBEL reference concept ontology. By plain we mean that the words (tokens) of the input text are matched to either the preferred labels or alternative labels of the reference concepts. The simple tagger is merely making string matches to the possible UMBEL reference concepts.

This tagger uses the plain labels of the reference concepts as matches against the input text. With this tagger, no manipulations are performed on the reference concept labels or on the input text (such as stemming). Also, there is NO disambiguation performed by the tagger if multiple concepts are tagged for a given keyword.

Intended Users

This tool is intended for those who want to focus on UMBEL and do not care about more complicated matches. The output of the tagger can be used as-is, but it is intended to be the initial input to more sophisticated reference concept matching and disambiguation methods. Expect additional tagging methods to follow (see conclusion).

The Web Service Endpoint

The web service endpoint is freely available. It can return its resultset in JSON, Clojure code or EDN (Extensible Data Notation).

This endpoint will return a list of matches on the preferred and alternative labels of the UMBEL reference concepts that match the tokens of an input text. It will also return the number of matches and the position of the tokens that match the concepts.

The Online Tool

We also provide an online tagging tool that people can use to experience interacting with the web service.

The results are presented in two sections depending on whether the preferred or alternative label(s) were matched. Multiple matches, either by concept or label type, are coded by color. Source words with matches and multiple source occurrences are ranked first; thereafter, all source words are presented alphabetically.

The tagged concepts can be clicked to have access to their full description.

reference_concept_tagger_ui

EDN and ClojureScript

An interesting thing about this user interface is that it has been implemented in ClojureScript and the data serialization exchanged between this user interface and the tagger web service endpoint is in EDN. What is interesting about that is that when the UI receives the resultset from the endpoint, it only has to evaluate the EDN code using the ClojureScript reader (cljs.reader/read-string) to consider the output of the web service endpoint as native data to the application.

No parsing of a non-native data format is necessary, which makes the UI code simpler and the data manipulation much more natural for the developer, since no external API is needed.
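
A minimal sketch of that round-trip (the EDN shown is illustrative, not the endpoint’s actual response format):

(ns tagger-ui.sketch
  (:require [cljs.reader :as reader]))

;; the endpoint answers with an EDN string; reading it yields native
;; Clojure data structures, with no JSON parsing or mapping layer
(def edn-response "{:matches [{:concept \"Person\" :position 4}]}")

(def result (reader/read-string edn-response))

(get-in result [:matches 0 :concept])  ;; => "Person"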

What is Next?

This is the first of a series of tagging web service endpoints that will be released. Our intent is to release UMBEL tagging services with different levels of sophistication. Depending on how they want to use UMBEL, users will have access to different tagging services that they can use and supplement with their own techniques to get their desired results.

The next taggers (not in order) that are planned to be released are:

  • Plain tagger – no weighting or classification except by occurrence count
    • Entity plain tagger (using the Wikidata dictionary)
    • Scones plain tagger – concept + entity
  • Noun tagger – with POS, only tags the nouns; generally, the preferred, simplest baseline tagger
    • Concept noun tagger
    • Entity noun tagger
    • Scones noun tagger
  • N-gram tagger – a phrase-based tagger
    • Concept n-gram tagger
    • Entity n-gram tagger
    • Scones n-gram tagger
  • Complete tagger – combinations of above with different machine learning techniques
    • Concept complete tagger
    • Entity complete tagger
    • Scones complete tagger.

So, we welcome you to try out the system online and we welcome your comments and suggestions.

Posted at 14:44

July 13

John Goodwin: Benford’s Law and the Administrative Geography of Great Britain

Just listened to the latest episode of the

Posted at 18:10

July 11

Norm Walsh: Back to Ubuntu

The best QA wins.

Posted at 12:52

July 10

W3C Data Activity: CSV on the Web: Metadata Vocabulary for Tabular Data and other updates

The CSV on the Web Working Group has published a First Public Working Draft of a Metadata Vocabulary for Tabular Data. This is accompanied by an update to the Model for Tabular Data and Metadata on the Web document, alongside …

Posted at 07:45

July 07

Frederick Giasson: Validating RDF Data by Evaluating RDF/Clojure Code

I recently started to investigate different ways to serialize RDF triples using Clojure code 1 2 3. I had at least two goals in mind: first, ending up with an RDF serialization format that is valid Clojure code and that could easily be manipulated using core Clojure functions. The second goal was to be able to “execute” the code to validate the data according to the semantics of the ontologies used to define the data.

This blog post focuses on showing how the second goal can be implemented.

Before doing so, let’s take some time to explore what the sayings ‘Code as Data’ and ‘Data as Code’ may mean in this context.

Code as Data, Data as Code

What is Code as Data? It means that the program code you write is also data that can be manipulated by a program. In other words, the code you are writing can be used as input [to a macro], which can then be transformed and then evaluated. The code is considered to be data to be manipulated by a macro system to output executable code. The code itself becomes data that can be manipulated with some internal mechanism in the language. But the result of these manipulations is still executable code.
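
A tiny, generic Clojure illustration (not taken from the serialization itself): a quoted expression is just a list that can be inspected and rewritten, and the rewritten list is still runnable code.

;; code quoted as data: expr is an ordinary list
(def expr '(+ 1 2))

(first expr)                  ;; => + (the operator, as a symbol)

;; manipulate the list like any other data, then evaluate the result
(eval (cons '* (rest expr)))  ;; => 2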

What is Data as Code? It means that you can use a programming language’s code to embed (serialize) data. It means that you can specify your own sublanguage (DSL), translate it into code (using macros) and execute the resulting code.

The initial goal of an RDF/Clojure serialization is to specify a way to write RDF triples (data) as Clojure (code). That code is data that can be manipulated by macros to produce executable code. The evaluation of the resulting code is the validation of the data structures (the graph defined by the triples) according to the semantics defined in the ontologies. This means that validating the graph may also occur by evaluating the resulting code (and running the functions).

Ontology Creation

In my previous blog posts about serializing RDF data as Clojure code, I noted that the properties, classes and datatypes that I was referring to in those blog posts were to be defined elsewhere in the Clojure application and that I would cover it in another blog post. Here it is.

All of the ontology properties, classes and datatypes that we are using to serialize the RDF data are defined as Clojure code. They can be defined in a library, directly in your application’s code or even as data that gets emitted by a web service endpoint that you evaluate at runtime (for data that has not yet been evaluated).

In the tests I am doing, I define RDF properties as Clojure functions; the RDF classes and datatypes are normal records that comply with the same RDF serialization rules as defined for the instance records.

Some users may wonder: why is everything defined as a map but not the properties? Though each property’s RDF description is available as a map, we use it as Clojure meta-data for that function. We consider that properties are functions and not a map. As you will see below, these functions are used to validate the RDF data serialized in Clojure code. That is the reason why they are represented as Clojure functions and not as maps like everything else.

Someone could easily leverage the RDF/Clojure serialization without worrying about the ontologies. He could get the triples that describe the records without worrying about the semantics of the data as represented by the ontologies. However, if that same person would like to reason over the data that is presented to him — if he wants to make sure the data is valid and coherent — then he will require the ontology descriptions.

Now let’s see how these ontologies are being generated.

Creating OWL Classes

As I said above, an OWL class is nothing but another record. It is described using the same rules as previously defined. However, it is described using the OWL language and refers to a specific semantics. Creating such a class is really easy. We just have to follow the semantics of the OWL language and the rules of RDF/Clojure serialization. Take this example, which creates a simple FOAF Person class:

(def foaf:+person
  "The class of all the persons."
  {#'uri "http://xmlns.com/foaf/0.1/Person"
   #'rdf:type #'owl:+class
   #'rdfs:label "Person"
   #'rdfs:comment "The class of all the persons."})

As you can see, we are describing the class the same way we were defining normal instance records. However, we are doing it using the OWL language.

Creating OWL Datatypes

Datatypes are also serialized like normal RDF/Clojure records; that is, just like classes. However, since the datatypes are fairly static in the way we define them, I created a simple macro called gen-datatype that can be used to generate datatypes:

(defmacro gen-datatype
  "Create a new datatype that represents an OWL datatype class.
   [name] is the name of the datatype to create.
   Optional parameters are:
     [:uri] this is the URI of the datatype to create
     [:base] this is the URI of the base XSD datatype of this new datatype
     [:pattern] this is a regex pattern used to validate that a given string
                represents a value that belongs to that datatype
     [:docstring] the docstring to use when creating this datatype"

  [name & {:keys [uri base pattern docstring]}]
  `(def ~name
     ~(str docstring)
     (merge {#'rdf:type "http://www.w3.org/TR/rdf-schema#Datatype"}
            (if ~uri {#'rdf.core/uri ~uri})
            (if ~pattern {#'xsp:pattern ~pattern})
            (if ~base {#'xsp:base ~base}))))

You can use this macro like this:

(gen-datatype *full-us-phone-number
              :uri "http://purl.org/ontology/foo#phone-number"
              :pattern "^[0-9]{1}-[0-9]{3}-[0-9]{3}-[0-9]{4}$"
              :base "http://www.w3.org/2001/XMLSchema#string"
              :docstring "Datatype representing a full US phone number")

And it will generate a datatype like this:

{#'ontologies.core/xsp:base "http://www.w3.org/2001/XMLSchema#string"
 #'ontologies.core/xsp:pattern "^[0-9]{1}-[0-9]{3}-[0-9]{3}-[0-9]{4}$"
 #'rdf.core/uri "http://purl.org/ontology/foo#phone-number"
 #'ontologies.core/rdf:type "http://www.w3.org/TR/rdf-schema#Datatype"}

What this datatype defines is a class of literals that represents the full version of a US phone number. I will explain below how such a datatype is used to validate RDF data records.
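
As a quick illustrative check (not part of the web service code), a literal can be tested against the datatype’s pattern directly:

;; valid and invalid members of the *full-us-phone-number datatype
(def us-phone-pattern "^[0-9]{1}-[0-9]{3}-[0-9]{3}-[0-9]{4}$")

(re-matches (re-pattern us-phone-pattern) "1-421-353-9057")     ;; => "1-421-353-9057" (valid)
(re-matches (re-pattern us-phone-pattern) "(1)-(412)-342-3246") ;; => nil (not in the datatype)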

Creating OWL Properties

Properties are different from classes and datatypes. They are represented as functions in the RDF/Clojure serialization. I created another simple macro called gen-property to generate these OWL properties:

(defmacro gen-property
  "Create a new property that represents an OWL property.
     [name] is the name of the property/function to create. This is the name that will be
            used in your Clojure code.
     [:uri] this is the URI of the property to create
     [:description] this is the description of the property to create
     [:domain] this is the domain of the URI to create. The domain is represented by one or multiple
               classes that represent that domain. If there is more than one class that represent the domain
               you can specify the ^intersection-of or the ^union-of meta-data to specify if the classes
               should be interpreted as a union or an intersection of the set of classes.
     [:range] this is the range of the URI to create. The range is represented by one or multiple
               classes that represent that range. If there is more than one class that represent the range
               you can specify the ^intersection-of or the ^union-of meta-data to specify if the classes
               should be interpreted as a union or an intersection of the set of classes.
     [:sub-property-of] one or multiple properties that are super-properties of this property
     [:equivalent-property] one or multiple properties that are equivalent to this property
     [:is-object-property] true if the property being created is an object property
     [:is-datatype-property] true if the property being created is a datatype property
     [:is-annotation-property] true if the property being created is an annotation property
     [:cardinality] cardinality of the property"

  [name & {:keys [uri
                  label
                  description
                  domain
                  range
                  sub-property-of
                  equivalent-property
                  is-object-property
                  is-datatype-property
                  is-annotation-property
                  cardinality]}]
  (let [vals (gensym "label-")
        docstring (if description
                    (str description ".\n [" vals "] is the preferred label to specify.")
                    (str ""))
        type (if is-object-property
               #'owl:+object-property
               (if is-annotation-property
                 #'owl:+annotation-property
                 #'owl:+datatype-property))
        metadata (merge (if uri {#'rdf.core/uri uri})
                        (if type {#'rdf:type type})
                        (if label {#'iron:pref-label label})
                        (if description {#'iron:description description})
                        (if range {#'rdfs:range range})
                        (if domain {#'rdfs:domain domain})
                        (if cardinality {#'owl:cardinality cardinality}))]
     `(defn ~(with-meta name metadata)
        ~(str docstring)
        [~vals]
        (rdf.property/validate-property #'~name ~vals))))

Note that this macro currently only accommodates a subset of the OWL language. For example, there is no way to use the macro to specify minimum or maximum cardinality, etc. I only created what was required for writing this blog post.

You can then use this macro to create new properties like this:

(gen-property foo:phone
              :is-datatype-property true
              :label "phone number"
              :uri "http://purl.org/ontology/foo#phone"
              :range *full-us-phone-number
              :domain #'owl:+thing
              :cardinality 1)

(gen-property foo:knows
              :is-object-property true
              :label "a person that knows another person"
              :uri "http://purl.org/ontology/foo#knows"
              :range #'umbel.ref/umbel-rc:+person
              :domain #'umbel.ref/umbel-rc:+person)

Some other Classes, Datatypes and Properties

So, here is the list of classes, datatypes and properties that will be used later in this blog post for demonstrating how validation occurs in such a framework:

(in-ns 'rdf.core)
(defn uri
  [s]
  (try
    (URI. #^String s)
    (catch Exception e
      (throw (IllegalStateException. (str "Invalid URI: \"" s "\""))))))

(defn datatype
  [s]
  (if (var? s)
    (if (not= (get @s #'ontologies.core/rdf:type) "http://www.w3.org/TR/rdf-schema#Datatype")
      (throw (IllegalStateException. (str "Provided value for datatype is not a datatype: \"" s "\""))))
    (throw (IllegalStateException. (str "Provided value for datatype is not a datatype: \"" s "\"")))))

(in-ns 'ontologies.core)

(gen-property iron:pref-label
              :uri "http://purl.org/ontology/iron#prefLabel"
              :label "Preferred label"
              :description "Preferred label for describing a resource"
              :domain #'owl:+thing
              :range #'rdfs:*literal
              :is-datatype-property true)

(def owl:+thing
  "The class of OWL individuals."
  {#'uri "http://www.w3.org/2002/07/owl#Thing"
   #'rdf:type #'rdfs:+class
   #'rdfs:label "Thing"
   #'rdfs:comment "The class of OWL individuals."})

(gen-datatype xsd:*string
              :uri "http://www.w3.org/2001/XMLSchema#string"
              :docstring "Datatypes that represents all the XSD strings")

Concluding with Ontologies

Ontologies are easy to write in RDF/Clojure. There is a simple set of macros that can be used to help create the ontology classes, properties and datatypes. However, in the future I anticipate creating a library that would use the OWL API to take any OWL ontology and serialize it using these rules. The output could be Clojure code like this, or JAR libraries. Additionally, some investigation will be done into more idiomatic Clojure projects like Phil Lord’s Tawny-OWL project.

RDF Data Instantiation Using Clojure Code

Now that we have the classes, datatypes and properties defined in our Clojure application, we can start defining data records like this:

(def valid-record (r {uri "http://foo-bar.com/test/"
                      rdf:type owl:+thing
                      foo:phone ["1-421-353-9057"]
                      iron:pref-label {value "Test cardinality validation"
                                       lang "en"
                                       datatype xsd:*string}}))

Data Validation

Now that we have all of the ontologies defined in our Clojure application, we can start to define records. Let’s start with a record called valid-record that describes something with a phone number and a preferred label. The data is there and available to you. Now, what if I would like to do a bit more than this? What if I would like to validate it?

Validating such a record is as easy as evaluating it. What does that mean? It means that each value of the map that describes the record will be evaluated by Clojure. Since each key refers to a function, evaluating each value means calling that function with the value specified in the description of the record. Then we iterate over the whole map to validate all of the triples.

To perform this kind of process, we can create a validate-resource function that looks like:

(defn validate-resource [resource]
  (doseq [[property value] resource]
    (do
      (println (str "validating resource property: " property))
      (if (fn? @property)
        (@property value)))))

You can use it like this:

(validate-resource valid-record)
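
As an aside, if you prefer a boolean answer over an exception, a thin wrapper (a sketch, not part of the original code) can sit on top of it:

(defn valid-resource?
  "Returns true if the record passes validation, false otherwise (sketch only)."
  [resource]
  (try
    (validate-resource resource)
    true
    (catch IllegalStateException e
      false)))

(valid-resource? valid-record)  ;; => true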

If no exceptions are thrown, then the record is considered valid according to the ontology specifications. Easy, no? Now let’s take a look at how this works.

If you check the gen-property macro, you will notice that every time a function is evaluated, the #'rdf.property/validate-property function is called. What this function does is to perform the validation of the property given the specified value(s). The validation is done according to the description of the property in the ontology specification. Such a validate-property looks like:

(defn validate-property
  "Validate that the values of the property are valid according to the description of that property
   [property] should be the reference to the function, like #'foo-phone
   [values] are the actual values of that property"

  [property values]
  (do
    (validate-owl-cardinality property values)
    (validate-rdfs-range property values)))

So what it does is to run a series of other functions to validate different characteristics of a property. For this blog post, we demonstrate how the following characteristics are being validated:

  1. Cardinality of a property
  2. URI validation
  3. Datatype validation
  4. Range validation when the range is a class.

Cardinality Validation

Validating the cardinality of a property means that we check if the number of values of a given property is as specified in the ontology. In this example, we validate the exact cardinality of a property. It could be extended to validate the maximum and minimum cardinalities as well.

The function that validates the cardinality is the validate-owl-cardinality function that is defined as:

(defn validate-owl-cardinality
  [property values]
  (doseq [[meta-key meta-val] (seq (meta property))]
    ; Only validate if there is a owl/cardinality property defined in the metadata
    (if (= meta-key #'ontologies.core/owl:cardinality)
      ; If the value is a string, a var or a map, we check if the cardinality is 1
      (if (or (string? values) (map? values) (var? values))
        (if (not= meta-val 1)
          (throw (IllegalStateException.
                  (format "CARDINALITY VALIDATION ERROR: property %s has 1 values and was expecting %d values" property meta-val))))
        ; If the value is an array, we validate the expected cardinality
        (if (not= (count values) meta-val )
          (throw (IllegalStateException.
                  (format "CARDINALITY VALIDATION ERROR: property %s has %d values and was expecting %d values" property (count values) meta-val))))))))

For each property, it checks to see if the owl:cardinality property is defined. If it is, then it makes sure that the number of values for that property is valid according to what is defined in the ontology. If there is a mismatch, then the validation function will throw an exception and the validation process will stop.

Here is an example of a record that has a cardinality validation error as defined by the property (see the description of the property below):

(def card-validation-test (r {uri "http://foo-bar.com/test/"
                              rdf:type owl:+thing
                              foo:phone ["1-421-353-9057" "(1)-(412)-342-3246"]
                              iron:pref-label {value "Test cardinality validation"
                                               lang "en"
                                               datatype xsd:*string}}))
user> (validate-resource card-validation-test)
IllegalStateException CARDINALITY VALIDATION ERROR: property #'dataset-test.core/foo:phone has 2 values and was expecting 1 values  rdf.property/validate-owl-cardinality (property.clj:36)

URI Validation

Everything you define in RDF/Clojure has a URI. However, not every string is a valid URI. All of the URIs you may define can be validated as well. When you define a URI, you use the #'rdf.core/uri function to specify the URI. That function is defined as:

(defn uri
  [s]
  (try
    (URI. #^String s)
    (catch Exception e
      (throw (IllegalStateException. (str "Invalid URI: \"" s "\""))))))

As you can see, we are using the java.net.URI constructor to validate the URI you are defining for your records/classes/properties/datatypes. If you make a mistake when writing a URI, then a validation error will be thrown and the validation process will stop.

Here is an example of a record that has an invalid URI:

(def uri-validation-test (r {uri "-http://foo-bar.com/test/"
                             rdf:type owl:+thing
                             foo:phone "1-421-353-9057"
                             iron:pref-label {value "Test URI validation"
                                              lang "en"
                                              datatype xsd:*string}}))
user> (validate-resource uri-validation-test)
IllegalStateException Invalid URI: "-http://foo-bar.com/test/"  rdf.core/uri (core.clj:16)

Datatype Validation

In OWL, a datatype property is used to refer to literal values that belong to classes of literals (datatype classes). A datatype class represents all the literals that belong to that class of literal values, as defined by the datatype. For example, the *full-us-phone-number datatype we described above defines the class of all the literals that are full US phone numbers.

Validating the value of a property according to its datatype means that we make sure that the literal value(s) belong to that datatype. Most of the time, people will use the XSD datatypes. If custom datatypes are created, then they will be based on one of the XSD datatypes, and a regex pattern will be defined to specify how the literal should be constructed.

(defn validate-rdfs-range
  [property values]
  (do
    ; If the value is a map, then validate the "value", "lang" and "datatype" assertions
    (if (map? values)
      (validate-map-properties values))
    (doseq [[meta-key ranges] (seq (meta property))]
      ; make sure a range is defined for this property
      (if (= meta-key #'ontologies.core/rdfs:range)
        (let [ranges (if (vector? ranges)
                       ranges
                       ^:intersection-of [ranges])]
          (if (true? (:intersection-of (meta ranges)))
            ; consider that all the values of the range is a intersection-of
              (doseq [range ranges]
              (if (is-datatype-property? property)
                ; we are checking the range of a datatype property
                ; @TODO here we have to change that portion to call a function that will do the validation
                ;       according to the existing XSD types, or any custom datatype based on these core
                ;       XSD datatypes. Just like the DVT (Dataset Validation Tool)
                ;
                ;       For now, we simply test using a datatype that has a pattern defined.
                (let [pattern (get range #'ontologies.core/xsp:pattern)]
                  (if pattern
                    ; a validation pattern has been defined for this value
                    (if (vector? values)
                      ; Validate all the values of the property according to this Datatype
                      (doseq [v values]
                        (validate-range-pattern v pattern ranges))
                      ; Validate the value according to the datatype
                      (validate-range-pattern values pattern ranges))))
                ; we are checking the range of an object property
                (if (vector? values)
                  (doseq [v values]
                    (validate-range-object v range property))
                  (validate-range-object values range property))))
            ; consider that all the values of the range is an union-of
            (println "@TODO Ranges union validation")))))))

(defn- validate-range-pattern
  [v pattern range]
  (if (string? v)
    (if (nil? (re-seq (java.util.regex.Pattern/compile pattern) v))
      (throw (IllegalStateException.
              (format "Value \"%s\" invalid according to the definition of the datatype \"%s\""  v range))))
    (if (and (map? v) (nil? (validate-map-properties v)))
      (if (nil? (re-seq (java.util.regex.Pattern/compile pattern) (get v 'value)))
        (throw (IllegalStateException.
                (format "Value \"%s\" invalid according to the definition of the datatype \"%s\""  v range)))))))

(defn- validate-map-properties
  [m]
  (doseq [[p v] m]
        (if (fn? @p)
          (@p v))))

What this function does is to validate the range of a property. It checks what kind of values exist for the input property according to the RDF/Clojure specification (is it a string, a map, an array, a var, etc.?). Then it checks if the property is an object property or a datatype property. If it is a datatype property, then it checks if a range has been defined for it. If one has, then it validates the value(s) according to the datatype defined in the range of the property.

Here is an example of a few records that have different datatype validation errors:

(def datatype-validation-test (r {uri "http://foo-bar.com/test/"
                                  rdf:type owl:+thing
                                  foo:phone "1-421-353-90573"
                                  iron:pref-label {value "Test cardinality validation"
                                                   lang "en"
                                                   datatype xsd:*string}}))
(def datatype-validation-test-2 (r {uri "http://foo-bar.com/test/"
                                  rdf:type owl:+thing
                                  foo:phone "1-421-353-9057"
                                  iron:pref-label {value "Test datatype validation"
                                                   lang "en"
                                                   datatype "not-a-datatype"}}))

(def xsd:*string-not-a-datatype)

(def datatype-validation-test-3 (r {uri "http://foo-bar.com/test/"
                                    rdf:type owl:+thing
                                    foo:phone "1-421-353-9057"
                                    iron:pref-label {value "Test datatype validation"
                                                     lang "en"
                                                     datatype xsd:*string-not-a-datatype}}))

(def datatype-validation-test-4 (r {uri "http://foo-bar.com/test/"
                                    rdf:type owl:+thing
                                    foo:phone [{value "1-421-353-9057"
                                                datatype xsd:*string-not-a-datatype}]
                                    iron:pref-label {value "Test datatype validation"
                                                     lang "en"
                                                     datatype xsd:*string}}))
user> (validate-resource datatype-validation-test)
IllegalStateException Value "1-421-353-90573" invalid according to the definition of the datatype "[{#'ontologies.core/xsp:pattern "^[0-9]{1}-[0-9]{3}-[0-9]{3}-[0-9]{4}$", #'rdf.core/uri "http://purl.org/ontology/foo#phone-number", #'ontologies.core/rdf:type "http://www.w3.org/TR/rdf-schema#Datatype"}]"  rdf.property/validate-range-pattern (property.clj:150)

user> (validate-resource datatype-validation-test-2)
IllegalStateException Provided value for datatype is not a datatype: "not-a-datatype"  rdf.core/datatype (core.clj:31)

user> (validate-resource datatype-validation-test-3)
IllegalStateException Provided value for datatype is not a datatype: "#'dataset-test.core/xsd:*string-not-a-datatype"  rdf.core/datatype (core.clj:30)

user> (validate-resource datatype-validation-test-4)
IllegalStateException Provided value for datatype is not a datatype: "#'dataset-test.core/xsd:*string-not-a-datatype"  rdf.core/datatype (core.clj:30)

As you can see, the validate-rdfs-range function is still incomplete when it comes to datatype validation. I am still updating it to make sure that all the existing XSD datatypes get validated. We then have to validate the custom datatypes more thoroughly, making sure that we take their xsp:base type into account, etc. The code to be created is similar to what I wrote for the Data Validation Tool (which is written in PHP).
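
To give an idea of the direction this could take, here is a hypothetical sketch of a lookup table mapping a few XSD datatype URIs to Clojure validation predicates. The names and the coverage below are assumptions for illustration, not the actual implementation:

; Hypothetical sketch: map a few XSD datatype URIs to validation predicates.
(def xsd-validators
  {"http://www.w3.org/2001/XMLSchema#string"  string?
   "http://www.w3.org/2001/XMLSchema#boolean" #(contains? #{"true" "false" "1" "0"} %)
   "http://www.w3.org/2001/XMLSchema#integer" #(re-matches #"[+-]?\d+" %)})

; throws if the string value v does not conform to the given XSD datatype URI
(defn- validate-xsd-datatype
  [v datatype-uri]
  (when-let [valid? (get xsd-validators datatype-uri)]
    (when-not (valid? v)
      (throw (IllegalStateException.
              (format "Value \"%s\" invalid according to the definition of the datatype \"%s\"" v datatype-uri))))))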

Range validation when the range is a class

Finally, let’s see how the range of an object property can be validated. Validating the range of an object property means making sure that the record referenced by the object property belongs to the class specified as the range of that property.

For example, consider a property foo:knows whose range specifies that all the values of foo:knows need to belong to the class umbel-rc:+person. This means that every value defined for the foo:knows property of any record needs to refer to a record of type umbel-rc:+person. If that is not the case, then there is a validation error.
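
To make the check concrete, here is a minimal sketch of the membership test; the function name and the super-classes list below are illustrative, not part of the actual code:

; true when the referenced record's type is the range class itself, or when the
; range appears among that type's super-classes
(defn- belongs-to-range?
  [record-type-uri range-uri super-classes]
  (or (= record-type-uri range-uri)
      (boolean (some #{range-uri} super-classes))))

; a umbel-rc:+product referenced where a umbel-rc:+person is expected
(belongs-to-range? "http://umbel.org/umbel/rc/Product"
                   "http://umbel.org/umbel/rc/Person"
                   ["http://umbel.org/umbel/rc/Artifact"])
; => false, so a validation error has to be raised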

Here is an example of a record where the foo:knows property is not properly used:

(def wrench (r {uri "http://foo-bar.com/test/wrench"
                rdf:type umbel.ref/umbel-rc:+product
                iron:pref-label "The biggest wrench ever"}))

(def object-range-validation-test (r {uri "http://foo-bar.com/test/bob"
                                      rdf:type umbel.ref/umbel-rc:+person
                                      foo:knows wrench
                                      iron:pref-label {value "Test object range validation"
                                                       lang "en"
                                                       datatype xsd:*string}}))

Remember that we defined the foo:knows property with a range of umbel-rc:+person. In this example, however, the reference is to the wrench record, which is of type umbel-rc:+product. Thus, we get a validation error:

user> (validate-resource object-range-validation-test)
IllegalStateException The resource "http://umbel.org/umbel/rc/Product" referenced by the property "#'dataset-test.core/foo:knows" does not belong to the class "#'umbel.ref/umbel-rc:+person" as defined by the range of the property  rdf.property/validate-range-object (property.clj:142)

The function that validates the ranges of the object properties is defined as:

(defn- validate-range-object
  [r range property]
  ; resolve the referenced record: it can be a var, an inline map or a URI string
  (let [r (if (var? r)
            (deref r)
            (if (map? r)
              r
              (if (string? r)
                ; @TODO get the resource's description from a dataset index
                {})))
        ; URI of the type of the referenced record
        uri (get (deref (get r #'ontologies.core/rdf:type)) #'rdf.core/uri)
        uri-ending (if (> (.lastIndexOf uri "/") -1)
                     (subs uri (inc (.lastIndexOf uri "/")))
                     "")
        ; ask the UMBEL super-classes web service for the ancestors of that type
        super-classes (try
                        (read-string (:body (clj-http.client/get (str "http://umbel.org/ws/super-classes/" uri-ending)
                                                                 {:headers {"Accept" "application/clojure"}
                                                                  :throw-exceptions false})))
                        (catch Exception e
                          nil))
        range-uri (get @range #'rdf.core/uri)]
    (if-not (some #{range-uri} super-classes)
      (throw (IllegalStateException. (str "The resource \"" uri "\" referenced by the property \"" property "\" does not belong to the class \"" range "\" as defined by the range of the property"))))))

Normally, this kind of validation should be done using the descriptions of the loaded ontologies. However, for the purpose of this blog post, I used a different way to perform it. I purposefully used some UMBEL Reference Concepts as the types of the records I described. The object range validation function then leverages the UMBEL super-classes web service endpoint to get the super-classes of a given class.

So what this function does is check the type of the record(s) referenced by the foo:knows property. What needs to be validated is whether the type of each referenced record is the same as, or a sub-class of, the class defined in the range of the foo:knows property.

In our example, the range is #'umbel-rc:+person. This means that the foo:knows property can only refer to umbel-rc:+person records. In the example where we have a validation error, the type of the wrench record is umbel-rc:+product. The validation function gets the list of all the super-classes of the umbel-rc:+product class and checks whether it is a sub-class of the umbel-rc:+person class. In this case it is not, so an error is thrown.

What is interesting with this example is that the UMBEL super-classes web service endpoint returns the list of super-classes as Clojure code. We then use the read-string function to read that list back into a data structure and manipulate it as if it were part of the application’s code.
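
For illustration, assuming the response body is a Clojure vector of class URIs (the exact payload shape is an assumption here), the round trip looks like this:

; illustrative only: a response body containing Clojure data
(def body "[\"http://umbel.org/umbel/rc/Artifact\"]")

(some #{"http://umbel.org/umbel/rc/Person"} (read-string body))
; => nil, which is why the wrench record fails the range check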

Conclusion

What is elegant with this kind of RDF/Clojure serialization is that validating the RDF data is the same as evaluating the underlying code (Data as Code). If the data is invalid, exceptions are thrown and the validation process aborts.

One thing that I have yet to investigate with such an RDF/Clojure serialization is how the semantics of the properties, classes and datatypes could be embedded into the RDF/Clojure records, so that we end up with stateful RDF records that embed their own semantics at a specific point in time. This would mean that even if an ontology changes in the future, the records would still be valid according to the original ontology that was used to describe them at a specific point in time (when they were written, when they were emitted by a web service endpoint, etc.).

Also, as some of my readers pointed out about my previous blog post on this subject, the fact that I use vars to serialize the RDF triples means that the serialization won’t produce valid ClojureScript code, since vars don’t exist in ClojureScript. Paul Gearon proposed using keywords as the keys instead of vars and then, to get the same effect as with the vars, using a lookup index to call the functions. This avenue will be investigated as well and should be the topic of a future blog post about this RDF/Clojure serialization.
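
To give an idea of what that could look like, here is a hypothetical sketch (none of these names exist in the current code) where the keys are namespaced keywords and a lookup index maps each keyword to the validation function that the corresponding var points to today:

; hypothetical sketch only: keyword keys are valid in ClojureScript, and the
; index plays the role that var dereferencing plays in the current code
(def property-index
  {:iron/pref-label iron:pref-label
   :foo/knows       foo:knows})

(def bob {:uri             "http://foo-bar.com/test/bob"
          :iron/pref-label "Bob"})

; validation resolves each key through the index instead of deref-ing a var
(doseq [[k v] (dissoc bob :uri)]
  (when-let [validate (get property-index k)]
    (validate v)))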

Posted at 18:27

Copyright of the postings is owned by the original blog authors. Contact us.