Planet RDF

It's triples all the way down

March 11

Sebastian Trueg: Protecting And Sharing Linked Data With Virtuoso

Disclaimer: Many of the features presented here are rather new and cannot be found in the open-source version of Virtuoso.

Last time we saw how to share files and folders stored in the Virtuoso DAV system. Today we will protect and share data stored in Virtuoso’s Triple Store – we will share RDF data.

Virtuoso is actually a quad store, which means each triple lives in a named graph. In Virtuoso, named graphs can be public or private (in reality it is a bit more complex than that, but this view is sufficient for our purposes). Public graphs are readable and writable by anyone who has permission to read or write in general; private graphs are readable and writable only by administrators and by those who have been granted named-graph permissions. The latter case is what interests us today.

We will start by inserting some triples into a named graph as dba – the master of the Virtuoso universe:

Virtuoso Sparql Endpoint

Sparql Result
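The insertion behind those screenshots amounts to a plain SPARQL update. A minimal sketch — the graph name comes from the article, but the triple itself is a made-up placeholder:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

INSERT DATA {
  GRAPH <urn:trueg:demo> {
    # hypothetical demo triple
    <urn:trueg:demo:thing1> rdfs:label "Some private demo data" .
  }
}
```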

This graph is now public and can be queried by anyone. Since we want to make it private, we switch to a SQL session, as this step is typically performed by an application rather than manually:

$ isql-v localhost:1112 dba dba
Connected to OpenLink Virtuoso
Driver: 07.10.3211 OpenLink Virtuoso ODBC Driver
OpenLink Interactive SQL (Virtuoso), version 0.9849b.
Type HELP; for help and EXIT; to exit.
SQL> DB.DBA.RDF_GRAPH_GROUP_INS ('', 'urn:trueg:demo');

Done. -- 2 msec.

Now our new named graph urn:trueg:demo is private and its contents cannot be seen by anyone. We can easily test this by logging out and trying to query the graph:

Sparql Query
Sparql Query Result
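The check in the screenshots amounts to querying the graph as an unauthenticated user; a sketch (graph name from the article) which should return no rows while the graph is private:

```sparql
SELECT ?s ?p ?o
FROM <urn:trueg:demo>
WHERE { ?s ?p ?o }
```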

But now we want to share the contents of this named graph with someone. As before, we will use my LinkedIn account. This time, however, we will not use a UI but Virtuoso’s RESTful ACL API to create the necessary rules for sharing the named graph. The API uses Turtle as its main input format. Thus, we describe the ACL rule for sharing the contents of the named graph as follows.

@prefix acl: <> .
@prefix oplacl: <> .
<#rule> a acl:Authorization ;
  rdfs:label "Share Demo Graph with trueg's LinkedIn account" ;
  acl:agent <> ;
  acl:accessTo <urn:trueg:demo> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:PrivateGraphs .

Virtuoso makes use of the ACL ontology proposed by the W3C and extends it with several custom classes and properties in the OpenLink ACL Ontology. Most of this little Turtle snippet should be obvious: we create an Authorization resource which grants Read access to urn:trueg:demo for the agent in question. The only tricky part is the scope. Virtuoso has the concept of ACL scopes, which group rules by their resource type. In this case the scope is private graphs; another typical scope would be DAV resources.

Given a file rule.ttl containing the above resource, we can post the rule via the RESTful ACL API:

$ curl -X POST --data-binary @rule.ttl -H "Content-Type: text/turtle" -u dba:dba http://localhost:8890/acl/rules

As a result we get back the full rule resource, including additional properties added by the API.

Finally we will login using my LinkedIn identity and are granted read access to the graph:

SPARQL Endpoint Login

We see all the original triples in the private graph. And, as before with DAV resources, no local account is necessary to get access to named graphs. Of course we can also grant write access, use groups, and so on. But those are topics for another day.

Technical Footnote

Using ACLs with named graphs as described in this article requires some basic configuration. The ACL system is disabled by default. In order to enable it for the default application realm (another topic for another day), the following SPARQL statement needs to be executed as administrator:

prefix oplacl: <>
with <urn:virtuoso:val:config>
delete {
  oplacl:DefaultRealm oplacl:hasDisabledAclScope oplacl:Query , oplacl:PrivateGraphs .
}
insert {
  oplacl:DefaultRealm oplacl:hasEnabledAclScope oplacl:Query , oplacl:PrivateGraphs .
}

This will enable ACLs for named graphs and SPARQL in general. Finally, the LinkedIn account from the example requires generic SPARQL read permissions. The simplest approach is to allow anyone to read via SPARQL:

@prefix acl: <> .
@prefix oplacl: <> .
<#rule> a acl:Authorization ;
  rdfs:label "Allow Anyone to SPARQL Read" ;
  acl:agentClass foaf:Agent ;
  acl:accessTo <urn:virtuoso:access:sparql> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:Query .

I will explain these technical concepts in more detail in another article.

Posted at 14:21

Sebastian Trueg: Sharing Files With Whomever Is Simple

Dropbox, Google Drive, OneDrive – they all allow you to share files with others. But they all do it via the strange concept of public links. Anyone who has the link has access to the file. At first glance this might seem easy enough, but what if you want to revoke read access for just one of those people? What if you want to share a set of files with a whole group?

I will not answer these questions per se. I will show an alternative based on OpenLink Virtuoso.

Virtuoso has its own WebDAV file storage system built in. Thus, any instance of Virtuoso can store files and serve them via the WebDAV API (and an LDP API, for those interested) and an HTML UI. See below for a basic example:

Virtuoso DAV Browser

This is just your typical file browser listing – nothing fancy. The fancy part lives under the hood in what we call VAL – the Virtuoso Authentication and Authorization Layer.

We can edit the permissions of one file or folder and share it with anyone we like. And this is where it gets interesting: instead of sharing with an email address or a user account on the Virtuoso instance we can share with people using their identifiers from any of the supported services. This includes Facebook, Twitter, LinkedIn, WordPress, Yahoo, Mozilla Persona, and the list goes on.

For this small demo I will share a file with my LinkedIn identity. (Virtuoso/VAL identifies people via URIs and thus has URI schemes for all supported services; for a complete list see the Service ID Examples in the ODS API documentation.)

Virtuoso Share File

Now when I logout and try to access the file in question I am presented with the authentication dialog from VAL:

VAL Authentication Dialog

This dialog allows me to authenticate using any of the supported authentication methods. In this case I will choose to authenticate via LinkedIn which will result in an OAuth handshake followed by the granted read access to the file:

LinkedIn OAuth Handshake


Access to file granted

It is that simple. Of course these identifiers can also be used in groups, allowing you to share files and folders with a set of people instead of just one individual.

Next up: Sharing Named Graphs via VAL.

Posted at 14:21

Sebastian Trueg: Digitally Sign Emails With Your X.509 Certificate in Evolution

Digitally signing emails is always a good idea. People can verify that you actually sent the mail, and they can encrypt emails in return. A while ago Kingsley showed how to sign emails in Thunderbird. I will now follow up with a short post on how to do the same in Evolution.

The process begins with actually getting an X.509 certificate including an embedded WebID. There are a few services out there that can help with this, most notably OpenLink’s own YouID and ODS. The former allows you to create a new certificate based on existing social service accounts. The latter requires you to create an ODS account and then create a new certificate via Profile edit -> Security -> Certificate Generator. In any case make sure to use the same email address for the certificate that you will be using for email sending.

The certificate will actually be created by the web browser, making sure that the private key is safe.

If you are a Google Chrome user you can skip the next step since Evolution shares its key storage with Chrome (and several other applications). If you are a user of Firefox you need to perform one extra step: go to the Firefox preferences, into the advanced section, click the “Certificates” button, choose the previously created certificate, and export it to a .p12 file.

Back in Evolution’s settings you can now import this file:

To actually sign emails with your shiny new certificate, stay in the Evolution settings, choose to edit the Mail Account in question, select the certificate in the Secure MIME (S/MIME) section, and check “Digitally sign outgoing messages (by default)”:

The nice thing about Evolution here is that in contrast to Thunderbird there is no need to manually import the root certificate which was used to sign your certificate (in our case the one from OpenLink). Evolution will simply ask you to trust that certificate the first time you try to send a signed email:

That’s it. Email signing in Evolution is easy.

Posted at 14:21

Davide Palmisano: SameAs4J: little drops of water make the mighty ocean

A few days ago Milan Stankovich contacted the Sindice crew, informing us that he had written a simple Java library to interact with the public Sindice HTTP APIs. We always appreciate this kind of community effort to collaboratively make Sindice a better place on the Web. Agreeing with Milan, we decided to put some effort into his initial work to make the library the official open-source tool for Java programmers.
That reminded me that, a few months ago, I did the same thing Milan did for us. But (ashamed) I never informed those guys about what I did. It is a great and extremely useful tool on the Web that makes it concretely possible to interlink different Linked Data clouds. Simple to use (both for humans via HTML and for machines via a simple HTTP/JSON API) and extremely responsive, it lets you get all the owl:sameAs objects for a given URI. And, moreover, it’s based on
Do you want to know the identifier of in Freebase or Yago? Just ask it to

So, after some months I refined a couple of things, added some javadocs, set up a Maven repository and made SameAs4j publicly available (MIT licensed) to everyone on Google Code.
It’s a simple but reliable tiny set of Java classes that allows you to interact with the service programmatically in your Java Semantic Web applications.
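To give a flavor of the kind of client code involved, here is a minimal Python sketch that extracts the equivalent URIs from a sameAs-style JSON payload. The field names and URIs below are illustrative assumptions for the example, not the library's actual API:

```python
import json

# Hypothetical JSON shape for a sameAs-style lookup service; the "duplicates"
# field name and the URIs are assumptions, not the real API contract.
sample = """
{
  "uri": "http://dbpedia.org/resource/Rome",
  "duplicates": [
    "http://rdf.freebase.com/ns/en.rome",
    "http://sws.geonames.org/3169070/"
  ]
}
"""

def equivalent_uris(payload: str) -> list:
    """Return the list of equivalent (owl:sameAs) URIs from a JSON payload."""
    doc = json.loads(payload)
    return doc.get("duplicates", [])

print(equivalent_uris(sample))
```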

Back to the beginning: every piece of open source software is like a little drop of water that makes the mighty ocean, so please submit any issue or patch if interested.

Posted at 14:10

Davide Palmisano: FBK, Any23 and my involvement in

After almost two years spent working at Asemantics, I left to join the Fondazione Bruno Kessler (FBK), a quite large research institute based in Trento.

These last two years have been amazing: I met very skilled and enthusiastic people and worked with them on a broad set of different technologies. Every day spent there was an opportunity for me to learn something new, and in the end they are now good friends more than colleagues. Asemantics is now part of the bigger Pro-netics Group.

Having moved from Rome, I decided to follow Giovanni Tummarello and Michele Mostarda to launch, from scratch, a new research unit at FBK called “Web of Data”. FBK is a well-established organization with several units working in a plethora of different research fields. Every day there is an opportunity to join workshops and other kinds of events.

Just to give you an idea of how things work here: in April 2009 David Orban gave a talk on “The Open Internet of Things”, attended by a large number of researchers and students. Aside from FBK, in Trento there is a quite active community hanging out around the Semantic Web.

“The Semantic Valley” – that’s what they call this euphoric movement around these technologies.

Back to me: the new “Web of Data” unit has joined the army, and the last-minute release of Any23 0.2 is only the first outcome of this joint effort on the Semantic Web Index between DERI and FBK.

In particular, the Any23 0.2 release has been my first task here. It’s a library, a service, an RDF distiller. It’s used on board the Sindice ingestion pipeline, it’s publicly available here, and yesterday I spent a couple of minutes writing a simple bookmarklet that appends window.location to the distiller URL.

Once in your browser, pressing it on a Web page returns a bunch of RDF triples distilled via the Any23 servlet.

So, what’s next?

The Web of Data unit has just started. More things, from the next release to other projects currently in inception, will see the light. I really hope to keep contributing to the concrete consolidation of the Semantic Web – the Web of Data, or Web 3.0, or whatever we’d like to call it.

Posted at 14:10

Davide Palmisano: Cheap Linked Data identifiers

This is a (short) technical post.

Every day I face the problem of getting Linked Data URIs that uniquely identify a “thing”, starting from an ambiguous, poor, flat keyword or description. One of the first steps in developing an application that consumes Linked Data is to provide a mechanism that links your own data sets to one (or more) of the LoD bubbles. To get a clear idea of why identifiers matter, I suggest you read this note from Dan Brickley: starting from some needs we encountered within the NoTube project, he clearly underlined the importance of LoD identifiers. Even if the problem of uniquely identifying words and terms falls into the bigger category usually known as term disambiguation, I’d like to clarify that what I’m going to explain is a narrow restriction of the whole problem.

What I really need is a simple mechanism that allows me to convert one specific type of identifiers to a set of Linked Data URIs.

For example, I need something that, given a book’s ISBN number, returns a set of URIs referring to that book. Or, given the title of a movie, I expect back some URIs (from DBpedia or LinkedMDB or wherever) identifying and describing it in a unique way.

Isn’t SPARQL enough for you to do that?

Yes, obviously the following SPARQL query may be sufficient:

but what I need is something quicker that I may invoke as an HTTP GET like:


returning a simple JSON document like:

{
  "mappings": [ ... ],
  "status": "ok"
}

But the real issue here is the code overhead needed to add other kinds of identifier resolution. Imagine, for instance, that I have already implemented this kind of service and I want to add another resolution category. I would have to hard-code another SPARQL query, modify the code to invoke it as a service, and redeploy everything.

I’m sure we could do better.

If we take a closer look at the above SPARQL query, we easily figure out that the problem can be highly generalized. In fact, performing this kind of resolution often means issuing a SPARQL query asking for URIs that have a certain value for a certain property – such as dbprop:isbn in the ISBN case.
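Generalized that way, each resolver boils down to a query template of roughly the following shape, with the property taken from the configuration and #VALUE# replaced by the identifier to resolve (the prefix used is an assumption):

```sparql
SELECT DISTINCT ?subject
WHERE { ?subject dbprop:isbn "#VALUE#" . }
```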

And this is what I did the last two days: The NoTube Identity Resolver.

A simple Web service (described in the figure below) fully customizable by simply editing an XML configuration file.

NoTube Identity Resolver architecture

The resolvers.xml file allows you to provide a simple description of the resolution policy that will be accessible with a simple HTTP GET call.

Back to the ISBN example, the following piece of XML is enough to describe the resolver:

<resolver id="2" type="normal">


  • category – the value that has to be passed as a parameter in the HTTP GET call to invoke this resolver
  • endpoint – the address of the SPARQL endpoint where the resolution is performed
  • lookup – the name of the property to match against
  • type – (optional) the rdf:type of the resources to be resolved
  • sameas – a boolean value enabling (or not) a call to the service to gather equivalent URIs
  • matching – (allowing only URI and LITERAL as values) the type of the value to be resolved
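Putting these elements together, a complete resolver entry for the ISBN case might look like the following sketch; the child-element layout, the endpoint and the property URI are assumptions based on the description above, not the actual file format:

```xml
<resolver id="2" type="normal">
  <category>isbn</category>
  <endpoint>http://dbpedia.org/sparql</endpoint>
  <lookup>http://dbpedia.org/property/isbn</lookup>
  <sameas>true</sameas>
  <matching>LITERAL</matching>
</resolver>
```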

Moreover, the NoTube Identity Resolver also gives you the possibility of specifying more complex resolution policies through a SPARQL query, as shown below:

<resolver id="3" type="custom">
  <sparql><![CDATA[SELECT DISTINCT ?subject
  WHERE { ?subject a <>.
          ?subject <> ?title.
          FILTER (regex(?title, "#VALUE#")) }]]></sparql>
</resolver>

In other words, every resolver described in the resolvers.xml file enables one kind of resolution mechanism without writing a line of Java code.

Do you want to try?

Just download the war package, get this resolvers.xml (or write your own), export the RESOLVERS_XML_LOCATION environment variable pointing to the folder where resolvers.xml is located, deploy the war on your Apache Tomcat application server, start the application, and try it out by pointing your browser to:
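The steps above can be sketched as shell commands; the war file name and paths are placeholders, not the actual artifact names:

```shell
# point the service at the folder containing resolvers.xml (hypothetical path)
export RESOLVERS_XML_LOCATION=/opt/notube/conf

# deploy the war into Tomcat and start it
cp notube-identity-resolver.war "$CATALINA_HOME/webapps/"
"$CATALINA_HOME/bin/startup.sh"
```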


That’s all folks

Posted at 14:10

Davide Palmisano: RWW 2009 Top 10 Semantic Web products: one year later…

Just a few days ago the popular ReadWriteWeb published a list of the 2009 Top Ten Semantic Web products, as they did one year ago with the 2008 Top Ten.

These two milestones are a good opportunity to take stock – or just to take a quick look at what’s changed in the “Web of Data” only one year later.

The 2008 Top Ten featured the following applications, listed in the original ReadWriteWeb order and enriched with some personal opinions.

Yahoo Search Monkey

It’s great. Search Monkey represents the first of a new generation of search engines thanks to its capability to be fully customized by third-party developers. Recently, breaking news woke up the “sem webbers” of the whole planet: Yahoo started to show structured data exposed with RDFa on the search results page. The news bounced all over the Web, and those interested in SEO started to appreciate Semantic Web technologies for their business. But, unfortunately, at the moment I’m writing, RDFa is no longer shown in search results due to a layout update that broke this functionality. Even if there are rumors of an imminent fix, the main problem is the robustness and reliability of this kind of service: investors need to be properly assured of the effectiveness of their investments.


Probably, this neat application became really popular when it was acquired by Microsoft. It allows you to make simple natural-language queries like “films where Kevin Spacey acted” and, at first glance, the results seem much better than those of traditional search engines. Honestly, I don’t really know what technologies they are using to do this magic. But it would be nice to compare their results with a hypothetical service that translates such human text queries into a set of SPARQL queries over DBpedia. Anyone interested in doing that? I’ll be more than happy to be engaged in a project like that.

Open Calais

With a large and massive branding operation these guys built the image of this service as if it were the only one fitting everyone’s needs for semantic enrichment of unstructured free text. Even if this is partly true (why not mention the Apache UIMA Open Calais annotator?), there are a lot of other interesting services that are, in certain respects, more intriguing than the Reuters one. Don’t believe me? Give AlchemyAPI a try.


I have to admit my ignorance here: I had never heard of it, but it looks very, very interesting. This service, which mainly offers some sort of semantic advertising, is certainly more than promising. I’ll keep an eye on it.


Down at the moment I’m writing. 😦


Many friends of mine are using it, and this could be enough to give it popularity. Again, I don’t know if they are using any of the W3C Semantic Web technologies to model their data. RDF or not, this is a neat example of a semantic web application with good potential: is that enough for you?


Another case of personal ignorance. This magic is, mainly, a restaurant review site. BooRah uses semantic analysis and natural language processing to aggregate reviews from food blogs. Because of this, BooRah can recognize praise and criticism in these reviews and then rate restaurants accordingly. One criticism? The underlying data are perhaps not that rich. It sounds impossible to me that searching for “Pizza in Italy” returns nothing.

Blue Organizer (or GetGlue?)

It’s not a secret that I consider Glue one of the most innovative and intriguing things on the Web. When it appeared in the ReadWriteWeb Top 10 Semantic Web applications it was far from what it is now. Just one year later, GetGlue (Blue Organizer seems to be the former name) appears as a growing, lively community of people who have realized how important it is to navigate the Web with the aid of a tool that acts as a content cross-recommender. Moreover, GetGlue provides a neat set of Web APIs that I’m using widely within the NoTube project.


A clear idea, powerful branding, and a well-designed set of services accessible via Web APIs make Zemanta one of the most successful products on the stage. Do I have to say anything more? If you like Zemanta, I suggest you also keep an eye on Loomp, a nice tool presented at the European Semantic Technology Conference 2009.

Mainly, a semantic search engine over a huge database containing more than 400,000 hotels in the US. Where’s the semantics there? It crawls and semantically extracts the information implicitly hidden in those records. A good example of how innovative technologies can be applied to well-known application domains such as hotel search.

On year later…

Undoubtedly, 2009 has been ruled by the Linked Data Initiative, as I love to call it. Officially, Linked Data is about “using the Web to connect related data that wasn’t previously linked, or using the Web to lower the barriers to linking data currently linked using other methods” and, looking at its growth rate, it is easy to bet on its success.

Here is the 2009 top ten, where I have omitted GetGlue, Zemanta and OpenCalais since they already appeared in the 2008 edition:

Google Search Options and Rich Snippets

When this new Google feature was announced, the whole Semantic Web community realized that something very powerful had started to move. Google Rich Snippets uses the RDFa contained in HTML Web pages to power the rich snippets feature.


It’s a very, very nice feed aggregator built upon Google Reader, Twitter and FriendFeed. It’s easy to use, nice and really useful (well, at least it seems so to me) but, unfortunately, I cannot see where the Semantic aspect is here.


This cool JavaScript tool allows publishers to add contextual information to links via pop-ups which display when users hover over or click on them. Watching HTML pages built with the aid of this tool, Apture closely reminds me of the WordPress Snap-Shot plugin. But Apture seems richer than Snap-Shot, since it allows publishers to directly add links and other things they want to display when the pages are rendered.

BBC Semantic Music Project

Built upon (one of the most representative Linked Data clouds) it’s a very remarkable initiative. Personally, I’m using it within the NoTube project to disambiguate bands. Concretely, given a certain band identifier, I query the BBC /music service, which returns a URI. With this URI I ask the service to give me other URIs referring to the same band. In this way I can associate with every band a set of Linked Data URIs from which to obtain a full flavor of coherent data about them.


It’s an open, semantically marked-up shared database powered by a great company based in San Francisco. Its popularity is growing fast, as ReadWriteWeb has already noticed. Somewhat similar to Wikipedia, Freebase provides all the mechanisms necessary to syndicate its data in machine-readable form – mainly, as RDF. Moreover, other Linked Data clouds have started to add owl:sameAs links to Freebase: do I have to add anything else?


DBpedia is the nucleus of the Web of Data. The only thing I’d like to add is: it deserves to be in the ReadWriteWeb 2009 top ten more than any of the others.

It’s a remarkable US government initiative to “increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government”. It’s a start, and I dream of seeing something like this here in Italy.

So what’s up in the end?

It’s my opinion that 2009 has been the year of Linked Data. New clouds are born every month, new links between the existing ones are established, and a new breed of developers is becoming aware of the potential and the pitfalls of Linked Data-consuming applications. It seems that the Web of Data is finally taking shape, even if something strange is still in the air. First of all, taking a closer look at the ReadWriteWeb 2009 Top Ten, I have to underline that 3 products out of 10 were already in the 2008 chart. Maybe the popular blog wanted to stress the progress these products have made, but it sounds a bit strange to me that they forgot nice products such as FreeMix, AlchemyAPI, Sindice, OpenLink Virtuoso and the usage of the GoodRelations ontology. Secondly, 3 products listed in the 2009 chart are publicly funded initiatives; even if this is reasonable given the nature of the products, it leaves me with the impression that private investors are not in the loop yet.

What do I expect from 2010, then?

A large and massive rush to using RDFa for SEO purposes, sustained growth of the Linked Data clouds and, I really hope, the rise of a new application paradigm grounded in the consumption of such interlinked data.

Posted at 14:10

Davide Palmisano: the italian political activism and the semantic web

Beppe Grillo


A couple of years ago, during his live show, the popular Italian blogger and activist Beppe Grillo gave a quick demonstration of how the Web concretely realizes the “six degrees of separation”. The Italian blogger, today a Web enthusiast, showed that it was possible for him to get in contact with someone very famous using a couple of different websites: IMDb, Wikipedia and a few others. Starting from a movie in which he had acted, he could reach the movie’s producer, the producer could be in contact with another actor through previous work together, and so on.

The demonstration consisted of a series of links, opened one after another, leading to Web pages containing information from which the relationships the showman wanted could be extracted.

This gig came back to my mind while I was thinking about how what I call the “Linked Data Philosophy” is impacting the traditional Web, and I imagined what Beppe Grillo could show nowadays.

Just the following simple, trivial and short SPARQL query:

construct {
    ?actor1 foaf:knows ?actor2
} where {
    ?movie dbpprop:starring ?actor1 .
    ?movie dbpprop:starring ?actor2 .
    ?movie a dbpedia-owl:Film .
    FILTER(?actor1 = <>)
}

Although Beppe is a great comedian, it may be hard even for him to make people laugh with this. But the point here is not laughs; it is data: in this sense, the Web of Data provides an outstanding and extremely powerful way to access an incredible twine of machine-readable interlinked data.

Recently, another nice and remarkable Italian initiative appeared on the Web. It is, mainly, a service where the Italian congressmen are displayed and positioned on a chart based on the similarity of their votes on law proposals.

OK. Cool. But how could the Semantic Web improve this?

First of all, it would be very straightforward to provide a SPARQL endpoint serving some good RDF for this data, like the following example:

    <rdf:Description rdf:about="">
        <rdf:type rdf:resource=""/>
        <foaf:name>Mario Rossi</foaf:name>
        <openp:politicalGroup rdf:resource=""/>
        <owl:sameAs rdf:resource=""/>
    </rdf:Description>

where names, descriptions, political affiliation and more are provided. Moreover, a property called openp:similarity could be used to map closer congressmen, using the same information as the already cited chart.

Secondly, all the information about congressmen is published on the official Italian chambers’ web site. Wrapping this data could provide an extremely exhaustive set of official information and, more importantly, links to DBpedia would be the key to getting a full set of machine-processable data from other Linked Data clouds as well.

How to benefit from all of this? Apart from employing a cutting-edge technology to syndicate data, anyone who wants to link to the provided data on their own web pages can easily do so using RDFa.
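For illustration, an HTML fragment annotated with RDFa might look like the following sketch; the resource URI and the use of foaf:name here are assumptions, since the article does not give the actual vocabulary:

```html
<!-- hypothetical RDFa markup linking a news page to a congressman's URI -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="http://example.org/congressman/mario-rossi">
  The congressman <span property="foaf:name">Mario Rossi</span>
  voted in favor of the proposal.
</div>
```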

With these technologies as a basis, a new breed of applications (web crawlers, for example, for those interested in SEO) will access and process these data in a new and extremely powerful way.

It is time for those guys to embrace the Semantic Web, isn’t it?

Posted at 14:10

Libby Miller: TIL: Gifski

For a presentation at work where it’s tricky to add video but an image is OK, gifski worked brilliantly for converting a video to a GIF. Even with the defaults it was fine. I needed to tweak it a bit to make the output a little smaller; -W worked great for that, but there are a bunch of other ways too.
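The workflow can be sketched as follows; the file names and width are placeholders, and this assumes the common frame-based route (gifski can also read video directly when built with video support):

```shell
# extract PNG frames from the video with ffmpeg,
# then assemble them into a GIF, using -W to limit the output width
ffmpeg -i input.mp4 frame%04d.png
gifski -W 640 -o output.gif frame*.png
```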

Here’s one of the Montpelier partridge from January last year.

Posted at 14:09

Libby Miller: #Mayke Day 4 – TTN and LoRaWAN – TiL

Tarim and I have been trying to get a LoRaWAN network up and running in Bristol using some of the old Bristol Wireless antenna locations. First step for me was in January when we got together and tried to get a Raspberry Pi Gateway working, with so much #fayle – a subtly broken Pi, a dodgy PSU connector, and I did not know that the Raspberry Pi imager process had changed for Bullseye (you have to set a user in settings, and enable ssh there – you can also put the wifi details in, so it’s handy if you know about it).

Aaanyway for #mayke (now on Mastodon) I’ve been trying for a couple of days to get a TTGO LoRa32 OLED v1.3(?) I bought ages ago to work with the Pi gateway. In summary: argh. There are so many partial examples around, different names for things, and allsorts. But here are some notes on what works.

On the Raspberry Pi: a 3B+ and an iC880A board that Tarim had – then install Bullseye (with ssh access, wifi and a pi user) and then install using The Things Network (TTN)’s example gateway instructions. All fine. My only daftness here was finding the command /opt/ttn-station/bin/station -p and assuming (why?) that I was tailing the logs, when I was actually running another instance on top of the systemctl one. Which led to all sorts of weird errors, including ones related to not resetting the device, e.g.


[HAL:INFO] [lgw_spi_close:159] Note: SPI port closed
[lgw_start:764] Failed to setup sx125x radio for RF chain 0

The TTGO was more tricky. There seem to be multiple libraries at multiple levels of abstraction and I wanted one that was Arduino-IDE compatible. It’s really hard to find out what pin mapping you need for these slightly obscure (and superseded) TTGO boards. Then there’s the difference between the LoRaWAN 1.0.3 and 1.1 specifications. After a while I realised that the MCCI_LoRaWAN_LMIC_library (0.9.2) I was using in the code I had found on the internet was made for 1.0.3 – and then configuring a TTN device was much easier, with fewer baffling options.

One final self-own came from my frenetic searching of forums looking for a bit of code with the right pin mapping for the TTGO.

I somehow found some old code (I think it was this – don’t use it, it’s 5 years old! – which I think is based on an old version of this, but adapted for the TTGO) which didn’t recognise all the event types from TTN. Updated below, basically adding this in setup(), and LMIC_setLinkCheckMode(1) again in case EV_JOINED. Thank you TTN forum users, and again.

A couple more things – though there are probably more I’ve forgotten.

  1. The gateway was ok to set up on the TTN console, but setting up devices was not – all the names for the different device ids were completely baffling and seem to have changed over time. You also need to set up an application before you can add a device. Two key learnings: (a) you can get the little/big-endianness and the right format for the ids by clicking on the ids themselves in the console (see image below), and (b) the Gateway has the JoinEUI you need to set up a device (check the Gateway’s messages for this, see image below).
  2. You HAVE TO hand edit ./project_config/lmic_project_config.h in MCCI_LoRaWAN_LMIC_library on your machine to pick the right region (on a mac, mine was in /Users/[me]/Documents/Arduino/libraries/MCCI_LoRaWAN_LMIC_library/project_config/lmic_project_config.h)

Formatting endianness and chars

LSB is little-endian, MSB is big-endian, and the <> toggle switches between bytes with the preceding 0x business and without. DEVEUI and APPEUI are little-endian and APPKEY is big-endian.
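As a sanity check on that, here's a tiny Python sketch (not from the original post; the example EUI is made up) showing what the console's LSB/MSB toggle is doing, by converting a hex id into the C initializer list the Arduino sketch needs:

```python
def hex_id_to_c_array(hex_str, lsb=True):
    """Turn a hex id as shown in the TTN console into a C initializer list.

    lsb=True reverses the byte order, which is what DEVEUI and APPEUI
    need; APPKEY stays in MSB (as-shown) order.
    """
    data = bytes.fromhex(hex_str)
    if lsb:
        data = data[::-1]  # little-endian: least-significant byte first
    return ", ".join("0x%02X" % b for b in data)

# Made-up example EUI, not a real device:
print(hex_id_to_c_array("70B3D57ED0001234"))             # for DEVEUI / APPEUI
print(hex_id_to_c_array("70B3D57ED0001234", lsb=False))  # for APPKEY
```

The first line is what you'd paste into the DEVEUI/APPEUI arrays, the second is the APPKEY style – which matches what clicking the ids in the console gives you.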

JoinEUI for devices is in the gateway messages like this:

I somewhat enjoyed the detective work and even read some of TFM. So a happy #mayke for me.

The final code I used:

// MIT License
// Based on examples from
// Copyright (c) 2015 Thomas Telkamp and Matthijs Kooijman

#include <Arduino.h>
#include "lmic.h"
#include <hal/hal.h>
#include <SPI.h>

#define LEDPIN 2

unsigned int counter = 0;
char TTN_response[30];

// This EUI must be in little-endian format, so least-significant-byte
// first. When copying an EUI from ttnctl output, this means to reverse
// the bytes.

// Copy the value from Device EUI from the TTN console in LSB mode.
static const u1_t PROGMEM DEVEUI[8]= { 0x.., 0x.., .. };
void os_getDevEui (u1_t* buf) { memcpy_P(buf, DEVEUI, 8);}

// Copy the value from Application EUI from the TTN console in LSB mode
static const u1_t PROGMEM APPEUI[8]= { 0x.., 0x.., .. };
void os_getArtEui (u1_t* buf) { memcpy_P(buf, APPEUI, 8);}

// This key should be in big endian format (or, since it is not really a
// number but a block of memory, endianness does not really apply). In
// practice, a key taken from ttnctl can be copied as-is. Anyway, it’s in MSB mode.
static const u1_t PROGMEM APPKEY[16] = { 0x.., .. };
void os_getDevKey (u1_t* buf) { memcpy_P(buf, APPKEY, 16);}

static osjob_t sendjob;

// Schedule TX every this many seconds (might become longer due to duty
// cycle limitations).
const unsigned TX_INTERVAL = 120;

// Pin mapping
const lmic_pinmap lmic_pins = {
    .nss = 18,
    .rxtx = LMIC_UNUSED_PIN,
    .rst = 14,
    .dio = {26, 33, 32}  // Pins for the Heltec ESP32 Lora board / TTGO Lora32 with 3D metal antenna
};

void do_send(osjob_t* j){
    // Payload to send (uplink)
    static uint8_t message[] = "Hello OTAA!";

    // Check if there is not a current TX/RX job running
    if (LMIC.opmode & OP_TXRXPEND) {
        Serial.println(F("OP_TXRXPEND, not sending"));
    } else {
        // Prepare upstream data transmission at the next possible time.
        LMIC_setTxData2(1, message, sizeof(message)-1, 0);
        Serial.println(F("Sending uplink packet..."));
        digitalWrite(LEDPIN, HIGH);
    }
    // Next TX is scheduled after TX_COMPLETE event.
}

void onEvent (ev_t ev) {
    Serial.print(os_getTime());
    Serial.print(": ");
    switch(ev) {
        case EV_SCAN_TIMEOUT:
        case EV_BEACON_FOUND:
        case EV_BEACON_MISSED:
        case EV_BEACON_TRACKED:
        case EV_JOIN_FAILED:
        case EV_REJOIN_FAILED:
        case EV_LOST_TSYNC:
        case EV_RESET:
        case EV_RXCOMPLETE:
            // data received in ping slot
        case EV_LINK_DEAD:
        case EV_LINK_ALIVE:
        case EV_SCAN_FOUND:
        case EV_TXSTART:
        case EV_TXCANCELED:
        case EV_RXSTART:
            // do not print anything -- it wrecks timing
            break;

        case EV_TXCOMPLETE:
            Serial.println(F("EV_TXCOMPLETE (includes waiting for RX windows)"));

            if (LMIC.txrxFlags & TXRX_ACK) {
              Serial.println(F("Received ack"));
            }

            if (LMIC.dataLen) {
              int i = 0;
              Serial.print(F("Data Received: "));
              Serial.write(LMIC.frame+LMIC.dataBeg, LMIC.dataLen);
              Serial.println();

              for ( i = 0 ; i < LMIC.dataLen ; i++ )
                TTN_response[i] = LMIC.frame[LMIC.dataBeg+i];
              TTN_response[i] = 0;
            }

            // Schedule next transmission
            os_setTimedCallback(&sendjob, os_getTime()+sec2osticks(TX_INTERVAL), do_send);
            digitalWrite(LEDPIN, LOW);
            break;

        case EV_JOINING:
            Serial.println(F("EV_JOINING: -> Joining..."));
            break;

        case EV_JOINED:
            Serial.println(F("EV_JOINED"));
            // Re-enable link check validation now we have joined (the TTN forum fix)
            LMIC_setLinkCheckMode(1);
            break;

        default:
            Serial.println(F("Unknown event"));
            break;
    }
}

void setup() {
    Serial.begin(115200);
    delay(2500);                      // Give time to the serial monitor to pick up

    // Use the Blue pin to signal transmission.
    pinMode(LEDPIN, OUTPUT);

    // LMIC init
    os_init();

    // Reset the MAC state. Session and pending data transfers will be discarded.
    LMIC_reset();
    LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);
    // Set up the channels used by the Things Network, which corresponds
    // to the defaults of most gateways. Without this, only three base
    // channels from the LoRaWAN specification are used, which certainly
    // works, so it is good for debugging, but can overload those
    // frequencies, so be sure to configure the full frequency range of
    // your network here (unless your network autoconfigures them).
    // Setting up channels should happen after LMIC_setSession, as that
    // configures the minimal channel set.

    LMIC_setupChannel(0, 868100000, DR_RANGE_MAP(DR_SF12, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(1, 868300000, DR_RANGE_MAP(DR_SF11, DR_SF7B), BAND_CENTI);      // g-band
    LMIC_setupChannel(2, 868500000, DR_RANGE_MAP(DR_SF10, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(3, 867100000, DR_RANGE_MAP(DR_SF9, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(4, 867300000, DR_RANGE_MAP(DR_SF8, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(5, 867500000, DR_RANGE_MAP(DR_SF7, DR_SF7),  BAND_CENTI);      // g-band
    LMIC_setupChannel(6, 867700000, DR_RANGE_MAP(DR_SF7, DR_SF7),  BAND_CENTI);      // g-band

    // TTN defines an additional channel at 869.525MHz using SF9 for class B
    // devices' ping slots. LMIC does not have an easy way to define this
    // frequency, and support for class B is spotty and untested, so this
    // frequency is not configured here.

    // Disable link check validation
    //LMIC_setClockError(MAX_CLOCK_ERROR * 1 / 100);

    // TTN uses SF9 for its RX2 window.
    LMIC.dn2Dr = DR_SF9;

    // Set data rate and transmit power for uplink (note: txpow seems to be ignored by the library)
    LMIC_setDrTxpow(DR_SF7, 14);

    // Start job
    do_send(&sendjob);     // Will fire up also the join
}

void loop() {
    os_runloop_once();
}
Posted at 14:09

Libby Miller: Time squish

I keep seeing these two odd time effects in my life and wondering if they are connected.

The first is that my work-life has become either extremely intense – and I don’t mean long hours, I mean intense brainwork for maybe a week, which wipes me out – or, the following week, inevitably slower and less intense. Basically everything gets bunched up together. I feel like this has something to do with everyone working from home, but I’m not really sure how to explain it (though it reminds me of my time at Joost, where we’d have an intense series of meetings with everyone together every few months, because we were distributed. But this type is not organised, it just happens). My partner pointed out that this might simply be poor planning on my part (thanks! I’m quite good at planning actually).

The second is something we’ve noticed at the Cube – people are not committing to doing stuff (coming to an event, volunteering etc) until very close to the event. Something like 20-30% of our tickets for gigs are being sold the day before or on the day. I don’t think it’s people waiting for something better. I wonder if it’s Covid-related uncertainty? (also 10-15% don’t turn up, not sure if that’s relevant).

Anyone else seeing this type of thing?

Posted at 14:09

Libby Miller: Sparkfun Edge, MacOS X, FTDI

More for my reference than anything else. I’ve been trying to get the toolchain set up to use a Sparkfun Edge. I had the Edge, the Beefy3 FTDI breakout, and a working USB cable.

Blurry pic of cats taken using Sparkfun Edge and HIMAX camera

This worked great for the speech example for me (although the actual tensorflow part never understands my “yes”, “no” etc – but anyway, I was able to successfully upload it):

$ git clone --depth 1
$ cd tensorflow
$ gmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge micro_speech_bin
$ cp tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/ tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/ --bin tensorflow/lite/micro/tools/make/gen/sparkfun_edge_cortex-m4_micro/bin/micro_speech.bin --load-address 0xC000 --magic-num 0xCB -o main_nonsecure_ota --version 0x0
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/ --load-address 0x20000 --bin main_nonsecure_ota.bin -i 6 -o main_nonsecure_wire --options 0x1
$ export BAUD_RATE=921600
$ export DEVICENAME=/dev/cu.usbserial-DN06A1HD
$ python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/ -b ${BAUD_RATE} ${DEVICENAME} -r 1 -f main_nonsecure_wire.bin -i 6

But then I couldn’t figure out how to generalise it to use other examples – I wanted to use the camera because ages ago I bought a load of tiny cameras to use with the Edge.

So I tried this guide, but couldn’t figure out where the installer had put the compiler. Seems basic but….??

So in the end I used the first instructions to download the tools, and then the second to actually do the compilation and installation on the board.

$ find . | grep lis2dh12_accelerometer_uart
# you might need this - 
# mv tools/apollo3_scripts/ tools/apollo3_scripts/ 
$ cd ./tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/boards_sfe/edge/examples/lis2dh12_accelerometer_uart/gcc/
$ export PATH="/Users/libbym/personal/mayke2021/tensorflow/tensorflow/lite/micro/tools/make/downloads/gcc_embedded/bin/:$PATH"
$ make clean
$ make COM_PORT=/dev/cu.usbserial-DN06A1HD bootload_asb ASB_UPLOAD_BAUD=921600

etc. Your COM port will be different, find it using

ls /dev/cu*

If like me the FTDI serial port KEEPS VANISHING ARGH – this may help (I’d installed 3rd party FTDI drivers ages ago and they were conflicting with the Apple’s ones. Maybe. Or the reboot fixed it. No idea).

Then you have to use a serial programme to get the image. I used the Arduino serial monitor since it was there, and then copied and pasted the output into a textfile, at which point you can use


to convert it to a png. Palavers.

Posted at 14:09

Libby Miller: ESP32 M5StickC, https, websockets, and Slack

I got one of these lovely M5StickCs for a present, and had a play with it as part of Makevember. I wanted to make a “push puppet” (one of those toys that you push upwards and they collapse) that reacted to Slack commands. Not for any reason really, though I like the idea of tiny colleagues that stand up when addressed on slack. Makevember doesn’t need a reason. Or at any rate, it doesn’t need a good reason.

Here are some notes about https and websockets on the ESP32 pico which is the underlying board for the M5StickC.

I made a “slack wobbler” a couple of years ago, also in makevember – an ESP8266 that connected to slack, then wobbled a servo when someone was mentioned. Since then I ran into some https problems, obviously also encountered by Jeremy21212121, who fixed it using a modified version of a websockets server. This works for the ESP8266 – it turns out you can also get the same result using httpsClient.setInsecure() with BearSSL. I’ve put an example of that here.

For ESP32 it seems a bit different. As far as I can tell you need the certificate, not the fingerprint, in this case. You can get it using openssl s_client -connect

For ESP32 you also need to use the correct libraries for wifi and wifimulti. The websocket client library is this one.

And a final note – the M5StickC is very cool but doesn’t enable you to use many of its GPIO ports. The only one I can find that allows you to use a servo directly is on the Grove connector, which I bodged some female jumper wires into, though you can get a grove to servo converter (there are various M5Stick hats you can use for multiple servos). Here’s some code. And a video.

Posted at 14:09

Libby Miller: Sock-puppet – an improved, simpler presence robot

Makevember and lockdown have encouraged me to make an improved version of libbybot, which is a physical version of a person for remote participation. I’m trying to think of a better name – she’s not all about representing me, obviously, but anyone who can’t be somewhere but wants to participate. [update Jan 15: she’s now called “sock_puppet”].

This one is much, much simpler to make, thanks to the addition of a pan-tilt hat and a simpler body. It’s also more expressive thanks to these lovely little 5×5 LED matrices.

Her main feature is that – using a laptop or phone – you can see, hear and speak to people in a different physical place to you. I used to use a version of this at work to be in meetings when I was the only remote participant. That’s not much use now of course. But perhaps in the future it might make sense for some people to be remote and some present.

New recent features:

  • easy to make*
  • wears clothes**
  • googly eyes
  • expressive mouth (moves when the remote participant is speaking, can be happy, sad, etc, whatever can be expressed in 25 pixels)
  • can be “told” wifi details using QR codes
  • can move her head a bit (up / down / left / right)

* ish
**a sock

I’m still writing docs, but the repo is here.

Libbybot-lite – portrait by Damian

Posted at 14:09

Libby Miller: Libbybot – a posable remote presence bot made from a Raspberry Pi 3 – updates

A couple of people have asked me about my presence-robot-in-a-lamp, libbybot – unsurprising at the moment maybe – so I’ve updated the code in github to use the most recent RTCMultiConnection (webRTC) library and done a general tidy up.

I gave a presentation at EMFCamp about it a couple of years ago – here are the slides:


Posted at 14:09

Andrew Matthews: Knowledge Graphs 101

This is the first in a short series introducing Knowledge Graphs. It covers just the basics, showing how to write, store, query and work with graph data using RDF (short for Resource Description Framework). I will keep it free of theory and interesting but unnecessary digressions. Let me know in the comments if you find […]

Posted at 14:09

Andrew Matthews: Preparing a Project Gutenberg ebook for use on a 6″ ereader

For a while I’ve been trying to find a nice way to convert project Gutenberg books to look pleasant on a BeBook One. I’ve finally hit on the perfect combination of tools, that produces documents ideally suited to 6″ eInk ebook readers like my BeBook. The tool chain involves using GutenMark to convert the file […]

Posted at 14:09

Andrew Matthews: Some pictures of Carlton Gardens

Carlton Gardens, a set on Flickr. This was my first outing with the Pentax K-x that I got recently. In these pictures, I’m trying to get to grips with the camera, so I didn’t have any particular objective other than to take pictures. The light was so harsh it was very difficult for me to […]

Posted at 14:09

Andrew Matthews: Note to Self: Convert UTF-8 w/ BOM to ASCII (WIX + DB) using GNU uconv

This one took me a long time to work out, and it took a non-latin alphabet user (Russian) to point me at the right tools. Yet again, I’m guilty of being a complacent anglophone. I was producing a database installer project using WIX 3.5, and ran into all sorts of inexplicable problems, which I finally […]

Posted at 14:09

Andrew Matthews: Automata-Based Programming With Petri Nets – Part 1

Petri Nets are extremely powerful and expressive, but they are not as widely used as state machines. That's a pity, as they allow us to solve problems beyond the reach of state machines. This post is the first in a mini-series on software development with Petri Nets. All of the code for a full, feature-complete Petri Net library is available online on GitHub. You're welcome to take a copy, play with it and use it in your own projects.

Posted at 14:09

Andrew Matthews: Quantum Reasoners Hold Key to Future Web

Last year, a company called DWave Systems announced their quantum computer (the ‘Orion’) – another milestone on the road to practical quantum computing. Their controversial claims seem worthy in their own right but they are particularly important to the semantic web (SW) community. The significance to the SW community was that their quantum computer solved […]

Posted at 14:09

Andrew Matthews: Semantic Overflow Highlights I

Semantic Overflow has been active for a couple of weeks. We now have 155 users and 53 questions. We’ve already had some very interesting questions and some excellent detailed and thoughtful responses. I thought, on Egon’s instigation, to bring together, from the site’s BI stats, some of the highlights of last week. The best loved […]

Posted at 14:09

Andrew Matthews: – the Web 2.0 Q&A site for all things Web 3.0. is a new site based on the hugely popular, devoted to Q&A on anything related to the semantic web. The site is very new (created today) and I’m trying to get as many people to visit as I can, so please come and post your questions and together we’ll create a thriving community […]

Posted at 14:09

Andrew Matthews: Quote of the Day – Chris Sells on Cocktail Parties

I can relate to this: I’ll take a lake of fire any day over more than three strangers in a room with which I share no common task and with whom I’m expected to socialize How to express this to my wife without her thinking that I am suffering from a combination of acrophobia and […]

Posted at 14:09

Andrew Matthews: Australian Port – a new WMD?

Proving that Cockroaches are not indestructible, Kerry neatly (if inadvertently) demonstrated that Australian port is capable of killing things that heat, cold and lethal levels of ionizing radiation cannot. Of course Kerry was gagging for days just at the thought that the thing had been in her glass all along – it probably hadn’t – […]

Posted at 14:09

Andrew Matthews: Relational Modeling? Not as we know it!

... there's plenty of ways that RDF specifically addresses the problems it seeks to address - data interchange, standards definition, KR, mashups - in a distributed web-wide way. RDBMSs address the problems that were faced by programmers at the coal face in the 60s and 70s - Efficient, Standardized, platform-independent data storage and retrieval. The imperative that created a need for RDBMSs in the 60s is not going away, so I doubt databases will be going away any time soon either. In fact they can be exposed to the world as triples without too much trouble. The problem is that developers need more than just data storage and retrieval. They need intelligent data storage and retrieval.

Posted at 14:09

Andrew Matthews: Pattern Matching in C#

I recently used Matthew Podwyszocki’s pattern matching classes for a top level exception handler in an App I’m writing. Matthew’s classes are a really nice fluent interface attaching predicates to functions generating results. I used it as a class factory to select between handlers for exceptions. Here’s an example of how I used it: ExceptionHandler […]

Posted at 14:09

Andrew Matthews: Object Orientation? Not as we know it.

I thought I’d start with a lyric: That one’s my mother and That one’s my father and The one in the hat, that’s me. You could be forgiven for wondering what Ani Difranco has to do with this blog’s usual themes, but rest assured, I won’t stray too far. My theme today is the limitations […]

Posted at 14:09

Andrew Matthews: New Resources for LinqToRdf

John Mueller recently sent through a link to a series of articles on working with RDF. As well as being a useful introduction to working with RDF, they use LinqToRdf for code examples. Modeling your Data with RDF (Part 1) Understanding and Using Resource Description Framework Files (Part 2) They provide information on hosting RDF […]

Posted at 14:09

Andrew Matthews: Not another mapping markup language!

Kingsley Idehen has again graciously given LinqToRdf some much needed link-love. He mentioned it in a post that was primarily concerned with the issues of mapping between the ontology, relational and object domains. His assertion is that LinqtoRdf, being an offshoot of an ORM related initiative, is reversing the natural order of mappings. He believes […]

Posted at 14:09

Copyright of the postings is owned by the original blog authors. Contact us.