Planet RDF

It's triples all the way down

September 22

Michael Hausenblas: Cloud Cipher Capabilities

… or, the lack thereof.

A recent discussion with a customer prompted me to take a closer look at support for encryption in the context of XaaS cloud service offerings as well as in Hadoop. In general, this can be broken down into over-the-wire encryption (cf. SSL/TLS) and back-end encryption. While the former is widely used, the latter is rather rare.

Different reasons might exist why one wants to encrypt her data, ranging from preserving a competitive advantage to end-user privacy issues. No matter why someone wants to encrypt the data, the question is: do systems support this (transparently), or are developers forced to implement it in the application logic?

At the IaaS level, especially for file storage used in app development, one would expect wide support for built-in encryption.

On the PaaS level things look pretty much the same: for example, AWS Elastic Beanstalk provides no support for encryption of the data (unless you consider S3), and concerning Google’s App Engine, good practices for data encryption only seem to be emerging.

Offerings on the SaaS level provide an equally poor picture:

  • Dropbox offers encryption via S3.
  • Google Drive and Microsoft Skydrive seem to not offer any encryption options for storage.
  • Apple’s iCloud is a notable exception: not only does it provide support but also nicely explains it.
  • For many if not most of the above SaaS-level offerings there are plug-ins that enable encryption, such as those provided by Syncdocs or CloudFlogger.

In Hadoop-land things also look rather sobering; there are a few activities around making HDFS and the like support encryption, such as eCryptfs or Gazzang’s offering. Last but not least: for Hadoop in the cloud, encryption is available via AWS’s EMR by using S3.

Posted at 01:46

September 18

Ebiquity research group UMBC: paper: Context Sensitive Access Control in Smart Home Environments


Sofia Dutta, Sai Sree Laya Chukkapalli, Madhura Sulgekar, Swathi Krithivasan, Prajit Kumar Das, and Anupam Joshi, Context Sensitive Access Control in Smart Home Environments, 6th IEEE International Conference on Big Data Security on Cloud, May 2020

The rise in popularity of Internet of Things (IoT) devices has opened doors for privacy and security breaches in Cyber-Physical Systems like smart homes, smart vehicles, and smart grids that affect our daily existence. IoT systems are also a source of big data that gets shared via the cloud. IoT systems in a smart home environment have sensitive access control issues since they are deployed in a personal space. The collected data can also be of a highly personal nature. Therefore, it is critical to build access control models that govern who, under what circumstances, can access which sensed data or actuate which physical system. Traditional access control mechanisms are not expressive enough to handle such complex access control needs, warranting the incorporation of new methodologies for privacy and security. In this paper, we propose the PALS system, which builds upon existing work in attribute-based access control models, captures physical context collected from sensed data (attributes), and performs dynamic reasoning over these attributes and context-driven policies using Semantic Web technologies to execute access control decisions. Our mechanism generates access control decisions as a consequence of reasoning over user context, details of the information collected by the cloud service provider, and device type. These access control decisions are supplemented by another sub-system that detects intrusions into smart home systems based on both network and behavioral data. The combined approach serves to determine indicators that a smart home system is under attack, as well as to limit what a data breach from such attacks can achieve.


(Figure: PALS architecture)


Posted at 14:13

Ebiquity research group UMBC: paper: Automating GDPR Compliance using Policy Integrated Blockchain



Abhishek Mahindrakar and Karuna Pande Joshi, Automating GDPR Compliance using Policy Integrated Blockchain, 6th IEEE International Conference on Big Data Security on Cloud, May 2020.

Data protection regulations, like GDPR, mandate security controls to secure personally identifiable information (PII) that users share with service providers. With the volume of shared data reaching exascale proportions, it is challenging to ensure GDPR compliance in real time. We propose a novel approach that integrates a GDPR ontology with blockchain to facilitate real-time, automated data compliance. Our framework ensures that a data operation is allowed only when validated by data privacy policies in compliance with the privacy rules in GDPR. When a valid transaction takes place, the PII data is automatically stored off-chain in a database. Our system, built using Semantic Web technologies and the Ethereum blockchain, includes an access control system that enforces data privacy policy when data is shared with third parties.


Posted at 14:13

Ebiquity research group UMBC: Why does Google think Raymond Chandler starred in Double Indemnity?

In my knowledge graph class yesterday we talked about the SPARQL query language, and I illustrated it with DBpedia queries, including an example getting data about the movie Double Indemnity. I had brought a Google Assistant device and used it to compare its answers to those from DBpedia. When I asked the Google Assistant “Who starred in the film Double Indemnity?”, the first person it mentioned was Raymond Chandler. I knew this was wrong, since he was one of its screenwriters, not an actor, and shared an Academy Award nomination for the screenplay. DBpedia’s data was correct and did not list Chandler as one of the actors.

I did not feel too bad about this — we shouldn’t expect perfect accuracy in these huge, general purpose knowledge graphs and at least Chandler played an important role in making the film.

After class I looked at the Wikidata page for Double Indemnity (Q478209) and saw that it did list Chandler as an actor. I take this as evidence that Google’s Knowledge Graph got this incorrect fact from Wikidata, or perhaps from a precursor, Freebase.

The good news 🙂 is that Wikidata had flagged the claim that Chandler (Q180377) was a cast member of Double Indemnity with a “potential issue”. Clicking on this revealed that the issue was that Chandler was not known to have an occupation that the “cast member” property (P161) expects, which includes twelve types such as actor, opera singer, comedian, and ballet dancer. Wikidata lists Chandler’s occupations as screenwriter, novelist, writer and poet.

More good news 😀 is that the Wikidata fact had provenance information in the form of a reference stating that it came from ČSFD (Q3561957), a “Czech and Slovak web project providing a movie database”. Following the link Wikidata provided eventually led me to the resource, which allowed me to search for and find its Double Indemnity entry. Indeed, it lists Raymond Chandler as one of the movie’s Hrají. All that was left to do was to ask for a translation, which confirmed that Hrají means “starring”.

Case closed? Well, not quite. What remains is fixing the problem.

The final good news 🙂 is that it’s easy to edit or delete an incorrect fact in Wikidata. I plan to delete the incorrect fact in class next Monday. Over the weekend I’ll look into possible options for adding an annotation in some way to ignore the incorrect ČSFD source for Chandler being a cast member.

Some possible bad news 🙁 is that public knowledge graphs like Wikidata might be exploited by unscrupulous groups or individuals in the future to promote false or biased information. Wikipedia is reasonably resilient to this, but the problem may be harder to manage for public knowledge graphs, which get much of their data from other sources that could be manipulated.


Posted at 14:13

Ebiquity research group UMBC: paper: Early Detection of Cybersecurity Threats Using Collaborative Cognition

The CCS Dashboard’s sections provide information on sources and targets of network events, file operations monitored and sub-events that are part of the APT kill chain. An alert is generated when a likely complete APT is detected after reasoning over events.



Sandeep Narayanan, Ashwinkumar Ganesan, Karuna Joshi, Tim Oates, Anupam Joshi and Tim Finin, Early Detection of Cybersecurity Threats Using Collaborative Cognition, 4th IEEE International Conference on Collaboration and Internet Computing, Philadelphia, October 2018.

The early detection of cybersecurity events such as attacks is challenging given the constantly evolving threat landscape. Even with advanced monitoring, sophisticated attackers can spend more than 100 days in a system before being detected. This paper describes a novel, collaborative framework that assists a security analyst by exploiting the power of semantically rich knowledge representation and reasoning integrated with different machine learning techniques. Our Cognitive Cybersecurity System ingests information from various textual sources and stores them in a common knowledge graph using terms from an extended version of the Unified Cybersecurity Ontology. The system then reasons over the knowledge graph that combines a variety of collaborative agents representing host and network-based sensors to derive improved actionable intelligence for security administrators, decreasing their cognitive load and increasing their confidence in the result. We describe a proof of concept framework for our approach and demonstrate its capabilities by testing it against a custom-built ransomware similar to WannaCry.


Posted at 14:13

Ebiquity research group UMBC: paper: Attribute Based Encryption for Secure Access to Cloud Based EHR Systems


Maithilee Joshi, Karuna Joshi and Tim Finin, Attribute Based Encryption for Secure Access to Cloud Based EHR Systems, IEEE International Conference on Cloud Computing, San Francisco, CA, July 2018.

Medical organizations find it challenging to adopt cloud-based electronic medical records services due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient-centric approach for EHR management, where the responsibility of authorizing data access is handled at the patients’ end. This, however, creates a significant overhead for the patient, who has to authorize every access of their health record. This is not practical given the multiple personnel involved in providing care, and because at times the patient may not be in a state to provide this authorization. Hence there is a need to develop a proper authorization delegation mechanism for safe, secure and easy cloud-based EHR management. We have developed a novel, centralized, attribute-based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access to patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of cloud-based EHR access authority to medical providers. In this paper, we describe this novel ABE approach as well as the prototype system that we have created to illustrate it.


Posted at 14:13

Ebiquity research group UMBC: Videos of ISWC 2017 talks

Videos of almost all of the talks from the 16th International Semantic Web Conference (ISWC) held in Vienna in 2017 are online at videolectures.net. They include 89 research presentations, two keynote talks, the one-minute madness event and the opening and closing ceremonies.


Posted at 14:13

Ebiquity research group UMBC: paper: Automated Knowledge Extraction from the Federal Acquisition Regulations System


Srishty Saha and Karuna Pande Joshi, Automated Knowledge Extraction from the Federal Acquisition Regulations System (FARS), 2nd International Workshop on Enterprise Big Data Semantic and Analytics Modeling, IEEE Big Data Conference, December 2017.

With increasing regulation of Big Data, it is becoming essential for organizations to ensure compliance with various data protection standards. The Federal Acquisition Regulations System (FARS) within the Code of Federal Regulations (CFR) includes facts and rules for individuals and organizations seeking to do business with the US Federal government. Parsing and gathering knowledge from such lengthy regulation documents is currently done manually and is time- and human-intensive. Hence, developing a cognitive assistant for automated analysis of such legal documents has become a necessity. We have developed a semantically rich approach to automate the analysis of legal documents and have implemented a system to capture various facts and rules contributing towards building an efficient legal knowledge base that contains details of the relationships between various legal elements, semantically similar terminologies, deontic expressions and cross-referenced legal facts and rules. In this paper, we describe our framework along with the results of automating knowledge extraction from the FARS document (Title 48, CFR). Our approach can be used by big data users to automate knowledge extraction from large legal documents.


Posted at 14:13

Ebiquity research group UMBC: W3C Recommendation: Time Ontology in OWL


The Spatial Data on the Web Working Group has published a W3C Recommendation of the Time Ontology in OWL specification. The ontology provides a vocabulary for expressing facts about relations among instants and intervals, together with information about durations and about temporal position, including date-time information. Time positions and durations may be expressed using either the conventional Gregorian calendar and clock, or another temporal reference system such as Unix time, geologic time, or different calendars.
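As a quick illustration of the vocabulary (my own minimal example, not from the announcement), an interval bounded by two instants can be written in Turtle as:

@prefix time: <http://www.w3.org/2006/time#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <http://example.org/> .

# A meeting modelled as a proper interval bounded by two instants
ex:meeting a time:ProperInterval ;
  time:hasBeginning ex:meetingStart ;
  time:hasEnd ex:meetingEnd .

ex:meetingStart a time:Instant ;
  time:inXSDDateTimeStamp "2017-10-19T09:00:00Z"^^xsd:dateTimeStamp .

ex:meetingEnd a time:Instant ;
  time:inXSDDateTimeStamp "2017-10-19T10:00:00Z"^^xsd:dateTimeStamp .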


Posted at 14:13

August 29

Leigh Dodds: Increasing inclusion around open standards for data

I read an interesting article this week by Ana Brandusescu, Michael Canares and Silvana Fumega. Called “Open data standards design behind closed doors?” it explores issues of inclusion and equity around the development of “open data standards” (which I’m reading as “open standards for data”).

Ana, Michael and Silvana rightly highlight that standards development is often seen and carried out as a technical process, whereas their development and impacts are often political, social or economic. To ensure that standards are well designed, we need to recognise their power, choose when to wield that tool, and ensure that we use it well. The article also asks questions about how standards are currently developed and suggests a framework for creating more participatory approaches throughout their development.

I’ve been reflecting on the article this week alongside a discussion that took place in this thread started by Ana.

Improving the ODI standards guidebook

I agree that standards development should absolutely be more inclusive. I too often find myself in standards discussions and groups with people who look like me and whose experiences may not always reflect those of the people ultimately impacted by the creation and use of a standard.

In the open standards for data guidebook we explore how and why standards are developed to help make that process more transparent to a wider group of people. We also placed an emphasis on the importance of the scoping and adoption phases of standards development because this is so often where standards fail. Not just because the wrong thing is standardised, but also because the standard is designed for the wrong audience, or its potential impacts and value are not communicated.

Sometimes we don’t even need a standard. Standards development isn’t about creating specifications or technology, those are just outputs. The intended impact is to create some wider change in the world, which might be to increase transparency, or support implementation of a policy or to create a more equitable marketplace. Other interventions or activities might achieve those same goals better or faster. Some of them might not even use data(!)

But looking back through the guidebook, while we highlight in many places the need for engagement, outreach, developing a shared understanding of goals and desired impacts and a clear set of roles and responsibilities, we don’t specifically foreground issues of inclusion and equity as much as we could have.

The language and content of the guidebook could be improved. As could some of the prototype tools we included, like the standards canvas. How would those need to change in order to foreground issues of inclusion and equity?

I’d love to get some contributions to the guidebook to help us improve it. Drop me a message if you have suggestions about that.

Standards as shared agreements

Open standards for data are reusable agreements that guide the exchange of data. They shape how I collect data from you, as a data provider. And as a data provider they shape how you (re)present data you have collected and, in many cases will ultimately impact how you collect data in the future.

If we foreground standards as agreements for shaping how data is collected and shared, then to increase inclusion and equity in the design of those agreements we can look to existing work like the Toolkit for Centering Racial Equity which provides a framework for thinking about inclusion throughout the life-cycle of data. Standards development fits within that life-cycle, even if it operates at a larger scale and extends it out to different time frames.

We can also recognise existing work and best practices around good participatory design and research.

We should avoid standards development, as a process, being divorced from broader discussions and best practices around ethics, equity and engagement around data. Taking a more inclusive and equitable approach to standards development is part of the broader discussion around the need for more integration across the computing and social sciences.

We may also need to recognise that sometimes agreements are made that don’t provide equitable outcomes for everyone. We might not be able to achieve a compromise that works for everyone. Being transparent about the goals and aims of a standard, and how it was developed, can help to surface who it is designed for (or not). Sometimes we might just need different standards, optimised for different purposes.

Some standards are more harmful than others

There are many different types of standard. And standards can be applied to different types of data. The authors of the original article didn’t really touch on this within their framework, but I think it’s important to recognise these differences as part of any follow-on activities.

The impacts of a poorly designed standard that classifies people or their health outcomes will be much more harmful than those of a poorly designed data exchange format. See all of Susan Leigh Star‘s work. Or concerns from indigenous peoples about how they are counted and represented (or not) in statistical datasets.

Increasing inclusion can help to mitigate the harmful impacts around data. So focusing on improving inclusion (or recognising existing work and best practices) around the design of standards with greater capacity for harm is important. The skills and experience required to develop a taxonomy are fundamentally different from those required to develop a data exchange format.

Recognising these differences is also helpful when planning how to engage with a wider group of people, as we can identify what help and input is needed: what skills or perspectives are lacking among those leading standards work? What help or support needs to be offered to increase inclusion, e.g. by developing skills, or choosing different collaboration tools or methods of seeking input?

Developing a community of practice

Since we launched the standards guidebook I’ve been wondering whether it would be helpful to have more of a community of practice around standards development. I found myself thinking about this again after reading Ana, Michael and Silvana’s article and the subsequent discussion on twitter.

What would that look like? Does it exist already?

Perhaps supported by a set of learning or training resources that re-purposes some of the ODI guidebook material alongside other resources to help others to engage with and lead impactful, inclusive standards work?

I’m interested to see how this work and discussion unfolds.

Posted at 12:09

August 28

Leigh Dodds: Four types of innovation around data

Vaughn Tan’s The Uncertainty Mindset is one of the most fascinating books I’ve read this year. It’s an exploration of how to build R&D teams drawing on lessons learned in high-end kitchens around the world. I love cooking and I’m interested in creative R&D and what makes high-performing teams work well. I’d strongly recommend it if you’re interested in any of these topics.

I’m also a sucker for a good intellectual framework that helps me think about things in different ways. I did that recently with the BASEDEF framework.

Tan introduces a nice framework in Chapter 4 of the book which looks at four broad types of innovation around food. These are presented as a way to help the reader understand how and where innovation creates impact in restaurants. The four categories are:

  1. New dishes – new arrangements of ingredients, where innovation might be incremental refinements to existing dishes, combining ingredients together in new ways, or using ingredients from different contexts (think “fusion”)
  2. New ingredients – coming up with new things to be cooked
  3. New cooking methods – new ways of cooking things, like spherification or sous vide
  4. New cooking processes – new ways of organising the processes of cooking, e.g. to help kitchen staff prepare a dish more efficiently and consistently

The categories at the top are more evident to the consumer; those lower down, less so. But the impacts of new methods and processes are greater, as they apply across a variety of contexts.

Somewhat inevitably, I found myself thinking about how these categories work in the context of data:

  1. New analyses (“dishes”) – new derived datasets made from existing primary sources, or new ways of combining datasets to create insights. I’ve used the metaphor of cooking to describe data analysis before; those recipes for data-informed problem solving help to document this stage to make it reproducible
  2. New datasets and data sources (“ingredients”) – finding and using new sources of data, like turning image, text or audio libraries into datasets, using cheaper sensors, finding a way to extract data from non-traditional sources, or using phone sensors for earthquake detection
  3. New methods (“cooking methods”) for cleaning, managing or analysing data – which includes things like Jupyter notebooks, machine learning or differential privacy
  4. New processes (“cooking processes”) for organising the collection, preparation and analysis of data – e.g. collaborative maintenance, developing open standards for data or approaches to data governance and collective consent?

The breakdown isn’t perfect, but I found the exercise useful to think through the types of innovation around data. I’ve been conscious recently that I’m often using the word “innovation” without really digging into what that means, how that innovation happens and what exactly is being done differently or produced as a result.

The categories are also useful, I think, in reflecting on the possible impacts of breakthroughs of different types. Or perhaps where investment in R&D might be prioritised and where ensuring the translation of innovative approaches into the mainstream might have most impact?

What do you think?

Posted at 15:07

Leigh Dodds: #TownscaperDailyChallenge

This post is a bit of a diary entry. It’s to help me remember a fun little activity that I was involved in recently.

I’d seen little gifs and screenshots of Townscaper on twitter for months. But then suddenly it was in early access.

I bought it and started playing around. I’ve been feeling like I was in a rut recently and wanted to do something creative. After seeing Jim Rossignol mention playing with townscaper as a nightly activity, I thought I’d do similar.

Years ago I used to do lunchtime hacks and experiments as a way to be a bit more creative than I got to be in my day job. Having exactly an hour to create and build something is a nice constraint. Forces you to plan ahead and do the simplest thing to move an idea forward.

I decided to try lunchtime Townscaper builds. Each one with a different theme. I did my first one, with the theme “Bridge”, and shared it on twitter.

Chris Love liked the idea and suggested adding a hashtag so others could do the same. I hadn’t planned to share my themes and builds every day, but I thought, why not? The idea was to try doing something different after all.

So I tweeted out the first theme using the hashtag.

That tweet is the closest thing I’ve ever had to a “viral” tweet. It’s had 53,523 impressions and over 650 interactions.

Turns out people love Townscaper. And are making lots of cool things with it.

Tweetdeck was pretty busy for the next few days. I had a few people start following me as a result, and suddenly felt a bit pressured. To help orchestrate things and manage my own peace of mind, I did a bit of forward planning.

I decided to run the activity for one week. At the end I’d either hand it over to someone or just step back.

I also spent the first evening brainstorming a list of themes. More than enough to keep me going for the week, so I could avoid having to come up with new themes on the fly. I tried to find a mixture of words that were within the bounds of the types of things you can create in Townscaper, but left room for creativity. In the end I revised and reprioritized the initial list over the course of the week based on how people engaged.

I wanted the activity to be inclusive so came up with a few ground rules: “No prizes, no winners. It’s just for fun.” And some brief guidance about how to participate: post screenshots, use the right hashtags.

I also wanted to help gather together submissions, but didn’t want to retweet or share all of them. So I decided to finally try out creating twitter moments, one for each daily challenge. This added some work as I was always worrying I’d missed something, but it also meant I spent time looking at every build.

I ended up with two template tweets, one to introduce the challenge and one to publish the results. These were provided as a single thread to help weave everything together.

And over the course of a week, people built some amazing things. Take a look for yourself:

  1. Townscaper Daily Challenge #1 – Bridge
  2. Townscaper Daily Challenge #2 – Garden
  3. Townscaper Daily Challenge #3 – Neighbours
  4. Townscaper Daily Challenge #4 – Canal
  5. Townscaper Daily Challenge #5 – Eyrie
  6. Townscaper Daily Challenge #6 – Fortress
  7. Townscaper Daily Challenge #7 – Labyrinth

People played with the themes in interesting ways. They praised and commented on each other’s work. It was one of the most interesting, creative and fun things I’ve done on twitter.

By the end of the week, only a few people were contributing, so it was right to let it run its course. (Although I see that people are still occasionally using the hashtag).

It was a reminder that twitter can be, and often is, a completely different type of social space. A break from the doomscrolling was good.

It also reminded me how much I love creating and making things. So I’ve resolved to do more of that in the future.

Posted at 14:05

August 07

Sebastian Trueg: Protecting And Sharing Linked Data With Virtuoso

Disclaimer: Many of the features presented here are rather new and cannot be found in the open-source version of Virtuoso.

Last time we saw how to share files and folders stored in the Virtuoso DAV system. Today we will protect and share data stored in Virtuoso’s Triple Store – we will share RDF data.

Virtuoso is actually a quadruple-store, which means each triple lives in a named graph. In Virtuoso, named graphs can be public or private (in reality it is a bit more complex than that, but this view of things is sufficient for our purposes): public graphs are readable and writable by anyone who has permission to read or write in general, while private graphs are only readable and writable by administrators and by those who have been granted named-graph permissions. The latter case is what interests us today.

We will start by inserting some triples into a named graph as dba – the master of the Virtuoso universe:

(Screenshots: the Virtuoso SPARQL endpoint with the INSERT query, and its result.)
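The original screenshots are not preserved here, so as a stand-in, an insert of the kind performed might look like this (the graph name is from this article; the triple itself is a hypothetical example):

INSERT DATA {
  GRAPH <urn:trueg:demo> {
    # hypothetical demo triple - any data would do
    <urn:trueg:demo#thing1> <http://www.w3.org/2000/01/rdf-schema#label> "A private triple" .
  }
}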

This graph is now public and can be queried by anyone. Since we want to make it private we quickly need to change into a SQL session, as this part is typically performed by an application rather than manually:

$ isql-v localhost:1112 dba dba
Connected to OpenLink Virtuoso
Driver: 07.10.3211 OpenLink Virtuoso ODBC Driver
OpenLink Interactive SQL (Virtuoso), version 0.9849b.
Type HELP; for help and EXIT; to exit.
SQL> DB.DBA.RDF_GRAPH_GROUP_INS ('http://www.openlinksw.com/schemas/virtrdf#PrivateGraphs', 'urn:trueg:demo');

Done. -- 2 msec.

Now our new named graph urn:trueg:demo is private and its contents cannot be seen by anyone. We can easily test this by logging out and trying to query the graph:

(Screenshots: the query and its now-empty result.)
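A sketch of such a check (run while logged out; the screenshots showed the same thing via the endpoint UI):

# With no authenticated identity, the private graph yields an empty result.
SELECT ?s ?p ?o
WHERE { GRAPH <urn:trueg:demo> { ?s ?p ?o } }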

But now we want to share the contents of this named graph with someone. Like before we will use my LinkedIn account. This time, however, we will not use a UI but Virtuoso’s RESTful ACL API to create the necessary rules for sharing the named graph. The API uses Turtle as its main input format. Thus, we will describe the ACL rule used to share the contents of the named graph as follows.

@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix oplacl: <http://www.openlinksw.com/ontology/acl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<#rule> a acl:Authorization ;
  rdfs:label "Share Demo Graph with trueg's LinkedIn account" ;
  acl:agent <http://www.linkedin.com/in/trueg> ;
  acl:accessTo <urn:trueg:demo> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:PrivateGraphs .

Virtuoso makes use of the ACL ontology proposed by the W3C and extends on it with several custom classes and properties in the OpenLink ACL Ontology. Most of this little Turtle snippet should be obvious: we create an Authorization resource which grants Read access to urn:trueg:demo for agent http://www.linkedin.com/in/trueg. The only tricky part is the scope. Virtuoso has the concept of ACL scopes which group rules by their resource type. In this case the scope is private graphs, another typical scope would be DAV resources.

Given that file rule.ttl contains the above resource we can post the rule via the RESTful ACL API:

$ curl -X POST --data-binary @rule.ttl -H"Content-Type: text/turtle" -u dba:dba http://localhost:8890/acl/rules

As a result we get the full rule resource including additional properties added by the API.

Finally we log in using my LinkedIn identity and are granted read access to the graph:

(Screenshots: SPARQL endpoint login via LinkedIn, followed by the query returning the graph’s triples.)

We see all the original triples in the private graph. And as before with DAV resources, no local account is necessary to get access to named graphs. Of course we can also grant write access, use groups, etc. But those are topics for another day.

Technical Footnote

Using ACLs with named graphs as described in this article requires some basic configuration. The ACL system is disabled by default. In order to enable it for the default application realm (another topic for another day) the following SPARQL statement needs to be executed as administrator:

sparql
prefix oplacl: <http://www.openlinksw.com/ontology/acl#>
with <urn:virtuoso:val:config>
delete {
  oplacl:DefaultRealm oplacl:hasDisabledAclScope oplacl:Query , oplacl:PrivateGraphs .
}
insert {
  oplacl:DefaultRealm oplacl:hasEnabledAclScope oplacl:Query , oplacl:PrivateGraphs .
};

This will enable ACLs for named graphs and SPARQL in general. Finally the LinkedIn account from the example requires generic SPARQL read permissions. The simplest approach is to just allow anyone to SPARQL read:

@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix oplacl: <http://www.openlinksw.com/ontology/acl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<#rule> a acl:Authorization ;
  rdfs:label "Allow Anyone to SPARQL Read" ;
  acl:agentClass foaf:Agent ;
  acl:accessTo <urn:virtuoso:access:sparql> ;
  oplacl:hasAccessMode oplacl:Read ;
  oplacl:hasScope oplacl:Query .

I will explain these technical concepts in more detail in another article.

Posted at 00:33

Sebastian Trueg: Sharing Files With Whomever Is Simple

Dropbox, Google Drive, OneDrive, Box.com – they all allow you to share files with others. But they all do it via the strange concept of public links. Anyone who has such a link has access to the file. At first glance this might seem easy enough, but what if you want to revoke read access for just one of those people? What if you want to share a set of files with a whole group?

I will not answer these questions per se. I will show an alternative based on OpenLink Virtuoso.

Virtuoso has its own WebDAV file storage system built in. Thus, any instance of Virtuoso can store files and serve these files via the WebDAV API (and an LDP API for those interested) and an HTML UI. See below for a basic example:

Virtuoso DAV Browser

This is just your typical file browser listing – nothing fancy. The fancy part lives under the hood in what we call VAL – the Virtuoso Authentication and Authorization Layer.

We can edit the permissions of one file or folder and share it with anyone we like. And this is where it gets interesting: instead of sharing with an email address or a user account on the Virtuoso instance we can share with people using their identifiers from any of the supported services. This includes Facebook, Twitter, LinkedIn, WordPress, Yahoo, Mozilla Persona, and the list goes on.

For this small demo I will share a file with my LinkedIn identity http://www.linkedin.com/in/trueg. (Virtuoso/VAL identifies people via URIs and thus has schemes for all supported services. For a complete list see the Service ID Examples in the ODS API documentation.)

Virtuoso Share File

Now when I log out and try to access the file in question I am presented with the authentication dialog from VAL:

VAL Authentication Dialog

This dialog allows me to authenticate using any of the supported authentication methods. In this case I will choose to authenticate via LinkedIn, which will result in an OAuth handshake followed by being granted read access to the file:

(Screenshots: the LinkedIn OAuth handshake, then access to the file granted.)

It is that simple. Of course these identifiers can also be used in groups, allowing you to share files and folders with a set of people instead of just one individual.

Next up: Sharing Named Graphs via VAL.

Posted at 00:33

Sebastian Trueg: Digitally Sign Emails With Your X.509 Certificate in Evolution

Digitally signing emails is always a good idea. People can verify that you actually sent the mail and they can encrypt emails in return. A while ago Kingsley showed how to sign emails in Thunderbird. I will now follow up with a short post on how to do the same in Evolution.

The process begins with actually getting an X.509 certificate including an embedded WebID. There are a few services out there that can help with this, most notably OpenLink’s own YouID and ODS. The former allows you to create a new certificate based on existing social service accounts. The latter requires you to create an ODS account and then create a new certificate via Profile edit -> Security -> Certificate Generator. In any case make sure to use the same email address for the certificate that you will be using for sending email.

The certificate will actually be created by the web browser, making sure that the private key is safe.

If you are a Google Chrome user you can skip the next step since Evolution shares its key storage with Chrome (and several other applications). If you are a user of Firefox you need to perform one extra step: go to the Firefox preferences, into the advanced section, click the “Certificates” button, choose the previously created certificate, and export it to a .p12 file.

Back in Evolution’s settings you can now import this file:

To actually sign emails with your shiny new certificate stay in the Evolution settings, choose to edit the Mail Account in question, select the certificate in the Secure MIME (S/MIME) section and check “Digitally sign outgoing messages (by default)“:

The nice thing about Evolution here is that in contrast to Thunderbird there is no need to manually import the root certificate which was used to sign your certificate (in our case the one from OpenLink). Evolution will simply ask you to trust that certificate the first time you try to send a signed email:

That’s it. Email signing in Evolution is easy.

Posted at 00:33

Libby Miller: Zoom on a Pi 4 (4GB)

It works using Chromium, not the Zoom app (which only runs on x86, not ARM). I tested it with a two-person, two-video-stream call. You need a screen (I happened to have a spare 7″ touchscreen). You also need a keyboard for the initial setup, and a mouse if you don’t have a touchscreen.

The really nice thing is that Video4Linux (bcm2835-v4l2) support has improved, so it works with both v1 and v2 Raspberry Pi cameras, with no need for options bcm2835-v4l2 gst_v4l2src_is_broken=1 🎉🎉


So:

  • Install Raspbian Buster
  • Connect the screen, keyboard, mouse, camera and speaker/mic. I used a Sennheiser USB speaker/mic and a standard 2.1 Raspberry Pi camera.
  • Boot up. I had to add lcd_rotate=2 in /boot/config.txt for my screen to rotate it 180 degrees.
  • Don’t forget to enable the camera in raspi-config
  • Enable bcm2835-v4l2 by adding it to /etc/modules (sudo nano /etc/modules)
  • I increased swap size using sudo nano /etc/dphys-swapfile -> CONF_SWAPSIZE=2000 -> sudo /etc/init.d/dphys-swapfile restart
  • I increased GPU memory using sudo nano /boot/config.txt -> gpu_mem=512
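The config edits from the list above, consolidated as a rough shell sketch (run as root, e.g. via sudo -i, on Raspbian Buster, then reboot; same values as described):

# Load the V4L2 driver at boot
echo bcm2835-v4l2 >> /etc/modules

# Increase swap to 2GB and restart the swap service
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2000/' /etc/dphys-swapfile
/etc/init.d/dphys-swapfile restart

# Give the GPU more memory; optionally flip the screen 180 degrees
echo gpu_mem=512 >> /boot/config.txt
echo lcd_rotate=2 >> /boot/config.txt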

You’ll need to set up Zoom and pass captchas using the keyboard and mouse. Once you have logged into Zoom you can often ssh in and start it remotely like this:

export DISPLAY=:0.0
/usr/bin/chromium-browser --kiosk --disable-infobars --disable-session-crashed-bubble --no-first-run https://zoom.us/wc/XXXXXXXXXX/join/

Note the url format – this is what you get when you click “join from my browser”. If you use the standard Zoom url you’ll need to click this url yourself, ignoring the Open xdg-open prompts.


You’ll still need to select the audio and start the video, including allowing it in the browser. You might need to select the correct audio and video, but I didn’t need to.

I experimented a bit with an ancient logitech webcam-speaker-mic and the speaker-mic part worked and video started but stalled – which made me think that a better / more recent webcam might just work.

Posted at 00:17

Libby Miller: Removing rivets

I wanted to stay away from the computer during a week off work so I had a plan to fix up some garden chairs whose wooden slats had gone rotten:


Looking more closely I realised the slats were riveted on. How do you get rivets off? I asked my hackspace buddies and Barney suggested drilling them out. They have an indentation in the back and you don’t have to drill very far to get them out.

The first chair took me two hours to drill out 15 rivets, and was a frustrating and sweaty experience. I checked YouTube to make sure I wasn’t doing anything stupid and tried a few different drill bits. My last chair today took 15 minutes, so! My amateurish top tips / reminder for me next time:

  1. Find a drill bit the same size as the hole that the rivet’s gone through
  2. Make sure it’s a tough drill bit, and not too pointy. You are trying to pop off the bottom end of the rivet – it comes off like a ring – and not drill a hole into the rivet itself.
  3. Wear eye protection – there’s the potential for little bits of sharp metal to be flying around
  4. Give it some welly – I found it was really fast once I started to put some pressure on the drill
  5. Get the angle right – it seemed to work best when I was drilling exactly vertically down into the rivet, and not at a slight angle.
  6. Once drilled, you might need to pop them out with a screwdriver or something of the right width plus a hammer


More about rivets.

Posted at 00:17

Peter Mika: Semantic Search Challenge sponsored by Yahoo! Labs

Together with my co-chairs Marko Grobelnik, Thanh Tran Duc and Haofen Wang, we again got the opportunity to organize the 4th Semantic Search Workshop, the premier event for research on retrieving information from structured data collections or text collections annotated with metadata. Like last year, the workshop will take place at the WWW conference, to be held March 29, 2011, in Hyderabad, India. If you wish to submit a paper, there are still a few days left: the deadline is Feb 26, 2011. We welcome both short and long submissions.

In conjunction with the workshop, and with a number of co-organizers helping us, we are also launching a Semantic Search Challenge (sponsored by Yahoo! Labs), which is hosted at semsearch.yahoo.com. The competition will feature two tracks. The first track (entity retrieval) is the same task we evaluated last year: retrieving resources that match a keyword query, where the query contains the name of an entity, possibly with some context (such as “starbucks barcelona”). This year we are adding a new task (list retrieval) that represents the next level of difficulty: finding resources that belong to a particular set of entities, such as “countries in africa”. These queries are more complex to answer since they don’t name a particular entity. Unlike in other similar competitions, the task is to retrieve the answers from a real (messy…) dataset crawled from the Semantic Web. There is a small prize ($500) to be won in each track.

The entry period will start March 1, and run through March 15. Please consider participating in either of these tracks: it’s early days in Semantic Search, and there is so much to discover.

Posted at 00:17

Peter Mika: Microformats and RDFa deployment across the Web

I have presented on previous occasions (at SemTech 2009, SemTech 2010, and later at FIA Ghent 2010 – see slides for the latter – and also at ISWC 2009) some information about microformat and RDFa deployment on the Web. As such information is hard to come by, this has generated some interest from the audience. Unfortunately, Q&A time after presentations is too short to get into details, hence some additional background on how we obtained this data and what it means for the Web. This level of detail is also important for comparing this with information from other sources, where things might be measured differently.

The chart below shows the deployment of certain microformats and RDFa markup on the Web, as a percentage of all web pages, based on an analysis of 12 billion web pages indexed by Yahoo! Search. The same analysis has been done at three different points in time, and therefore the chart also shows the evolution of deployment.

Microformats and RDFa deployment on the Web (% of all web pages)

The data is given below in a tabular format.

Date RDFa eRDF tag hcard adr hatom xfn geo hreview
09-2008 0.238 0.093 N/A 1.649 N/A 0.476 0.363 N/A 0.051
03-2009 0.588 0.069 2.657 2.005 0.872 0.790 0.466 0.228 0.069
10-2010 3.591 0.000 2.289 1.058 0.237 1.177 0.339 0.137 0.159

There are a couple of comments to make:

  • There are many microformats (see microformats.org) and I only include data for the ones that are most common on the Web. To my knowledge at least, all other microformats are less common than the ones listed above.
  • eRDF has been a predecessor to RDFa, and has been obsoleted by it. RDFa is more fully featured than eRDF, and has been adopted as a standard by the W3C.
  • The data for the tag, adr and geo formats is missing from the first measurement.
  • The numbers cannot be aggregated to get a total percentage of URLs with metadata. The reason is that a webpage may contain multiple microformats and/or RDFa markup. In fact, this is almost always the case with the adr and geo microformats, which are typically used as part of hcard. The hcard microformat itself can be part of hatom markup etc.
  • Not all data is equally useful, depending on what you are trying to do. The tag microformat, for example, is nothing more than a set of keywords attached to a webpage. RDFa itself covers data using many different ontologies.
  • The data doesn’t include “trivial” RDFa usage, i.e. documents that only contain triples from the xhtml namespace. Such triples are often generated by RDFa parsers even when the page author did not intend to use RDFa.
  • This data includes all valid RDFa, and not just namespaces or vocabularies supported by Yahoo! or any other company.

The data shows that the usage of RDFa has increased 510% between March, 2009 and October, 2010, from 0.6% of webpages to 3.6% of webpages (or 430 million webpages in our sample of 12 billion). This is largely thanks to the efforts of the folks at Yahoo! (SearchMonkey), Google (Rich Snippets) and Facebook (Open Graph), all of whom recommend the usage of RDFa. The deployment of microformats has not advanced significantly in the same period, except for the hatom microformat.
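As a concrete illustration of the kind of markup being counted (my own minimal example, not from the study, using the RDFa 1.0 syntax of the time):

<!-- A page fragment with RDFa; a parser extracts one foaf:name triple. -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="http://example.org/#me" typeof="foaf:Person">
  <span property="foaf:name">Jane Doe</span>
</div>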

These results make me optimistic that the Semantic Web is already here in a large way. I don’t expect that 100% of webpages will ever adopt microformats or RDFa markup, simply because not all web pages contain structured data. As this seems interesting to watch, I will try to publish updates to the data and include the updated chart here or in future presentations.


Posted at 00:17

Michael Hausenblas: Elephant filet

End of January I participated in a panel discussion on Big Data, held during the CISCO live event in London. One of my fellow panelists, I believe it was Sean McKeown of CISCO, said something along the lines of:

… ideally the cluster is at 99% utilisation, concerning CPU, I/O, and network …

This stuck in my head and I gave it some thought. In the following I will elaborate a bit on this in the context of Hadoop used in a shared setup, for example in hosted offerings or, say, within an enterprise that runs different systems such as Storm, Lucene/Solr, and Hadoop on one cluster.

In essence, we witness two competing forces: from the perspective of a single user who expects performance vs. the view of the cluster owner or operator who wants to optimise throughput and maximise utilisation. If you’re not familiar with these terms you might want to read up on Cary Millsap’s Thinking Clearly About Performance (part 1 | part 2).

Now, in such a shared setup we may experience a spectrum of loads, from compute-intensive over I/O-intensive to communication-intensive, as illustrated in the following, not overly scientific figure:

(Figure: utilisation spectrum across CPU, I/O and network.)

Here are some observations and thoughts as potential starting points for deeper research or experiments.

Multitenancy. We see more and more deployments that require strong support for multitenancy; check out the CapacityScheduler, learn from best practices, or use a distribution that natively supports the specification of topologies. Additionally, you might still want to keep an eye on Serengeti – VMware’s Hadoop virtualisation project – which seems to have gone quiet in the past months, but I still have hope for it.

Software Defined Networks (SDN). See Wikipedia’s definition of it; it’s not too bad. CISCO, for example, is very active in this area, and only recently there was a special issue of the IEEE Communications Magazine (February 2013) covering SDN research. I can perfectly well see – and indeed this was also briefly discussed on our CISCO live panel back in January – how SDN can enable new ways to optimise throughput and performance. Imagine an SDN that is dynamically workload-aware, in the sense that it knows the difference between a node that runs a task tracker vs. a data node vs. a Solr shard – it should be possible to transparently improve the operational parameters so that everyone involved, both the users and the cluster owner, benefits.

As usual, I’m very interested in what you think about the topic and look forward to learning about resources in this space from you.

Posted at 00:10

Michael Hausenblas: MapR, Europe and me

You might have already heard that MapR, the leading provider of enterprise-grade Hadoop and friends, is launching its European operations.

Guess what? I’m joining MapR Europe as of January 2013 in the role of Chief Data Engineer EMEA and will support our technical and sales teams throughout Europe. Pretty exciting times ahead!

As an aside: as I recently pointed out, I very much believe that Apache Drill and Hadoop offer great synergies, and if you want to learn more about this, come and join us at the Hadoop Summit, where my Drill talk has been accepted for the Hadoop Futures session.

Posted at 00:10

Michael Hausenblas: Hosted MapReduce and Hadoop offerings

Hadoop in the cloud

Today’s question is: where are we regarding MapReduce/Hadoop in the cloud? That is, what are the current offerings of Hadoop-as-a-Service and other hosted MapReduce implementations?

A year ago, InfoQ ran a story Hadoop-as-a-Service from Amazon, Cloudera, Microsoft and IBM which will serve us as a baseline here. This article contains the following statement:

According to a 2011 TDWI survey, 34% of companies use big data analytics to help them make decisions. Big data and Hadoop seem to be playing an important role in the future.

One year later, we learn from a recent MarketsAndMarkets study, Hadoop & Big Data Analytics Market – Trends, Geographical Analysis & Worldwide Market Forecasts (2012 – 2017), that …

The Hadoop market in 2012 is worth $1.5 billion and is expected to grow to about $13.9 billion by 2017, at a [Compound Annual Growth Rate] of 54.9% from 2012 to 2017.

In the past year there have also been some quite vivid discussions around the topic ‘Hadoop in the cloud’.

So, here are some current offerings and announcements I’m aware of:

… and now it’s up to you, dear reader – I would appreciate it if you could point me to more offerings and/or announcements you know of concerning MapReduce and Hadoop in the cloud!

Posted at 00:10

Michael Hausenblas: Interactive analysis of large-scale datasets

The value of large-scale datasets – stemming from IoT sensors, end-user and business transactions, social networks, search engine logs, etc. – apparently lies in the patterns buried deep inside them. Being able to identify these patterns and analyze them is vital, be it for detecting fraud, determining a new customer segment or predicting a trend. As we move from billions to trillions of records (or: from the terabyte to the peta- and exabyte scale), the more ‘traditional’ methods, including MapReduce, seem to have reached the end of their capabilities. The question is: what now?

But a second issue has to be addressed as well: in contrast to what current large-scale data processing solutions provide in batch mode (arbitrarily, but in line with the state of the art, defined as any query that takes longer than 10 sec to execute), the need for interactive analysis increases. Complementarily, visual analytics may or may not be helpful, but comes with its own set of challenges.

Recently, a proposal has been made for a new Apache Incubator project called Drill. This project aims at building a:

… distributed system for interactive analysis of large-scale datasets […] It is a design goal to scale to 10,000 servers or more and to be able to process petabytes of data and trillions of records in seconds.

Drill’s design is supposed to be informed by Google’s Dremel and wants to efficiently process nested data (think: Protocol Buffers). You can learn more about requirements and design considerations from Tomer Shiran’s slide set.

In order to better understand where Drill fits in in the overall picture, have a look at the following (admittedly naïve) plot that tries to place it in relation to well-known and deployed data processing systems:

BTW, if you want to test-drive Dremel, you can do so already today: it’s offered as a service in Google’s cloud computing suite, called BigQuery.

Posted at 00:10

Michael Hausenblas: Schema.org + WebIntents = Awesomeness

Imagine you search for a camera, say a Canon EOS 60D, and in addition to the usual search results you’re also offered a choice of actions you can perform on it: for example, share the result with a friend, write a review for the item or, why not, directly buy it?

Enhancing SERP with actions

Sounds far-fetched? Not at all. In fact, all the necessary components are available and deployed. With Schema.org we have a way to describe the things we publish on our Web pages, such as books or cameras, and with WebIntents we have a technology at hand that allows us to interact with these things in a flexible way.
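To make this concrete, here is a rough sketch of the idea (my own illustration: the schema.org part is standard microdata, and the script uses the experimental WebIntents API of the time):

<!-- A product described with schema.org microdata -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Canon EOS 60D</span>
</div>

<script>
// Fire a WebIntents "share" action for the current page; the browser
// offers the user a matching service they have registered.
var intent = new Intent("http://webintents.org/share",
                        "text/uri-list",
                        window.location.href);
navigator.startActivity(intent, function (data) {
  console.log("shared:", data);
});
</script>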

Here are some starting points in case you want to dive into WebIntents a bit:

PS: I started to develop a proof of concept for mapping Schema.org terms to WebIntents and will report on the progress here. Stay tuned!

Posted at 00:10

Michael Hausenblas: Turning tabular data into entities

Two widely used data formats on the Web are CSV and JSON. In order to enable fine-grained access in a hypermedia-oriented fashion I’ve started to work on Tride, a mapping language that takes one or more CSV files as input and produces a set of (connected) JSON documents.

In the 2 min demo video I use two CSV files (people.csv and group.csv) as well as a mapping file (group-map.json) to produce a set of interconnected JSON documents.

So, the following mapping file:

{
 "input" : [
  { "name" : "people", "src" : "people.csv" },
  { "name" : "group", "src" : "group.csv" }
 ],
 "map" : {
  "people" : {
   "base" : "http://localhost:8000/people/",
   "output" : "../out/people/",
   "with" : { 
    "fname" : "people.first-name", 
    "lname" : "people.last-name",
    "member" : "link:people.group-id to:group.ID"
   }
  },
  "group" : {
   "base" : "http://localhost:8000/group/",
    "output" : "../out/group/",
    "with" : {
     "title" : "group.title",
     "homepage" : "group.homepage",
     "members" : "where:people.group-id=group.ID link:group.ID to:people.ID"
    }
   }
 }
}

… produces JSON documents representing groups. One concrete example output is shown below:
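The original example output is not preserved in this archive; based on the mapping above, a group document would plausibly look something like this (all values hypothetical):

{
 "title" : "Example Group",
 "homepage" : "http://example.org/group",
 "members" : [
  "http://localhost:8000/people/1",
  "http://localhost:8000/people/2"
 ]
}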

Posted at 00:10

John Goodwin: Quick Play with Cayley Graph DB and Ordnance Survey Linked Data

Earlier this month Google announced the release of the open source graph database/triplestore Cayley. This weekend I thought I would have a quick look at it, and try some simple queries using the Ordnance Survey Linked Data.

Cayley is written in Go, so first I had to download and install that. I then downloaded Cayley from here. As an initial experiment I decided to use the Boundary Line Linked Data, and you can grab the data as N-Triples here. I only wanted a subset of this data – I didn’t need all of the triples storing the complex boundary geometries for my initial test, so I discarded the files of the form *-geom.nt and the files of the form county.nt, dbu.nt etc. (these are the ones with the boundaries in). Finally I put the remainder of the data into one file so it was ready to load into Cayley.

It is very easy to load data into Cayley; see the getting started section on the Cayley pages here. I decided I wanted to try the web interface, so loading the data (in a file called all.nt) was a simple case of typing:

./cayley http --dbpath=./boundaryline/all.nt

Once you’ve done this point your web browser to http://localhost:64210/ and you should see something like:

[Screenshot: the Cayley web interface]

One of the things that will first strike people used to RDF/triplestores is that Cayley does not have a SPARQL interface, and instead uses a query language based on Gremlin. I am new to Gremlin, but it seems it has already been used to explore linked data; see this blog post from Dan Brickley from a few years ago.

The main purpose of this blog post is to give a few simple examples of queries you can perform on the Ordnance Survey data in Cayley. If you have Cayley running then you can find the query language documented here.

At the simplest level the query language seems to be an easy way to traverse the graph by starting at a node/vertex and following incoming or outgoing links. So to find all the regions that touch Southampton it is a simple case of starting at the Southampton node, following a touches outbound link and returning the results:

g.V("http://data.ordnancesurvey.co.uk/id/7000000000037256").Out("http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches").All()

Giving:

[Screenshot: query results listing the region IDs]

If you want to return the names and not the IDs:

g.V("http://data.ordnancesurvey.co.uk/id/7000000000037256").Out("http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches").Out("http://www.w3.org/2000/01/rdf-schema#label").All()

[Screenshot: query results listing the region names]

You can also filter; so to see just the counties bordering Southampton:

g.V("http://data.ordnancesurvey.co.uk/id/7000000000037256").Out("http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches").Has("http://www.w3.org/1999/02/22-rdf-syntax-ns#type","http://data.ordnancesurvey.co.uk/ontology/admingeo/County").Out("http://www.w3.org/2000/01/rdf-schema#label").All()

[Screenshot: the filtered query results]

The Ordnance Survey linked data also has the spatial predicates ‘contains’ and ‘within’ as well as ‘touches’. Analogous queries can be done with those. E.g. find me everything Southampton contains:

g.V("http://data.ordnancesurvey.co.uk/id/7000000000037256").Out("http://data.ordnancesurvey.co.uk/ontology/spatialrelations/contains").Out("http://www.w3.org/2000/01/rdf-schema#label").All()

So after this very quick initial experiment it seems that Cayley is very good at providing an easy way of doing quick/simple queries. One query I wanted to do was find everything in, say, Hampshire: the full transitive closure. This is very easy to do in SPARQL, but in Cayley (at first glance) you’d have to write some extra code (not exactly rocket science, but a bit of a faff compared to SPARQL; see the sketch below). I rarely touch Javascript these days so for me personally this will never replace a triplestore with a SPARQL endpoint, but for JS developers this tool will be a great way to get started with and explore linked data/RDF. I might well brush up on my Javascript and provide more complicated examples in a later blog post…
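For what it’s worth, here is a minimal sketch of that extra code done outside Cayley, in Python, directly over the raw N-Triples dump: a breadth-first walk of the contains predicate. The file name and starting URI are taken from the examples above; treat it as an illustration rather than the recommended route.

import re
from collections import defaultdict, deque

CONTAINS = "http://data.ordnancesurvey.co.uk/ontology/spatialrelations/contains"

# Build an adjacency map of "?s contains ?o" from the raw N-Triples file.
contains = defaultdict(list)
triple = re.compile(r'<([^>]+)> <([^>]+)> <([^>]+)> \.')
with open("all.nt") as f:
    for line in f:
        m = triple.match(line.strip())
        if m and m.group(2) == CONTAINS:
            contains[m.group(1)].append(m.group(3))

def closure(start):
    """Everything transitively contained in start (breadth-first)."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in contains[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# e.g. everything within Southampton (URI from the queries above)
print(closure("http://data.ordnancesurvey.co.uk/id/7000000000037256"))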


Posted at 00:10

John Goodwin: Visualising the Location Graph – example with Gephi and Ordnance Survey linked data

This is arguably a simpler follow-up to my previous blog post; here I want to look at visualising Ordnance Survey linked data in Gephi. Now Gephi isn’t really a GIS, but it can be used to visualise an adjacency graph where regions are represented as nodes, and links represent adjacency relationships.

The approach here will be very similar to the approach in my previous blog. The main difference is that you will need to use the Ordnance Survey SPARQL endpoint and not the DBpedia one. So this time in the Gephi semantic web importer enter the following endpoint URL:

http://data.ordnancesurvey.co.uk/datasets/os-linked-data/apis/sparql

The Ordnance Survey endpoint returns turtle by default, and Gephi does not seem to like this, so I wanted to force the output to be XML. I figured this could be done using a ‘REST parameter name’ (output) with value equal to xml. This did not seem to work, so instead I had to do a bit of a hack. In the ‘query tag…’ box you will need to change the value from ‘query’ to ‘output=xml&query’. You should see something like this in the Semantic Web Importer now:

[Screenshot: the Semantic Web Importer configuration]
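The effect of the hack is simply that every request the importer sends has the output parameter baked into the query string, i.e. something like:

http://data.ordnancesurvey.co.uk/datasets/os-linked-data/apis/sparql?output=xml&query=<url-encoded query>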

Now click on the query tab. If we want, for example, to view the adjacency graph for constituencies we can enter the following query:

prefix gephi:<http://gephi.org/>
construct {
?s gephi:label ?label .
?s gephi:lat ?lat .
?s gephi:long ?long .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .}
where
{
?s a <http://data.ordnancesurvey.co.uk/ontology/admingeo/WestminsterConstituency> .
?o a <http://data.ordnancesurvey.co.uk/ontology/admingeo/WestminsterConstituency> .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .
?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?lat .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?long .
}

and click ‘run’. To visualise the output you will need to follow the exact same steps mentioned here (remember to recast the lat and long variables to decimal).

If we want to view adjacency of London Boroughs then we can do this with a similar query:

prefix gephi:<http://gephi.org/>
construct {
?s gephi:label ?label .
?s gephi:lat ?lat .
?s gephi:long ?long .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .}
where
{
?s a <http://data.ordnancesurvey.co.uk/ontology/admingeo/LondonBorough> .
?o a <http://data.ordnancesurvey.co.uk/ontology/admingeo/LondonBorough> .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .
?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?lat .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?long .
}

When visualising you might want to change the scale parameter to 10000.0. You should see something like this:

[Screenshot: the London Borough adjacency graph]

So far so good. Now imagine we want to bring in some other data; recall my previous blog post here. We can use SPARQL federation to bring in data from other endpoints. Suppose we would like to make the size of the node represent the ‘IMD rank‘ of each London Borough…we can do this by bringing in data from the Open Data Communities site:

prefix gephi:<http://gephi.org/>
construct {
?s gephi:label ?label .
?s gephi:lat ?lat .
?s gephi:long ?long .
?s gephi:imd-rank ?imdrank .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .}
where
{
?s a <http://data.ordnancesurvey.co.uk/ontology/admingeo/LondonBorough> .
?o a <http://data.ordnancesurvey.co.uk/ontology/admingeo/LondonBorough> .
?s <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/touches> ?o .
?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?lat .
?s <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?long .
SERVICE <http://opendatacommunities.org/sparql> {
?x <http://purl.org/linked-data/sdmx/2009/dimension#refArea> ?s .
?x <http://opendatacommunities.org/def/IMD#IMD-score> ?imdrank . }
}

You will need to recast the imdrank as an integer for what follows (do this using the same approach used to recast the lat/long variables). You can now use Gephi to resize the nodes according to IMD rank. We do this using the ranking tab:

[Screenshot: the ranking tab in Gephi]

You should now see your London Boroughs re-sized according to their IMD rank:

[Screenshot: London Boroughs re-sized by IMD rank]

Turning the lights off and adding some labels, we get:

[Screenshot: the final labelled visualisation]

Posted at 00:10

John Goodwin: All roads lead to? Experiments with Gephi, Linked Data and Wikipedia

Gephi is “an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs”. Tony Hirst did a great blog post a while back showing how you could use Gephi together with DBpedia (a linked data version of Wikipedia) to map an influence network in the world of philosophy. Gephi offers a semantic web plugin which allows you to work with the web of linked data. I recommend you read Tony’s blog to get started with using that plugin with Gephi. I was interested to experiment with this plugin, and to look at what sort of geospatial visualisations could be possible.

If you want to follow all the steps in this post you will need to:

Initially I was interested to see if there were any interesting networks we might visualise between places. In order to see how Wikipedia relates one place to another, it was a simple case of going to the DBpedia SPARQL endpoint and trying the following query:

select distinct ?p
where
{
?s a <http://schema.org/Place> .
?o a <http://schema.org/Place> .
?s ?p ?o .
}

– where s and o are places, find me which properties ‘p’ relate them. I noticed two properties, ‘http://dbpedia.org/ontology/routeStart‘ and ‘http://dbpedia.org/ontology/routeEnd‘, so I thought I would try to visualise how places around the world were linked by transport connections. To find places connected by a transport link you want to find pairs ‘start’ and ‘end’ that are the route start and route end, respectively, of some transport link. You can do this with the following query:

select ?start ?end
where
{
?start a <http://schema.org/Place> .
?end a <http://schema.org/Place> .
?link <http://dbpedia.org/ontology/routeStart> ?start .
?link <http://dbpedia.org/ontology/routeEnd> ?end .
}

This gives a lot of data so I thought I would restrict the links to be only road links:

select ?start ?end
where
{?start a <http://schema.org/Place> .
?end a <http://schema.org/Place> .
?link <http://dbpedia.org/ontology/routeStart> ?start .
?link <http://dbpedia.org/ontology/routeEnd> ?end .
?link a <http://dbpedia.org/ontology/Road> . }

We are now ready to visualise this transport network in Gephi. Follow the steps in Tony’s blog to bring up the Semantic Web Importer. In the ‘driver’ tab make sure ‘Remote – SOAP endpoint’ is selected, and the EndPoint URL is http://dbpedia.org/sparql. In an analogous way to Tony’s blog we need to construct our graph so we can visualise it. To simply view the connections between places it would be enough to just add this query to the ‘Query’ tab:

construct {?start <http://foo.com/connectedTo> ?end}
where
{
?start a <http://schema.org/Place> .
?end a <http://schema.org/Place> .
?link <http://dbpedia.org/ontology/routeStart> ?start .
?link <http://dbpedia.org/ontology/routeEnd> ?end .
?link a <http://dbpedia.org/ontology/Road> .
}

However, as we want to visualise this in a geospatial context we need the lat and long of the start and end points so our construct query becomes a bit more complicated:

prefix gephi:<http://gephi.org/>
construct {
?start gephi:label ?labelstart .
?end gephi:label ?labelend .
?start gephi:lat ?minlat .
?start gephi:long ?minlong .
?end gephi:lat ?minlat2 .
?end gephi:long ?minlong2 .
?start <http://foo.com/connectedTo> ?end}
where
{
?start a <http://schema.org/Place> .
?end a <http://schema.org/Place> .
?link <http://dbpedia.org/ontology/routeStart> ?start .
?link <http://dbpedia.org/ontology/routeEnd> ?end .
?link a <http://dbpedia.org/ontology/Road> .
{select ?start (MIN(?lat) AS ?minlat) (MIN(?long) AS ?minlong) where {?start <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?lat . ?start <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?long .} group by ?start }
{select ?end (MIN(?lat2) AS ?minlat2) (MIN(?long2) AS ?minlong2) where {?end <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?lat2 . ?end <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?long2 .} group by ?end }
?start <http://www.w3.org/2000/01/rdf-schema#label> ?labelstart .
?end <http://www.w3.org/2000/01/rdf-schema#label> ?labelend .
FILTER (lang(?labelstart) = 'en')
FILTER (lang(?labelend) = 'en')
}

Note that the query for the lat and long is a bit more complicated than it might be. This is because DBpedia data is quite messy, and many entities will have more than one lat/long pair. I used a subquery in SPARQL to pull out the minimum lat/long from all the pairs retrieved. Additionally, I retrieved the English labels for each of the start/end points.

Now copy/paste this construct query into the ‘Query’ tab on the Semantic Web Importer:

[Screenshot: the construct query in the Semantic Web Importer]

Now hit the run button and watch the data load.

To visualise the data we need to do a bit more work. In Gephi click on the ‘Data Laboratory’ and you should now see your data table. Unfortunately all of the lats and longs have been imported as strings and we need to recast them as decimals. To do this click on the ‘More actions’ pull-down menu, look for ‘Recast column’ and click it. In the ‘Recast manipulator’ window go to ‘column’ and select ‘lat(Node Table)’ from the pull-down menu. Under ‘Convert to’ select ‘Double’ and click recast. Do the same for ‘long’.

[Screenshot: the Recast column dialog]

When you are done click ‘ok’ and return to the ‘overview’ tab in Gephi. To see this data geospatially go to the layout panel and select ‘Geo Layout’. Change the latitude and longitude to your new recast variable names, and unclick ‘center’ (my graph kept vanishing with it selected). Experiment with the scale value:

[Screenshot: the Geo Layout settings]

You should now see something like this:

[Screenshot: the road network laid out geospatially]

in your display panel.

Given that this is supposed to be a road network you will find some oddities. This, it seems, is down to ‘European routes’ like European route E15, which links Scotland down to Spain.

Posted at 00:10

Leigh Dodds: How do different communities create unique identifiers?

Identifiers are part of data infrastructure. They play an important role, helping to publish, structure and link together data. Identifiers are boundary objects that cross communities. That means they need to be well-documented in order to be most useful.

Understanding how identifiers are created, assigned and governed can help us think through how to strengthen our data infrastructure. With that in mind, let’s take a quick tour of how different communities and systems have created identifier systems to help to uniquely refer to different digital and physical objects.

The simplest way to generate identifiers is with a serial number: a steadily increasing number that is assigned to whatever you need to identify next. This is the approach used in most internal databases as well as in some commonly encountered public identifiers.

For example the Ordnance Survey TOID identifier is a serial number that looks like this: osgb1000006032892. UPRNs are similar.

Serial numbers work well when you have a single organisation and/or system generating the identifiers. They’re simple to implement, but can have their downsides, especially when they’re shared with others.

Some serial numbering systems include built-in error-checking to deal with copying errors, using a check digit. Examples include the CAS registry number for identifying chemicals, and the basic form of the ISSN for identifying academic journals.
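As a quick illustration, the ISSN check digit is a weighted sum modulo 11 over the first seven digits; a minimal sketch in Python:

def issn_check_digit(first7):
    """ISSN check digit: weight the 7 digits 8..2, sum, take mod 11;
    'X' stands in for a check value of 10."""
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# ISSN 0317-8471: digits 0317847 give check digit 1
assert issn_check_digit("0317847") == "1"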

[Image: the bar code form of an ISSN]

As we can see in the bar code form of the ISSN shown above, identifiers often have more structure to them. And they may not be assigned as a simple serial number.

The second way of providing unique identifiers is using a name or code. These are typically still assigned by a central authority, sometimes known as a registration agency, but they are constructed in different ways.

Identifiers for geographic locations typically rely on administrative regions or other areas to help structure identifiers. For example the statistics community in the EU created the NUTS codes to help identify country sub-divisions in statistical datasets. These are assigned based on a hierarchy, beginning with the country and then smaller geographic regions. Bath is UKK12, for example.


Postal codes are another geographically based set of codes. Both UK and US postal codes use a geographical hierarchy. Only here the regions are those meaningful to how the Royal Mail and USPS manage their delivery operations, rather than being administratively defined by the government.


Hierarchies that are based on geography and/or organisational structures are common patterns in identifiers. Existing hierarchies provide a handy way to partition up sets of things for identification purposes.

The SWIFT code used in banking has a mixture of organisational and geographic hierarchies.


Encoding information about geography and hierarchy within codes can be useful. It can make them easier to validate. It also means you can manipulate them, e.g. by truncation, to find the identifiers for broader regions.
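For example, truncating the NUTS code for Bath walks up the hierarchy one level at a time:

code = "UKK12"                # Bath, from the example above
for i in range(len(code), 1, -1):
    print(code[:i])           # UKK12, UKK1, UKK, UK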

But encoding lots of information in identifiers also has its downsides, the main one being dealing with changes to administrative areas that alter the hierarchy. Do you reassign all the identifiers?

Assigning identifiers from a single, central authority isn’t always ideal. It can add coordination overhead which can be particularly problematic if you need to assign lots of identifiers quickly. So some identifier systems look at reducing the burden on that central authority.

A solution to this is to delegate identifier assignment to other organisations. There are two ways this is done in practice.

The first is what we might call federated assignment. This is where the registration agency shares the work of assigning identifiers with other organisations. A typical approach is to delegate the work of registration and assignment to national organisations, although other approaches are possible.

The delegation of work might be handled entirely “behind the scenes” as an operational approach. But sometimes it ends up being a feature of the identifier system.

For example the Legal Entity Identifier (LEI) uses federated assignment, where “Local Operating Units” do the work of assigning identifiers. As you can see below, the identifiers for the LOUs become part of the identifiers they assign.

[Image: structure of an LEI, including the LOU prefix]

The International Standard Recording Code uses a similar approach with national agencies assigning identifiers.


Another approach to reducing dependence on, and coordination with, a single registration agency is to use what I’ll call “local assignment“. In this approach individual organisations are empowered to assign identifiers as they need them.

A simplistic approach to local assignment is “block allocation“: handing out blocks of pregenerated identifiers to organisations which can locally assign them. Blocks of IP addresses are handed out to Internet Service Providers. Similarly, blocks of UPRNs are handed out to local authorities.

Here the registration agency still generates the identifiers, but the assignment of identifier to “thing” is done locally. And, in the second case at least, a record of this assignment will still be shared with the agency.

A more common approach is to use “prefix allocation“. In this approach the registration agency assigns individual organisations a prefix within the identifier system. The organisation then generates new unique identifiers by combining their prefix with a locally generated suffix.

A suffix might be generated by adding a local serial number to the prefix, or by some other approach. Again, identifiers generated this way are commonly still registered centrally.
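A minimal sketch of prefix allocation in Python (the prefix format and names here are hypothetical):

import itertools

class Registrant:
    """An organisation assigned a prefix by the registration agency;
    it mints identifiers locally by appending a serial-number suffix."""
    def __init__(self, prefix):
        self.prefix = prefix               # centrally issued
        self._serial = itertools.count(1)  # locally generated suffixes

    def new_id(self):
        return "%s-%06d" % (self.prefix, next(self._serial))

acme = Registrant("978-1")   # hypothetical prefix
print(acme.new_id())         # 978-1-000001
print(acme.new_id())         # 978-1-000002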

Many identifiers use this approach. The EIDR identifiers used in the entertainment industry look like this:

[Image: an EIDR identifier]

A GTIN looks like this:

[Image: a GTIN]

And the BIC code for shipping containers looks like this:

[Image: a BIC container code]

One challenge with prefix allocation is ensuring that the rules for locally assigned suffixes work in every context where the identifier needs to appear. This typically means providing some rules about how suffixes are constructed.

The DOI system encountered problems because publishers were generating identifiers that didn’t work well when DOIs were expressed as URLs, due to the need for extra encoding. This made them tricky to work with.

For a complicated example that mixes use of prefixes, country codes and check digits, we can look at the VIN, which is a unique identifier for vehicles. This 17-digit code includes multiple segments, but there are four competing standards for what the segments mean. Sigh.


It’s possible to go further than just reducing dependency on registration agencies. They can be eliminated completely.

In distributed assignment of identifiers, anyone can create an identifier. Rather than requesting an identifier, or a prefix from a registration agency, these systems operate by agreeing rules for how unique identifiers can be constructed.

One approach to distributed assignment is to use an element of randomness to generate a unique identifier at the point in time it’s needed. The goal is to design an algorithm that uses a random number generator, and sometimes additional information like a timestamp or a MAC address, to construct an identifier where there is an extremely low chance that someone else could have created the same identifier at the same moment in time (known as a “collision”).

This is how UUIDs work. You can play with generating some using online tools.
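In Python, for instance, the standard library does this out of the box:

import uuid

print(uuid.uuid4())  # random (version 4) UUID: no coordination needed
print(uuid.uuid1())  # version 1 mixes a timestamp with the host's MAC address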

Identifiers like UUIDs are cheap to generate and require no coordination beyond an agreed algorithm. They work very well when you just need a reliable way to assign an identifier to something, with reasonable confidence that if our data is later combined we won’t encounter any clashes.

But what if we need to independently assign an identifier to the same thing, so that when we later combine our datasets our data will link up?

For this we need to use a hash-based identifier. A hash-based identifier takes some properties of the thing we want to identify and then uses those to construct an identifier. If we have a good enough algorithm then, even if we do this independently, we should end up constructing the same identifier.
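A toy sketch of the idea: canonicalise the properties (here, as JSON with sorted keys) and hash the result, so two parties describing the same thing derive the same identifier:

import hashlib, json

def fingerprint(properties):
    """Canonicalise the properties, then hash them; independent
    parties that agree on the algorithm get the same identifier."""
    canonical = json.dumps(properties, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# same properties, different key order: same identifier
assert fingerprint({"formula": "H2O", "charge": 0}) == \
       fingerprint({"charge": 0, "formula": "H2O"})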

This is sometimes referred to as creating a “digital fingerprint” of the object. It’s commonly used to identify copies of objects. For example, the approach is used to construct content identifiers in the IPFS system. And as part of YouTube’s Content ID system to manage copyright claims.

But hash-based identifiers don’t have to be used for managing content; they can be used as pure identifiers. The most complex example I’m familiar with is the InChI, which is a means of generating a unique identifier for chemicals by using information about their structure.


By using a consistent algorithm provided as open source software, chemists can reliably create identifiers for the same structures.

The SICI code used to identify academic papers was a hash-based system that used metadata about the publication to generate an identifier. However, in practice it was difficult to work with due to the variety of ways in which content was actually published and the variety of contexts in which identifiers needed to be generated.

Hash-based identifiers are very tricky to get right, as you need a robust algorithm that is widely adopted. Those needing to generate identifiers will also need to be able to reliably access all of the information required to create the identifier. Variations in the availability of metadata, object formats, etc. can all impact how well they work in practice.

Posted at 00:10

John Breslin: Book launch for "The Social Semantic Web"

We had the official book launch of “The Social Semantic Web” last month in the President’s Drawing Room at NUI Galway. The book was officially launched by Dr. James J. Browne, President of NUI Galway. It was authored by myself, Dr. Alexandre Passant and Prof. Stefan Decker from the Digital Enterprise Research Institute at NUI Galway (sponsored by SFI). Here is a short blurb:

Web 2.0, a platform where people are connecting through their shared objects of interest, is encountering boundaries in the areas of information integration, portability, search, and demanding tasks like querying. The Semantic Web is an ideal platform for interlinking and performing operations on the diverse data available from Web 2.0, and has produced a variety of approaches to overcome limitations with Web 2.0. In this book, Breslin et al. describe some of the applications of Semantic Web technologies to Web 2.0. The book is intended for professionals, researchers, graduates, practitioners and developers.

Some photographs from the launch event are below.


Posted at 00:05

Copyright of the postings is owned by the original blog authors. Contact us.