Planet RDF

It's triples all the way down

March 31

W3C Read Write Web Community Group: Read Write Web — Q1 Summary — 2015

Summary

2015 is shaping up to be the year that standards for reading and writing, and for the web in general, start to be put together into next-generation systems and applications.  This is quite a comprehensive review post, covering much of what lies ahead.

The Spatial Data on the Web working group was announced and the EU funded Aligned project also kicked off.

Congratulations to the Linked Data Platform working group, which achieved REC status this quarter after several years of hard work.  Having spent most of the last three months testing various implementations, I’m happy to say it has greatly exceeded my already high expectations.

Communications and Outreach

A number of read write web standards and apps were demoed at the W3C Social Web Working group F2F, hosted by MIT.  This seems to have gone quite well and resulted in the coining of a new term “SoLiD” — Social Linked Data!  Apps based on the Linked Data Platform have been considered as part of the work of this group.

 

Community Group

A relatively quiet quarter in the community group, though still around 60 posts on our mailing list.  There is much interest in the next round of work to be done with the LDP working group.  Some work has been done on login and signup web components for WebID, on websockets, and on a relaunch of WebIDRealm.


Applications

Lots of activity on the apps front.  Personally I’ve been working using GOLD, but also announced was the release of Virtuoso 7.2, for those that like a feature-rich enterprise solution.

Making use of the experimental pub-sub work with websockets, I’ve started work on a chat application.  A profile reader and editor allows you to create and change your profile.  I’ve continued to work on a decentralized virtual wallet, and props go out to timbl who, in his vanishingly small amounts of spare time, has been working on a scheduler app.
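For readers curious what the experimental pub-sub channel looks like in practice, here is a minimal listener sketch, assuming a server that speaks the simple "sub <uri>" / "pub <uri>" text protocol used in that experimental work; the endpoint and resource URLs are placeholders, not a real deployment.

# Minimal sketch of a pub-sub listener over websockets. Assumes the
# experimental "sub <uri>" / "pub <uri>" text protocol; all URLs below
# are hypothetical placeholders.
import websocket  # pip install websocket-client

CHAT_RESOURCE = "https://example.org/chat/channel.ttl"  # placeholder

ws = websocket.create_connection("wss://example.org/")  # placeholder endpoint
ws.send("sub " + CHAT_RESOURCE)  # subscribe to change notifications

while True:
    message = ws.recv()  # e.g. "pub https://example.org/chat/channel.ttl"
    if message.startswith("pub "):
        # A real chat client would now re-fetch the resource and re-render.
        print("Resource changed:", message[4:])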


Last but not Least…

For those of you that like the web, like documentation, like specs and like academic papers, all four have been wrapped into one neat package with the announcement of linked open research.  It’s a great way to document work and create templates for upstream delivery.  Hover over the menu in the top right and see many more options.  I’m looking forward to using this to try to bridge the gap between the worlds of documentation, the web, and research.

Posted at 19:50

March 30

Dublin Core Metadata Initiative: DCMI Webinar: "From 0 to 60 on SPARQL queries in 50 minutes" (Redux)

2015-03-30, This webinar with Ethan Gruber on 13 May provides an introduction to SPARQL, a query language for RDF. Users will gain hands-on experience crafting queries, starting simply, but evolving in complexity. These queries will focus on coinage data in the SPARQL endpoint hosted by http://nomisma.org: numismatic concepts defined in a SKOS-based thesaurus and physical specimens from three major museum collections (American Numismatic Society, British Museum, and Münzkabinett of the Staatliche Museen zu Berlin) linked to these concepts. Results generated from these queries in the form of CSV may be imported directly into Google Fusion Tables for immediate visualization in the form of charts and maps. Additional information and free registration are available at http://dublincore.org/resources/training/#2015gruber. Redux: This webinar was first presented as a training session in the LODLAM Training Day at SemTech2014.
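To give a flavour of the kind of query the webinar builds up to, here is a small example counting coins per mint. The endpoint path and the nmo:hasMint property are assumptions drawn from nomisma.org's published documentation, not taken from the webinar itself.

# Count coins per mint on the nomisma.org endpoint. Endpoint URL and
# the nmo:hasMint property are assumptions based on nomisma.org's docs.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install SPARQLWrapper

sparql = SPARQLWrapper("http://nomisma.org/query")  # assumed endpoint path
sparql.setQuery("""
    PREFIX nmo: <http://nomisma.org/ontology#>
    SELECT ?mint (COUNT(?coin) AS ?coins)
    WHERE { ?coin nmo:hasMint ?mint . }
    GROUP BY ?mint
    ORDER BY DESC(?coins)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["mint"]["value"], row["coins"]["value"])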

Posted at 23:59

Dublin Core Metadata Initiative: National Diet Library of Japan publishes translations of key DCMI specifications

2015-03-30, DCMI is pleased to announce that the National Diet Library, the sole national library in Japan, has translated the DCMI Metadata Terms and the Singapore Framework for Dublin Core Application Profiles into Japanese. The links to the new Japanese translations, as well as others, are available on the DCMI Documents Translation page at http://dublincore.org/resources/translations/index.shtml.

Posted at 23:59

W3C Data Activity: Linked Data Platform WG Open Meeting

A special open meeting of the W3C Linked Data Platform (LDP) Working Group to discuss potential future work for the group. The deliverable from the workshop will be a report that the LDP WG will take into consideration as it … Continue reading

Posted at 17:48

March 29

Libby Miller: A little stepper motor

I want to make a rotating 3D-printed head-on-a-spring for my

Posted at 19:08

Bob DuCharme: Spark and SPARQL; RDF Graphs and GraphX

Some interesting possibilities for working together.

Posted at 17:24

March 24

AKSW Group - University of Leipzig: Two AKSW Papers at ESWC 2015

We are very pleased to announce that two of our papers were accepted for presentation as full research papers at ESWC 2015.

Automating RDF Dataset Transformation and Enrichment (Mohamed Ahmed Sherif, Axel-Cyrille Ngonga Ngomo, and Jens Lehmann)

With the adoption of RDF across several domains come growing requirements pertaining to the completeness and quality of RDF datasets. Currently, this problem is most commonly addressed by manually devising means of enriching an input dataset. The few tools that aim at supporting this endeavour usually focus on supporting the manual definition of enrichment pipelines. In this paper, we present a supervised learning approach based on a refinement operator for enriching RDF datasets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against eight manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.

HAWK – Hybrid Question Answering using Linked Data (Ricardo Usbeck, Axel-Cyrille Ngonga Ngomo, Lorenz Bühmann, and Christina Unger)

The decentralised architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Hence, answering complex questions often requires combining information from structured and unstructured data sources. We present HAWK, a novel entity search approach for Hybrid Question Answering based on combining Linked Data and textual data. The approach uses predicate-argument representations of questions to derive equivalent combinations of SPARQL query fragments and text queries. These are executed so as to integrate the results of the text queries into SPARQL and thus generate a formal interpretation of the query. We present a thorough evaluation of the framework, including an analysis of the influence of entity annotation tools on the generation process of the hybrid queries and a study of the overall accuracy of the system. Our results show that HAWK achieves an F-measure of 0.68 in the training phase and 0.61 in the test phase of the Question Answering over Linked Data (QALD-4) hybrid query benchmark.

Come over to ESWC and enjoy the talks.

Best regards,

Sherif on behalf of AKSW

Posted at 12:38

March 23

AKSW Group - University of Leipzig: AKSW Colloquium, 03-23-2015, Git Triple Store and From CPU bringup to IBM Watson

From CPU bring up to IBM Watson by Kay Müller, visiting researcher, IBM Ireland


Working in a corporate environment like IBM offers many different opportunities to work on the bleeding edge of research and development. In this presentation Kay Müller, who is currently a Software Engineer in the IBM Watson Group, is going to give a brief overview of some of the projects he has been working on at IBM. These projects range from a CPU bring-up using VHDL to the design and development of a semantic search framework for the IBM Watson system.

Git Triple Store by Natanael Arndt


In a setup of distributed clients or applications with different actors writing to the same knowledge base (KB), synchronization of distributed copies of the KB, an edit history with provenance information, and management of different versions of the KB in parallel are all needed. The aim is to design and construct a triple store back end which records any change at the triple level and enables distributed curation of RDF graphs. This should be achieved by using a distributed revision control system to hold a serialization of the RDF graph. Natanael Arndt will present the paper “R&Wbase: Git for triples” by Miel Vander Sande et al., published at LDOW2013, as related work. Additionally, he will present his ideas towards a collaboration infrastructure using distributed version control systems for triples.
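As a rough illustration of the idea (not the actual design to be presented), one can keep a canonically sorted N-Triples serialization under git, so that every commit diff corresponds exactly to a set of added and removed triples; the paths and commit message below are made up.

# Rough sketch, not the presented design: keep the graph as a canonically
# sorted N-Triples file under git, so line-based diffs map one-to-one
# onto triple-level changes. Paths and message are placeholders.
import subprocess
from rdflib import Graph

REPO = "/tmp/kb-repo"      # hypothetical git working copy
DATA = REPO + "/graph.nt"

def commit_graph(graph: Graph, message: str) -> None:
    # rdflib >= 6 returns a str here; sorting gives a simple canonical
    # form (blank nodes would need extra care, e.g. skolemization).
    lines = sorted(graph.serialize(format="nt").splitlines())
    with open(DATA, "w") as f:
        f.write("\n".join(lines) + "\n")
    subprocess.run(["git", "-C", REPO, "add", "graph.nt"], check=True)
    subprocess.run(["git", "-C", REPO, "commit", "-m", message], check=True)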

 

About the AKSW Colloquium

This event is part of a series of events about Semantic Web technology. Please see http://wiki.aksw.org/Colloquium for further information about previous and future events. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.

 

Posted at 09:58

March 21

Libby Miller: Tiny micro servo 9g and Duemilanove w/ ATmega168

It’s eons since I messed with an Arduino, and I’ve forgotten it all. All I have handy is a Duemilanove with ATmega168 and the newish version of the Arduino IDE doesn’t have Duemilanove ATmega168 as an option. Following

Posted at 18:43

March 17

Norm Walsh: Gradle, etc.

A few thoughts on improving build processes.

Posted at 02:01

March 16

Dublin Core Metadata Initiative: DC-2015 submission deadline extended to 11 April 2015

2015-03-16, The Program Committee for DC-2015, to be held 1-5 September 2015 in São Paulo, Brazil, has decided to extend the deadline for submission for both the Technical and Professional Programs to 11 April 2015. The extended call can be found at http://purl.org/dcevents/dc-2015/cfp.

Posted at 23:59

Dublin Core Metadata Initiative: DCMI Webinar: "Approaches to Making Dynamic Data Citable: Recommendations of the RDA Working Group"

2015-03-16, Being able to reliably and efficiently identify entire or subsets of data in large and dynamically growing or changing datasets constitutes a significant challenge for a range of research domains. In order to repeat an earlier study, or to apply data from an earlier study to a new model, we need to be able to precisely identify the very subset of data used. While verbal descriptions of how the subset was created (e.g. by providing selected attribute ranges and time intervals) are hardly precise enough and do not support automated handling, keeping redundant copies of the data in question does not scale up to the big data settings encountered in many disciplines today. Furthermore, we need to be able to handle situations where new data gets added or existing data gets corrected or otherwise modified over time. Conventional approaches, such as assigning persistent identifiers to entire data sets or individual subsets or data items, are thus not sufficient. In this webinar, Andreas Rauber will review the challenges identified above and discuss solutions that are currently elaborated within the context of the working group of the Research Data Alliance (RDA) on Data Citation: Making Dynamic Data Citeable. The approach is based on versioned and time-stamped data sources, with persistent identifiers being assigned to the time-stamped queries/expressions that are used for creating the subset of data. We will further review results from the first pilots evaluating the approach. Additional information and registration available at http://dublincore.org/resources/training/#2015rauber.
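A minimal sketch of the approach described above, assuming a versioned data source whose execute(query, as_of) can replay a query as of a given timestamp: the persistent identifier is assigned to the stored query record, not to a redundant copy of the data.

# Sketch of the RDA idea described above: persist the query, its
# timestamp and a hash of its result set, and assign the PID to this
# record. The execute() callable (a versioned, time-stamped data
# source) is an assumption, not part of the working group's spec.
import hashlib
import json
from datetime import datetime, timezone

def cite_subset(query: str, execute) -> dict:
    as_of = datetime.now(timezone.utc).isoformat()
    result = execute(query, as_of)  # replay against the versioned store
    digest = hashlib.sha256(
        json.dumps(result, sort_keys=True).encode()).hexdigest()
    # The hash lets a later re-execution verify it produced the same subset.
    return {"query": query, "timestamp": as_of, "result_hash": digest}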

Posted at 23:59

March 04

Frederick Giasson: Open Semantic Framework 3.3 Released

Structured Dynamics is happy to announce the immediate availability of the Open Semantic Framework version 3.3. This new release of OSF lets system administrators choose between two different communication channels to send SPARQL queries to the triple store:
  1. HTTP
  2. ODBC

In OSF 3.1, the only communication channel available was an ODBC channel using the iODBC drivers. In OSF 3.2, the only communication channel available was an HTTP channel. What we did with OSF 3.3 is let the system administrator choose between the two.
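From a client's point of view the two channels look like this, sketched here for a local Virtuoso with its default SPARQL endpoint and an assumed unixODBC DSN; neither snippet is OSF's actual internal code.

# The same query over both channels, sketched for a local Virtuoso.
# The DSN and credentials are placeholders; this is not OSF's own code.
import urllib.parse
import urllib.request

QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 5"

# 1. HTTP channel: any SPARQL-Protocol endpoint will do.
url = "http://localhost:8890/sparql?" + urllib.parse.urlencode(
    {"query": QUERY, "format": "application/sparql-results+json"})
print(urllib.request.urlopen(url).read()[:200])

# 2. ODBC channel: Virtuoso executes SPARQL passed through SQL when the
#    statement is prefixed with the SPARQL keyword.
import pyodbc  # pip install pyodbc; needs a configured unixODBC DSN
conn = pyodbc.connect("DSN=VirtuosoLocal;UID=dba;PWD=dba")  # placeholder
for row in conn.cursor().execute("SPARQL " + QUERY):
    print(row)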

Quick Introduction to the Open Semantic Framework

What is the Open Semantic Framework?

The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components. OSF is designed as an integrated content platform accessible via the Web, which provides needed knowledge management capabilities to enterprises. OSF is made available under the Apache 2 license.

OSF can integrate and manage all types of content (unstructured documents, semi-structured files, spreadsheets, and structured databases) using a variety of best-of-breed data indexing and management engines. All external content is converted to the canonical RDF data model, enabling common tools and methods for tagging and managing all content. Ontologies provide the schema and common vocabularies for integrating across diverse datasets. These capabilities can be layered over existing information assets for unprecedented levels of integration and connectivity. All information within OSF may be powerfully searched and faceted, with results datasets available for export in a variety of formats and as linked data.

Why Multiple Channels in OSF?

Historically, OSF only used the ODBC channel to communicate with Virtuoso, and it was using the iODBC drivers. As explained in a previous blog post, the fact that we were using the iODBC drivers in Ubuntu was adding a lot of complexity into the system since we had to recompile most of the PHP packages to use that other ODBC driver.

With OSF 3.2, we refactored the code such that we could query any SPARQL HTTP endpoint. The goal of this current improvement is to be able to use any triple store that has a compatible SPARQL HTTP endpoint with OSF, and not just Virtuoso.

With OSF 3.3, we chose to make both options available. We also made sure that the latest version of Virtuoso now works properly with the unixODBC drivers, which ship by default with Ubuntu.

This means that people can now use the ODBC channel, but with the unixODBC drivers instead. The end result of this enhancement is that it makes the maintenance of an Ubuntu/OSF instance much easier, since no packages are on hold and the PHP5 packages can be updated at any time without needing to be recompiled against the iODBC drivers.

Deploying a New OSF 3.3 Server

Using the OSF Installer

OSF 3.3 can easily be deployed on a Ubuntu 14.04 LTS server using the osf-installer application. The deployment is done by executing the following commands in your terminal:

# Create a working directory for the installer
mkdir -p /usr/share/osf-installer/
cd /usr/share/osf-installer/

# Fetch the 3.3 installer script and make it executable
wget https://raw.github.com/structureddynamics/Open-Semantic-Framework-Installer/3.3/install.sh
chmod 755 install.sh

# Bootstrap the installer, then run the OSF installation (verbose)
./install.sh
./osf-installer --install-osf -v

Using an Amazon AMI

If you are an Amazon AWS user, you also have access to a free AMI that you can use to create your own OSF instance. The full documentation for using the OSF AMI is available here.

Upgrading Existing Installations

It is not possible to automatically upgrade previous versions of OSF to OSF 3.3; older instances can only be upgraded manually. If you have this requirement, just let me know and I will write about the steps required to upgrade these instances to OSF version 3.3.

Conclusion

This new version of the Open Semantic Framework should be even simpler to install, deploy and maintain. Several additional small updates in this release also make other aspects of the installation simpler and faster.

Posted at 22:18

March 03

W3C Data Activity: A writable Web based on LDP

Last week marked the culmination of almost three years of hard work coming out of the Linked Data Platform WG, resulting in the publication of the Linked Data Platform 1.0 as a W3C Recommendation. For those of you not yet familiar with … Continue reading

Posted at 16:34

AKSW Group - University of Leipzig: ALIGNED project kick-off

ALIGNED, AKSW’s new H2020-funded project, kicked off in Dublin. The project brings together computer science researchers, companies building data-intensive systems and information technology, and academic curators of large datasets in an effort to build IT systems for aligned, co-evolving software and data lifecycles. These lifecycles will support automated testing, runtime data quality analytics, model-generated extraction and human curation interfaces.

AKSW will lead the data quality engineering part of ALIGNED, controlling the data lifecycle and providing integrity and verification techniques, using state-of-the-art tools such as RDFUnit and upcoming standards like W3C Data Shapes. In this project, we will support our partners at Trinity College Dublin and Oxford Software Engineering as technical partners, Oxford Anthropology and Adam Mickiewicz University Poznan as data curators and publishers, as well as the Semantic Web Company and Wolters Kluwer Germany, providing enterprise solutions and use cases.

Find out more at aligned-project.eu and by following @AlignedProject on Twitter.

Martin Brümmer on behalf of the NLP2RDF group

Aligned project kick-off team picture

Posted at 11:43

February 27

AKSW Group - University of Leipzig: AKSW Colloquium: Tommaso Soru and Martin Brümmer on Monday, March 2 at 3.00 p.m.

On Monday, 2nd of March 2015, Tommaso Soru will present ROCKER, a refinement operator approach for key discovery. Martin Brümmer will then present NIF annotation and provenance – A comparison of approaches.

Tommaso Soru – ROCKER – Abstract

As within the typical entity-relationship model, unique and composite keys are of central importance also when the concept is applied to the Linked Data paradigm. They can provide help in manifold areas, such as entity search, question answering, data integration and link discovery. However, the current state of the art lacks approaches that are able to scale while relying on a correct definition of a key. We thus present a refinement-operator-based approach dubbed ROCKER, which has been shown to scale to big datasets with respect to run time and memory consumption. ROCKER will be officially introduced at the 24th International Conference on World Wide Web.

Tommaso Soru, Edgard Marx, and Axel-Cyrille Ngonga Ngomo, “ROCKER – A Refinement Operator for Key Discovery”. [PDF]
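For readers unfamiliar with the notion being refined here: a set of properties is a key for a class if no two instances share exactly the same values for all of them. The snippet below checks that definition naively with rdflib; it is not ROCKER's algorithm, which searches the space of such property sets efficiently via a refinement operator.

# Naive check of the key definition underlying ROCKER (not ROCKER's
# refinement-operator search itself): a property set is a key for a
# class if no two instances share the same values for all properties.
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDF

def is_key(graph: Graph, cls, properties) -> bool:
    groups = defaultdict(list)
    for s in graph.subjects(RDF.type, cls):
        signature = tuple(frozenset(graph.objects(s, p)) for p in properties)
        groups[signature].append(s)
    # A key admits at most one instance per value signature.
    return all(len(members) == 1 for members in groups.values())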

Martin Brümmer – Abstract – NIF annotation and provenance – A comparison of approaches

The growing adoption of the NLP Interchange Format (NIF) has revealed its shortcomings on a number of levels. One of these is tracking the metadata of annotations represented in NIF: which NLP tool added which annotation, with what confidence, at which point in time, and so on.

A number of solutions to this task of annotating annotations expressed as RDF statements have been proposed over the years. The talk will weigh these solutions, namely annotation resources, reification, Open Annotation, quads and singleton properties, with regard to their granularity, ease of implementation and query complexity.

The goal of the talk is presenting and comparing viable alternatives of solving the problem at hand and collecting feedback on how to proceed.
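As a concrete instance of one of the candidates listed above, plain RDF reification attaches provenance to a statement resource that describes the annotation triple; the ex: vocabulary below is a made-up placeholder, not NIF's.

# One of the candidate approaches above, plain RDF reification: the
# annotation triple is described by a statement resource, which can
# then carry provenance. The ex: vocabulary is a made-up placeholder.
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
g = Graph()

stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.annotation1))
g.add((stmt, RDF.predicate, EX.taClassRef))
g.add((stmt, RDF.object, EX.Person))
# Provenance hangs off the reified statement, not the triple itself:
g.add((stmt, EX.annotatedBy, EX.SomeNLPTool))
g.add((stmt, EX.confidence, Literal(0.87, datatype=XSD.double)))
print(g.serialize(format="turtle"))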

Posted at 12:57

February 26

W3C Data Activity: Open Data Standards

Data on the Web Best Practices WG co-chair Steve Adler writes: Yesterday, Open Data reached a new milestone with the publication of the W3C’s first public working draft of its Data on the Web Best Practices. Open Data is spreading … Continue reading

Posted at 16:34

February 23

Dublin Core Metadata Initiative: DC-2015 Professional Program Session in Portuguese & English

2015-02-23, DCMI and the DC-2015 host, São Paulo State University, are pleased to announce that session proposals, as well as the presentation language of sessions in the Professional Program at DC-2015, may be in either Portuguese or English. Depending on the language of the session presenters, simultaneous English/Portuguese or Portuguese/English translation will be provided. Tracks in the Professional Program include special topic sessions and panels, half- and full-day tutorials, workshops, and best practice posters and demonstrations. The call for participation in both the Professional and Technical Programs remains open until 28 March 2015. The call for participation can be found at http://purl.org/dcevents/dc-2015/cfp.

Posted at 23:59

Dublin Core Metadata Initiative: DCMI Webinar: "VocBench 2.0: A Web Application for Collaborative Development of Multilingual Thesauri"

2015-02-23, On 4 March 2015, Caterina Caracciolo of the United Nations Food and Agriculture Organization (FAO) and Armando Stellato of the University of Rome Tor Vergata, will present a webinar on VocBench, a web-based platform for the collaborative maintenance of multilingual thesauri. VocBench is an open source project developed through a collaboration between FAO and the University of Rome Tor Vergata. VocBench is currently used for the maintenance of AGROVOC, EUROVOC, GEMET, the thesaurus of the Italian Senate, the Unified Astronomy Thesaurus of Harvard University, as well as other thesauri. VocBench has a strong focus on collaboration, supported by workflow management for content validation and publication. Dedicated user roles provide a clean separation of competencies ranging from management aspects to vertical competencies in content editing such as conceptualization versus terminology editing. Extensive support for scheme management allows editors to fully exploit the possibilities of the SKOS model including fulfillment of its integrity constraints. VocBench has been open source software since the publication of version 2, opening it to a large community of users and institutions supporting its development with their feedback and ideas. During the webinar Dr. Caracciolo and Dr. Stellato will demonstrate the main features of VocBench from the point of view of users and system administrators, and explain in what ways you may join the project. Additional information about the webinar and access to registration is available at http://dublincore.org/resources/training/#2015stellato.

Posted at 23:59

February 20

Semantic Web Company (Austria): SEMANTiCS2015: Calls for Research & Innovation Papers, Industry Presentations and Poster/Demos are now open!

The SEMANTiCS2015 conference comes back this year, in its 11th edition, to Vienna, Austria, where it all started in 2005!

The conference takes place from 15-17 September 2015 at the University of Economics (the main conference on 16-17 September, with several back-to-back workshops & events on the 15th) – see all information at http://semantics.cc/.


We are happy to announce the SEMANTiCS Open Calls as follows. All information on the calls can also be found on the SEMANTiCS2015 website: http://semantics.cc/open-calls

Call for Research & Innovation Papers

The Research & Innovation track at SEMANTiCS welcomes the submission of papers on novel scientific research and/or innovations relevant to the topics of the conference. Submissions must be original and must not have been submitted for publication elsewhere. Papers should follow the ACM ICPS guidelines for formatting (http://www.acm.org/sigs/publications/proceedings-templates) and must not exceed 8 pages in length for full papers and 4 pages for short papers, including references and optional appendices.

Abstract Submission Deadline: May 22, 2015
Paper Submission Deadline: May 29, 2015
Notification of Acceptance: July 10, 2015
Camera-Ready Paper: July 24, 2015
Details: http://bit.ly/semantics15-research

Call for Industry & Use Case Presentations

To address the needs and interests of industry, SEMANTiCS presents enterprise solutions that deal with semantic processing of data and/or information in areas like Linked Data, Data Publishing, Semantic Search, Recommendation Services, Sentiment Detection, Search Engine Add-Ons, Thesaurus and/or Ontology Management, Text Mining, Data Mining and any related fields. All submissions should have a strong focus on real-world applications beyond prototypical status and demonstrate the power of semantic systems!

Submission Deadline: July 1, 2015
Notification of Acceptance: July 20, 2015
Presentation Ready: August 15, 2015
Details: http://bit.ly/semantics15-industry

Call for Posters and Demos

The Posters & Demonstrations Track invites innovative work in progress, late-breaking research and innovation results, and smaller contributions (including pieces of code) in all fields related to the broadly understood Semantic Web. The informal setting of the Posters & Demonstrations Track encourages participants to present innovations to business users and find new partners or clients.  In addition to the business stream, SEMANTiCS 2015 welcomes developer-oriented posters and demos to the new technical stream.

Submission Deadline: June 17, 2015
Notification of Acceptance: July 10, 2015
Camera-Ready Paper: August 01, 2015
Details: http://bit.ly/semantics15-poster

We are looking forward to receiving your submissions for SEMANTiCS2015 and to seeing you in Vienna in autumn!

Posted at 10:00

February 19

AKSW Group - University of Leipzig: AKSW Colloquium: Edgard Marx and Tommaso Soru on Monday, February 23, 3.00 p.m.

On Monday, 23rd of February 2015, Edgard Marx will introduce Smart, a search engine designed over the Semantic Search paradigm; subsequently, Tommaso Soru will present ROCKER, a refinement operator approach for key discovery.

EDIT: Tommaso Soru’s presentation was moved to March 2nd.

Abstract – Smart

Since the conception of the Web, search engines have played a key role in making content available. However, retrieving the desired information is still significantly challenging. Semantic Search systems are a natural evolution of traditional search engines. They promise more accurate interpretation by understanding the contextual meaning of the user’s query. In this talk, we will introduce our audience to Smart, a search engine designed over the Semantic Search paradigm. Smart incorporates two of our currently designed approaches to the problem of Information Retrieval, as well as a novel interface paradigm. Moreover, we will present some former, as well as more recent, state-of-the-art approaches used by industry – for instance by Yahoo!, Google and Facebook.

Abstract – ROCKER

As within the typical entity-relationship model, unique and composite keys are of central importance also when the concept is applied to the Linked Data paradigm. They can provide help in manifold areas, such as entity search, question answering, data integration and link discovery. However, the current state of the art lacks approaches that are able to scale while relying on a correct definition of a key. We thus present a refinement-operator-based approach dubbed ROCKER, which has been shown to scale to big datasets with respect to run time and memory consumption. ROCKER will be officially introduced at the 24th International Conference on World Wide Web.

Tommaso Soru, Edgard Marx, and Axel-Cyrille Ngonga Ngomo, “ROCKER – A Refinement Operator for Key Discovery”. [PDF]

Posted at 21:53

Copyright of the postings is owned by the original blog authors. Contact us.