Planet RDF

It's triples all the way down

November 17

Michael Hausenblas: Cloud Cipher Capabilities

… or, the lack of it.

A recent discussion with a customer prompted me to have a closer look at support for encryption in the context of XaaS cloud service offerings as well as in Hadoop. In general, this can be broken down into over-the-wire encryption (cf. SSL/TLS) and back-end encryption. While the former is widely used, the latter is rather seldom found.

There are different reasons why one might want to encrypt her data, ranging from preserving a competitive advantage to end-user privacy issues. No matter why someone wants to encrypt the data, the question is whether systems support this (transparently) or whether developers are forced to code it in the application logic.

On the IaaS level, especially in this category of file storage for app development, one would expect wide support for built-in encryption.

On the PaaS level things look pretty much the same: for example, AWS Elastic Beanstalk provides no support for encryption of the data (unless you consider S3), and for Google’s App Engine, good practices for data encryption only seem to be emerging.

Offerings on the SaaS level provide an equally poor picture:

  • Dropbox offers encryption via S3.
  • Google Drive and Microsoft Skydrive seem not to offer any encryption options for storage.
  • Apple’s iCloud is a notable exception: not only does it provide support but also nicely explains it.
  • For many if not most of the above SaaS-level offerings there are plug-ins that enable encryption, such as those provided by Syncdocs or CloudFlogger.

In Hadoop-land things also look rather sobering; there are a few activities around making HDFS or the like do encryption, such as eCryptfs or Gazzang’s offering. Last but not least: for Hadoop in the cloud, encryption is available via AWS’s EMR by using S3.
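
To make the S3 route a bit more concrete: requesting back-end encryption is a one-line affair from client code. A minimal sketch using the current boto3 SDK (which postdates this post), with placeholder bucket and key names:

import boto3

s3 = boto3.client('s3')

# Ask S3 to encrypt the object at rest with AES-256 before writing it to disk.
s3.put_object(
    Bucket='example-bucket',        # placeholder bucket name
    Key='reports/q3.csv',           # placeholder object key
    Body=b'some,example,data\n',
    ServerSideEncryption='AES256',
)

# Confirm the stored object is flagged as encrypted.
head = s3.head_object(Bucket='example-bucket', Key='reports/q3.csv')
print(head.get('ServerSideEncryption'))   # expected: 'AES256'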

Posted at 10:06

November 14

Ebiquity research group UMBC: Why does Google think Raymond Chandler starred in Double Indemnity?

In my knowledge graph class yesterday we talked about the SPARQL query language and I illustrated it with DBpedia queries, including an example getting data about the movie Double Indemnity. I had brought a Google Assistant device and used it to compare its answers to those from DBpedia. When I asked the Google Assistant “Who starred in the film Double Indemnity?”, the first person it mentioned was Raymond Chandler. I knew this was wrong, since he was one of its screenwriters, not an actor, and shared an Academy Award for the screenplay. DBpedia’s data was correct and did not list Chandler as one of the actors.
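
For reference, the DBpedia side of this can be checked with a query along the following lines, here run from Python with the SPARQLWrapper library (a minimal sketch, not necessarily the exact query from class):

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?actorName WHERE {
      dbr:Double_Indemnity dbo:starring ?actor .
      ?actor rdfs:label ?actorName .
      FILTER (lang(?actorName) = "en")
    }
""")
sparql.setReturnFormat(JSON)

# Print the English labels of everyone DBpedia lists as starring in the film;
# Raymond Chandler does not appear among them.
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["actorName"]["value"])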

I did not feel too bad about this — we shouldn’t expect perfect accuracy in these huge, general purpose knowledge graphs and at least Chandler played an important role in making the film.

After class I looked at the Wikidata page for Double Indemnity (Q478209) and saw that it did list Chandler as an actor. I take this as evidence that Google’s Knowledge Graph got this incorrect fact from Wikidata, or perhaps from a precursor, Freebase.
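
This is easy to check programmatically. Here is a minimal sketch against the Wikidata API, using the item and property IDs mentioned in this post; given what the Wikidata page showed, it would have printed True at the time (the statement may since have been fixed):

import requests

# Fetch the "cast member" (P161) claims for Double Indemnity (Q478209).
resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbgetclaims",
        "entity": "Q478209",
        "property": "P161",
        "format": "json",
    },
)
claims = resp.json().get("claims", {}).get("P161", [])

# Collect the Q-ids of everyone listed as a cast member.
cast_ids = [
    c["mainsnak"]["datavalue"]["value"]["id"]
    for c in claims
    if "datavalue" in c["mainsnak"]
]

# Q180377 is Raymond Chandler.
print("Q180377" in cast_ids)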

The good news 🙂 is that Wikidata had flagged the fact that Chandler (Q180377) was a cast member in Double Indemnity with a “potential issue”. Clicking on this revealed that the issue was that Chandler was not known to have any of the occupations that the “cast member” property (P161) expects, which include twelve types such as actor, opera singer, comedian, and ballet dancer. Wikidata lists Chandler’s occupations as screenwriter, novelist, writer, and poet.

More good news 😀 is that the Wikidata fact had provenance information in the form of a reference stating that it came from CSFD (Q3561957), a “Czech and Slovak web project providing a movie database”. Following the link Wikidata provided eventually led me to the site, where I could search for and find its Double Indemnity entry. Indeed, it lists Raymond Chandler as one of the movie’s Hrají. All that was left to do was to ask for a translation, which confirmed that Hrají means “starring”.

Case closed? Well, not quite. What remains is fixing the problem.

The final good news 🙂 is that it’s easy to edit or delete an incorrect fact in Wikidata. I plan to delete the incorrect fact in class next Monday. Over the weekend I’ll look into possible ways to add an annotation so that the incorrect CSFD source for Chandler being a cast member is ignored.

Some possible bad news 🙁 is that public knowledge graphs like Wikidata might be exploited by unscrupulous groups or individuals in the future to promote false or biased information. Wikipedia is reasonably resilient to this, but the problem may be harder to manage for public knowledge graphs, which get much of their data from other sources that could be manipulated.

The post Why does Google think Raymond Chandler starred in Double Indemnity? appeared first on UMBC ebiquity.

Posted at 20:05

Ebiquity research group UMBC: paper: Early Detection of Cybersecurity Threats Using Collaborative Cognition

The CCS Dashboard’s sections provide information on sources and targets of network events, file operations monitored and sub-events that are part of the APT kill chain. An alert is generated when a likely complete APT is detected after reasoning over events.


Early Detection of Cybersecurity Threats Using Collaborative Cognition

Sandeep Narayanan, Ashwinkumar Ganesan, Karuna Joshi, Tim Oates, Anupam Joshi and Tim Finin, Early Detection of Cybersecurity Threats Using Collaborative Cognition, 4th IEEE International Conference on Collaboration and Internet Computing, Philadelphia, October 2018.

 

The early detection of cybersecurity events such as attacks is challenging given the constantly evolving threat landscape. Even with advanced monitoring, sophisticated attackers can spend more than 100 days in a system before being detected. This paper describes a novel, collaborative framework that assists a security analyst by exploiting the power of semantically rich knowledge representation and reasoning integrated with different machine learning techniques. Our Cognitive Cybersecurity System ingests information from various textual sources and stores them in a common knowledge graph using terms from an extended version of the Unified Cybersecurity Ontology. The system then reasons over the knowledge graph that combines a variety of collaborative agents representing host and network-based sensors to derive improved actionable intelligence for security administrators, decreasing their cognitive load and increasing their confidence in the result. We describe a proof of concept framework for our approach and demonstrate its capabilities by testing it against a custom-built ransomware similar to WannaCry.

The post paper: Early Detection of Cybersecurity Threats Using Collaborative Cognition appeared first on UMBC ebiquity.

Posted at 20:05

November 03

Libby Miller: Real_libby – a GPT-2 based slackbot

In the latest of my continuing attempts to automate myself, I retrained a GPT-2 model with my iMessages, and made a slackbot so people could talk to it. Since Barney (an expert on these matters) felt it was unethical that it vanished whenever I shut my laptop, it’s now living happily(?) if a little more slowly in a Raspberry Pi 4.

It was surprisingly easy to do, with a few hints from Barney. I’ve sketched out what I did below. If you make one, remember that it can leak out private information – names in particular – and can also be pretty sweary, though mine’s not said anything outright offensive (yet).

fuck, mitzhelaists!

This work is inspired by the many brilliant Twitter bot-makers and machine-learning people out there, such as Barney (who has many bots, including inspire_ration and notYourBot, and knows much more about machine learning and bots than I do), Shardcore (who made Algohiggs, which is probably where I got the idea for using GPT-2), and Janelle Shane (whose ML-generated names for e.g. cats are always an inspiration).

First, get your data

The first step was to get at my iMessages. A lot of iPhone data is backed up as sqlite, so if you decrypt your backups and have a dig round, you can use something like baskup. I had to make a few changes but found my data in

/Users/[me]/Library/Application\ Support/MobileSync/Backup/[long number]/3d/3d0d7e5fb2ce288813306e4d4636395e047a3d28

This number – 3d0d7e5fb2ce288813306e4d4636395e047a3d28 – seems always to indicate the iMessage database – though it moves round depending on what version of iOS you have. I made a script to write the output from baskup into a flat text file for GPT-2 to slurp up. I had about 5K lines.
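
If you’d rather not use baskup at all, that backup file is itself an SQLite database, so a rough sketch like this gets you a flat text file directly (the table and column names can shift between iOS versions, so treat it as a starting point):

import sqlite3

# The iMessage database from the decrypted backup (see the path above).
DB = "3d0d7e5fb2ce288813306e4d4636395e047a3d28"

conn = sqlite3.connect(DB)
rows = conn.execute(
    "SELECT text FROM message WHERE text IS NOT NULL ORDER BY date"
)

# One message per line, ready for GPT-2 to slurp up.
with open("imessages.txt", "w") as out:
    for (text,) in rows:
        out.write(text.replace("\n", " ").strip() + "\n")

conn.close()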

Retrain GPT-2

I used this code.

python3 ./download_model.py 117M

PYTHONPATH=src ./train.py --dataset /Users/[me]/gpt-2/scripts/data/

I left it overnight on my laptop and by morning loss and avg were oscillating so I figured it was done – 3600 epochs. The output from training was fun, e.g.:

([2899 | 33552.87] loss=0.10 avg=0.07)

my pigeons get dandruff
treehouse actually get little pellets
little pellets of the same stuff as well, which I can stuff pigeons with
*little
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets?
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets
little pellets

Test it

I copied the checkpoint directory into the models directory

cp -r checkpoint/run1 models/libby
cp models/117M/{encoder.json,hparams.json,vocab.bpe} models/libby/

At which point I could test it using the code provided:

python3 src/interactive_conditional_samples.py --model_name libby

This worked but spewed out a lot of text, very slowly. Adding --length 20 sped it up:

python3 src/interactive_conditional_samples.py --model_name libby --length 20


That was the bulk of it done! I turned interactive_conditional_samples.py into a server and then whipped up a slackbot – it responds to direct questions and occasionally to a random message.
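
The server part really can be tiny. A sketch of the shape of it (not the actual code; generate() here is a stand-in for the GPT-2 sampling logic lifted out of interactive_conditional_samples.py):

from flask import Flask, request, jsonify

app = Flask(__name__)

def generate(prompt):
    # Stand-in for the GPT-2 sampling from interactive_conditional_samples.py,
    # with the retrained "libby" model loaded once at startup and --length 20.
    return "placeholder reply to: " + prompt

@app.route("/generate", methods=["POST"])
def generate_endpoint():
    data = request.get_json(silent=True) or {}
    prompt = data.get("text", "")
    return jsonify({"reply": generate(prompt)})

if __name__ == "__main__":
    app.run(port=5000)

The slackbot can then just POST whatever it hears to /generate and relay the reply back to the channel.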

Putting it on a Raspberry Pi 4 was very very easy. Startlingly so.


It’s been an interesting exercise, and mostly very funny. These bots have the capacity to surprise you and come up with the occasional apt response (I’m cherrypicking).


We’ve been talking a lot at work about personal data and what we would do with our own, particularly messages with friends and the pleasure of scrolling back and finding old jokes and funny messages. My messages were mostly of the “could you get some milk?” “here’s a funny picture of the cat” type, but it covered a long period and there were also two very sad events in there. Parsing the data and coming across those again was a vivid reminder that this kind of personal data can be an emotional minefield and not something to be trivially messed with by idiots like me.

Also: while GPT-2 means there’s plausible deniability about any utterance, a bot like this can leak personal information of various kinds, such as names and regurgitated fragments of real messages. Unsurprisingly it’s not the kind of thing I’d be happy making public as is, and I’m not sure if it ever could be.

 

 

Posted at 18:06

October 26

Libby Miller: Tensorflow – saveModel for tflite

I want to convert an existing model to one that will run on a USB stick ‘accelerator’ called Coral. Conversion to tflite is needed for any small devices like these.

I’ve not managed this yet, but here are some notes. I’ve figured out some of it, but came unstuck because some operations (‘ops’) are not supported in tflite yet. Maybe this is still useful to someone, and I want to remember what I did.

I’m trying to change a TensorFlow model – for which I only have .meta and .index files – to one with .pb files or variables, which seems to be called a ‘SavedModel’. These have some interoperability, and appear to be a prerequisite for making a tflite model.

Here’s what I have to start with:

ls models/LJ01-1/
model_gs_933k.data-00000-of-00001.1E0cbbD3  
model_gs_933k.meta
model_gs_933k.data-00000-of-00001           
model_gs_933k.index                         
model_gs_933k.meta.289E3B1a

Conversion to SavedModel

First, create a SavedModel (this code is for TensorFlow 1.3; in 2.0 it’s a simple conversion using a command-line tool).

import tensorflow as tf
model_path = 'LJ01-1/model_gs_933k'
output_node_names = ['Merge_1/MergeSummary']    
loaded_graph = tf.Graph()

with tf.Session(graph=loaded_graph) as sess:
    # Restore the graph
    sess.run(tf.global_variables_initializer())
    saver = tf.train.import_meta_graph(model_path+'.meta')
    # Load weights
    saver.restore(sess,model_path)
    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    builder = tf.saved_model.builder.SavedModelBuilder('new_models')
    op = sess.graph.get_operations()
    input_tensor = [m.values() for m in op][1][0]
    output_tensor = [m.values() for m in op][len(op)-1][0]

    # https://sthalles.github.io/serving_tensorflow_models/
    tensor_info_input = tf.saved_model.utils.build_tensor_info(input_tensor)
    tensor_info_output = tf.saved_model.utils.build_tensor_info(output_tensor)
    prediction_signature = (
      tf.saved_model.signature_def_utils.build_signature_def(
         inputs={'x_input': tensor_info_input},
         outputs={'y_output': tensor_info_output},
         method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
      signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
          prediction_signature
      },
      )
    builder.save()

I used

output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
print(output_node_names)

to find out the names of the input and output ops.

That gives you a directory (new_models) like

new_models/variables
new_models/variables/variables.data-00000-of-00001
new_models/variables/variables.index
new_models/saved_model.pb

Conversion to tflite

Once you have that, you can use the command-line tool tflite_convert (examples):

tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops

This does the conversion to tflite. And it will probably fail, e.g. mine did this:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CAST, CONCATENATION, CONV_2D, DIV, EXP, EXPAND_DIMS, FLOOR, GATHER, GREATER_EQUAL, LOGISTIC, MEAN, MUL, NEG, NOT_EQUAL, PAD, PADV2, RSQRT, SELECT, SHAPE, SOFTMAX, SPLIT, SQUARED_DIFFERENCE, SQUEEZE, STRIDED_SLICE, SUB, SUM, TRANSPOSE, ZEROS_LIKE. Here is a list of operators for which you will need custom implementations: BatchMatMul, FIFOQueueV2, ImageSummary, Log1p, MergeSummary, PaddingFIFOQueueV2, QueueDequeueV2, QueueSizeV2, RandomUniform, ScalarSummary.

You can add --allow_custom_ops to that, which will let everything through – but it still won’t work if there are ops that are not supported in tflite – you have to write custom operators for the ones that don’t yet work (I’ve not tried this).

But it’s still useful to use --allow_custom_ops, i.e.

tflite_convert --saved_model_dir=new_models --output_file=model.tflite --enable_select_tf_ops --allow_custom_ops

because you can visualise the graph once you have a tflite file, using netron, which is quite interesting, although I suspect it doesn’t work for the bits it passed through but doesn’t support.

>>> import netron 
>>> netron.start('model.tflite')
Serving 'model.tflite' at http://localhost:8080
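
For completeness: the same conversion can also be attempted from Python rather than the command line. A minimal sketch, assuming a TensorFlow build recent enough to have the Select TF ops option:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('new_models')

# Roughly equivalent to --enable_select_tf_ops and --allow_custom_ops above.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter.allow_custom_ops = True

tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)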

 

Posted at 20:06

October 24

Sebastian Trueg: TPAC 2012 – Who Am I (On the Web)?

Last week I attended my first TPAC ever – in Lyon, France. Coming from the open-source world and events such as Fosdem or the ever-brilliant Akademy, I was not sure what to expect. Should I pack a suit? On arrival all my fears were blown away by an incredibly well organized event with a lot of nice people. I felt very welcome as a newbie; there was even a breakfast for the first-timers with some short presentations to get an overview of the W3C‘s work in general and the existing working groups. So before getting into any details: I would love this to become a regular thing (not sure it will though, seeing that next year the TPAC will be in China).

My main reason for going to TPAC was identity on the Web, or WebID for short. OpenLink Software is a strong supporter of the WebID identification and authentication system, so it was important to be present for the meeting of the WebID community group.

The meeting, with roughly 15 people, spawned some interesting discussions. The most heatedly debated topic was that of splitting the WebID protocol into two parts: 1. identification and 2. authentication. The reason for this is not technical at all but rather political. The WebID protocol, which uses public keys embedded in RDF profiles and X.509 certificates containing a personal profile URL, has always had trouble being accepted by several working groups and people. So in order to lower the barrier for acceptance and to level the playing field, the idea was to split the part which is indisputable (at least in the semantic web world) from the part that people really have a problem with (TLS).

This led to a very simple definition of a WebID, which I will repeat in my own words since it is not written in stone yet (or rather “written in spec”):

A WebID is a dereferenceable URI which denotes an agent (person, organization, or software). It resolves to an RDF profile document uniquely identifying the agent.

Here “uniquely identify” simply means that the profile contains some relation of the WebID to another identifier. This identifier can be an email address (foaf:mbox), it can be a Twitter account, an OpenID, or, to restore the connection to the WebID protocol, a public key.
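
To make that concrete, here is a tiny sketch of such a profile built with rdflib; the WebID and the identifiers are invented for illustration:

from rdflib import Graph, URIRef, Literal
from rdflib.namespace import RDF, FOAF

g = Graph()

# An invented WebID: a hash URI that dereferences to this profile document.
webid = URIRef("https://example.org/people/alice/card#me")

g.add((webid, RDF.type, FOAF.Person))
g.add((webid, FOAF.name, Literal("Alice Example")))

# Relations that "uniquely identify" the agent in the sense above.
g.add((webid, FOAF.mbox, URIRef("mailto:alice@example.org")))
g.add((webid, FOAF.account, URIRef("https://twitter.com/alice_example")))

print(g.serialize(format="turtle"))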

The nice thing about this separation of identity and authentication is that the WebID is now compatible with any of the authentication systems out there. It can be used with WebID-Auth (this is what I call the X.509 certificate + public-key-in-agent-profile system formerly known as WebID), but also with OpenID or even with OAuth. Imagine a service provider like Google returning a WebID as part of the OAuth authentication result. In the case of an OpenID, the OpenID itself could be the WebID, or another WebID could be returned after successful authentication. Then the client could dereference it to get additional information.

This is especially interesting when it comes to WebACLs. Now we could imagine defining WebACLs on WebIDs from any source. Using mutual owl:sameAs relations, these WebIDs could be made to denote the same person, which the authorizing service could then use to build a list of identifiers that map to the one used in the ACL rule.

In any case this is a definition that should pose no problems to such working groups as the Linked Data Protocol. Even the OpenID or OAuth community should see the benefits of identifying people via URIs. In the end the Web is a Web of URIs…

Posted at 21:06

October 06

Andrew Matthews: Knowledge Graphs 101

This is the first in a short series introducing Knowledge Graphs. It covers just the basics, showing how to write, store, query and work with graph data using RDF (short for Resource Description Framework). I will keep it free of theory and interesting but unnecessary digressions. Let me know in the comments if you find […]

Posted at 23:06

Andrew Matthews: Preparing a Project Gutenberg ebook for use on a 6″ ereader

For a while I’ve been trying to find a nice way to convert Project Gutenberg books to look pleasant on a BeBook One. I’ve finally hit on the perfect combination of tools that produces documents ideally suited to 6″ eInk ebook readers like my BeBook. The tool chain involves using GutenMark to convert the file […]

Posted at 23:06

Andrew Matthews: Some pictures of Carlton Gardens

Carlton Gardens, a set on Flickr. This was my first outing with the Pentax K-x that I got recently. In these pictures, I’m trying to get to grips with the camera, so I didn’t have any particular objective other than to take pictures. The light was so harsh it was very difficult for me to […]

Posted at 23:06

Andrew Matthews: Note to Self: Convert UTF-8 w/ BOM to ASCII (WIX + DB) using GNU uconv

This one took me a long time to work out, and it took a non-latin alphabet user (Russian) to point me at the right tools. Yet again, I’m guilty of being a complacent anglophone. I was producing a database installer project using WIX 3.5, and ran into all sorts of inexplicable problems, which I finally […]

Posted at 23:06

Andrew Matthews: Automata-Based Programming With Petri Nets – Part 1

Petri Nets are extremely powerful and expressive, but they are not as widely used as state machines. That's a pity, since they allow us to solve problems beyond the reach of state machines. This post is the first in a mini-series on software development with Petri Nets. All of the code for a full feature-complete Petri Net library is available online on GitHub. You're welcome to take a copy, play with it and use it in your own projects.

Posted at 23:06

Andrew Matthews: Quantum Reasoners Hold Key to Future Web

Last year, a company called DWave Systems announced their quantum computer (the ‘Orion’) – another milestone on the road to practical quantum computing. Their controversial claims seem worthy in their own right but they are particularly important to the semantic web (SW) community. The significance to the SW community was that their quantum computer solved […]

Posted at 23:06

Andrew Matthews: Semantic Overflow Highlights I

Semantic Overflow has been active for a couple of weeks. We now have 155 users and 53 questions. We’ve already had some very interesting questions and some excellent detailed and thoughtful responses. At Egon’s instigation, I thought I’d bring together, from the site’s BI stats, some of the highlights of last week. The best loved […]

Posted at 23:06

Andrew Matthews: www.SemanticOverflow.com – the Web 2.0 Q&A site for all things Web 3.0.

www.SemanticOverflow.com is a new site based on the hugely popular StackOverflow.com, devoted to Q&A on anything related to the semantic web. The site is very new (created today) and I’m trying to get as many people to visit as I can, so please come and post your questions and together we’ll create a thriving community […]

Posted at 23:06

Andrew Matthews: Quote of the Day – Chris Sells on Cocktail Parties

I can relate to this: I’ll take a lake of fire any day over more than three strangers in a room with which I share no common task and with whom I’m expected to socialize How to express this to my wife without her thinking that I am suffering from a combination of acrophobia and […]

Posted at 23:06

Andrew Matthews: Australian Port – a new WMD?

Proving that Cockroaches are not indestructible, Kerry neatly (if inadvertently) demonstrated that Australian port is capable of killing things that heat, cold and lethal levels of ionizing radiation cannot. Of course Kerry was gagging for days just at the thought that the thing had been in her glass all along – it probably hadn’t – […]

Posted at 23:06

Andrew Matthews: Relational Modeling? Not as we know it!

... there's plenty of ways that RDF specifically addresses the problems it seeks to address - data interchange, standards definition, KR, mashups - in a distributed web-wide way. RDBMSs address the problems that were faced by programmers at the coal face in the 60s and 70s - Efficient, Standardized, platform-independent data storage and retrieval. The imperative that created a need for RDBMSs in the 60s is not going away, so I doubt databases will be going away any time soon either. In fact they can be exposed to the world as triples without too much trouble. The problem is that developers need more than just data storage and retrieval. They need intelligent data storage and retrieval.

Posted at 23:06

Andrew Matthews: Pattern Matching in C#

I recently used Matthew Podwyszocki’s pattern matching classes for a top level exception handler in an App I’m writing. Matthew’s classes are a really nice fluent interface attaching predicates to functions generating results. I used it as a class factory to select between handlers for exceptions. Here’s an example of how I used it: ExceptionHandler […]

Posted at 23:06

Andrew Matthews: Object Orientation? Not as we know it.

I thought I’d start with a lyric: That one’s my mother and That one’s my father and The one in the hat, that’s me. You could be forgiven for wondering what Ani Difranco has to do with this blog’s usual themes, but rest assured, I won’t stray too far. My theme today is the limitations […]

Posted at 23:06

Andrew Matthews: New Resources for LinqToRdf

John Mueller recently sent through a link to a series of articles on working with RDF. As well as being a useful introduction to working with RDF, they use LinqToRdf for code examples. Modeling your Data with RDF (Part 1) Understanding and Using Resource Description Framework Files (Part 2) They provide information on hosting RDF […]

Posted at 23:06

Andrew Matthews: Not another mapping markup language!

Kingsley Idehen has again graciously given LinqToRdf some much needed link-love. He mentioned it in a post that was primarily concerned with the issues of mapping between the ontology, relational and object domains. His assertion is that LinqToRdf, being an offshoot of an ORM related initiative, is reversing the natural order of mappings. He believes […]

Posted at 23:06

Andrew Matthews: Semantic Development Environments

The semantic web is a GOOD THING by definition – anything that enables us to create smarter software without also having to create Byzantine application software must be a step in the right direction. The problem is – many people have trouble translating the generic term “smarter” into a concrete idea of what they would […]

Posted at 23:06

Andrew Matthews: White Paper: Exploiting the RDF-based Linked Data Web using .NET via LINQ

OpenLink has recently posted an excellent white paper on using LinqToRdf with Virtuoso and the Virtuoso Sponger: Recently OpenLink has been investigating LinqToRdf, an exciting project from Andrew Matthews which aims to bring the Semantic Web to .NET. Because of their language bindings and heritage, existing RDF APIs such as Sesame, Jena and Redland predominantly favour […]

Posted at 23:06

Andrew Matthews: Announcing LinqToRdf v0.8

I’m very pleased to announce the release of version 0.8 of LinqToRdf. This release is significant for a couple of reasons. Firstly, because it provides a preview release of RdfMetal and secondly because it is the first release containing changes contributed by someone other than yours truly. The changes in this instance being provided by […]

Posted at 23:06

Andrew Matthews: LinqToRdf v0.7.1 and RdfMetal

I’ve just uploaded version 0.7.1 of LinqToRdf. This bug fix release corrects an issue I introduced in version 0.7. The issue only seemed to affect some machines and stems from the use of the GAC by the WIX installer (to the best of my knowledge). I’ve abandoned GAC installation and gone back to the original […]

Posted at 23:06


Copyright of the postings is owned by the original blog authors. Contact us.