Submodularity and the set cover problem

If a function on sets is submodular, approximation bounds can be proven for optimisation problems involving that function. One such case is the set cover problem. This post contains my notes on the problem and shows how the approximation bound can be proven.
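For reference, a set function \(f\) on a ground set \(V\) is submodular if its marginal gains are diminishing: adding an element helps less as the set grows. A minimal statement of the property is

\[
f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B)
\qquad \text{for all } A \subseteq B \subseteq V,\; x \in V \setminus B.
\]

The set cover objective (the number of elements covered) is monotone and satisfies this, and that diminishing-returns property is what the greedy algorithm's approximation guarantee rests on.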

Read More

A new language

Over the weekend I created a new programming language with functional programming elements, objects, and macros.

Read More

A taste of a taste of rewrite systems

Rewriting is the process of repeatedly replacing terms in a formula with other terms. For example, the rule \(\neg\neg x \rightarrow x\) can be applied to the formula \(\neg \neg \neg \neg a\) to get \(a\). This seems important but I needed a bit more background to understand it. These are my notes on some of the concepts in the survey paper A Taste of Rewrite Systems.
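As a toy illustration (mine, not from the survey), here is a minimal sketch of that single rule being applied repeatedly until no rewrite is possible. Formulas are represented as nested tuples such as ('not', ('not', 'a')); the representation and function names are just placeholders for the idea.

```python
# A toy term rewriter for the single rule ¬¬x → x.
# A term is either a string (a variable like 'a') or the tuple ('not', t).

def rewrite_once(term):
    """Apply ¬¬x → x at one position if possible; return (new_term, changed)."""
    if isinstance(term, tuple) and term[0] == 'not':
        inner = term[1]
        # The rule matches: ¬¬x → x
        if isinstance(inner, tuple) and inner[0] == 'not':
            return inner[1], True
        # Otherwise try to rewrite below the negation.
        new_inner, changed = rewrite_once(inner)
        return ('not', new_inner), changed
    return term, False

def normalise(term):
    """Rewrite until no rule applies (a normal form for this terminating system)."""
    changed = True
    while changed:
        term, changed = rewrite_once(term)
    return term

# ¬¬¬¬a  →  ¬¬a  →  a
print(normalise(('not', ('not', ('not', ('not', 'a'))))))  # prints 'a'
```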

Read More

Loopy glitches

While developing a loopy belief propagation library I came across a bug that produced this interesting image.

[Image: loopy belief propagation glitch]
Read More

Chainer character embeddings

(Continues from Numpy character embeddings.) The numpy embedding model turned out to be extremely slow because it wasn’t vectorised. Chainer is a Python deep learning package that lets us implement the model easily with automatic differentiation, and the resulting vectorised operations are fast and can be run on a GPU if you want. In this post I’ll explore how the different optimisers perform out of the box.
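As a rough sketch of the kind of setup being compared (the architecture, sizes, and toy batch below are placeholders, not the post's actual model), swapping the optimiser is a one-line change in Chainer:

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers

class CharEmbedModel(chainer.Chain):
    """Toy model: predict the next character from an embedding of the current one."""
    def __init__(self, n_chars=128, dim=16):
        super(CharEmbedModel, self).__init__()
        with self.init_scope():
            self.embed = L.EmbedID(n_chars, dim)   # character embedding table
            self.out = L.Linear(dim, n_chars)      # project back to character logits

    def __call__(self, x):
        return self.out(self.embed(x))

model = CharEmbedModel()
optimizer = optimizers.Adam()   # try optimizers.SGD(), optimizers.AdaGrad(), ...
optimizer.setup(model)

# Toy batch: current characters and the characters that follow them.
x = np.array([104, 101, 108, 108], dtype=np.int32)   # 'h', 'e', 'l', 'l'
t = np.array([101, 108, 108, 111], dtype=np.int32)   # 'e', 'l', 'l', 'o'

loss = F.softmax_cross_entropy(model(x), t)
model.cleargrads()
loss.backward()
optimizer.update()
```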

Read More

Markov random fields and socionics

Socionics comes across as the more serious and academic eastern European cousin of MBTI (which is much better known by English speakers). Although many of the same criticisms that apply to personality theories/tools such as MBTI also apply to socionics, I find these models fascinating. Here I’ll use socionics to set up a fun application/demonstration of Markov random fields.

Read More

Image segmentation with loopy belief propagation

pyugm is a package for learning (at this stage, discrete) undirected graphical models in Python. It implements loopy belief propagation (LBP) on cluster graphs and Gibbs sampling for inference. In this post I’ll show how a simple image segmentation model can be built and calibrated.

Read More

Hidden conditional random fields

When we want to classify sequences, hidden conditional random fields (HCRFs) are, if we set recurrent neural networks aside for a moment, the discriminative counterparts to hidden Markov models.
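In the usual hidden-state CRF formulation (not necessarily the exact parameterisation used in the post), the label \(y\) for an observed sequence \(\mathbf{x}\) is scored by marginalising over a hidden state sequence \(\mathbf{h}\):

\[
p(y \mid \mathbf{x}; \theta)
= \frac{\sum_{\mathbf{h}} \exp \Psi(y, \mathbf{h}, \mathbf{x}; \theta)}
       {\sum_{y'} \sum_{\mathbf{h}} \exp \Psi(y', \mathbf{h}, \mathbf{x}; \theta)},
\]

which is what makes the model discriminative: it normalises over labels given the observations, rather than modelling the observations generatively as an HMM does.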

Read More

Spelling correction with pyhacrf

In this post I’ll describe how to generate suggestions for incorrectly spelled words using the pyhacrf package. pyhacrf implements the Hidden Alignment Conditional Random Field (HACRF) model in Python with a sklearn-like interface.
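As a rough sketch of the sklearn-style workflow, here is roughly how training and scoring might look. The class and method names (StringPairFeatureExtractor, Hacrf, fit, predict_proba) are recalled from the package's README and the example pairs are made up, so treat the details as assumptions and check the pyhacrf documentation.

```python
from pyhacrf import StringPairFeatureExtractor, Hacrf

# Training pairs of (misspelling, candidate) labelled as matching or not.
# These example pairs are illustrative only.
training_pairs = [('kaos', 'chaos'), ('hapily', 'happily'), ('dog', 'quantum')]
training_labels = ['match', 'match', 'non-match']

# Turn each string pair into a lattice of character-level features.
extractor = StringPairFeatureExtractor(match=True, numeric=True)
training_features = extractor.fit_transform(training_pairs)

model = Hacrf()
model.fit(training_features, training_labels)

# Score candidate corrections for a new misspelling.
candidates = [('hapy', 'happy'), ('hapy', 'harpy')]
print(model.predict_proba(extractor.transform(candidates)))
```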

Read More

Hello world!

First post! I plan to add some posts about the pyhacrf project soon, and later something on probabilistic graphical models and relational learning with deep networks.

Read More