anhinga_anhinga ([personal profile] anhinga_anhinga) wrote, 2016-12-27 01:29 am

Dataflow matrix machines as generalized recurrent neural networks

A year ago I posted about dataflow programming and linear models of computation:

http://anhinga-anhinga.livejournal.com/82757.html

It turns out that those dataflow matrix machines are a fairly powerful generalization of recurrent neural networks.

The main feature of dataflow matrix machines (DMMs) is vector neurons. While recurrent neural networks process streams of numbers, dataflow matrix machines process streams of representations of arbitrary vectors (linear streams).

Another important feature of DMMs is that neurons of arbitrary input and output arity are allowed, and a rich set of built-in transformations of linear streams is provided.

Recurrent neural networks are Turing-complete, but they are an esoteric programming language rather than a convenient general-purpose programming platform. DMMs provide a formalism friendly to handling sparse vectors, conditionals, and more, and there are indications that DMMs will grow into a powerful general-purpose programming platform, in addition to being a convenient machine learning platform.

In this context, it is possible to represent large classes of programs by matrices of real numbers, which allows us to modify programs in a continuous fashion and to synthesize programs by synthesizing matrices of real numbers.
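To give a rough feel for "program as matrix", here is a toy sketch (invented names, with scalar streams as the degenerate case of linear streams; this is not the API of the Clojure implementation linked below). The network alternates two phases: each neuron applies its built-in transform to its input, and then the matrix of weights recombines all outputs into the next inputs.

```clojure
;; Toy sketch: a "program" is a matrix of weights plus a built-in
;; transform per neuron (scalar streams for brevity).
(def transforms
  {:n1 #(Math/tanh %)   ; a conventional sigmoid-like neuron
   :n2 identity})       ; a linear neuron

(def weights
  {:n1 {:n1 0.5, :n2 1.0}     ; row = input of :n1, keyed by output neuron
   :n2 {:n1 -1.0, :n2 0.0}})  ; row = input of :n2

(defn up-movement
  "Each neuron applies its built-in transform to its input."
  [inputs]
  (into {} (for [[k x] inputs] [k ((transforms k) x)])))

(defn down-movement
  "New inputs are linear combinations of outputs, with matrix weights."
  [outputs]
  (into {} (for [[row cols] weights]
             [row (reduce + (for [[col w] cols] (* w (outputs col))))])))

(def step (comp down-movement up-movement))

;; Editing `weights` continuously edits the program:
;; (take 3 (iterate step {:n1 0.1, :n2 0.2}))
```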

Further details and preprints

Self-referential mechanism: Consider a linear stream of matrices describing the connectivity pattern and weights of a DMM. Select a dedicated neuron Self emitting such a stream on its output, and use the latest value of that stream as the current network matrix (the matrix describing the connectivity pattern and weights of our DMM). A typical Self neuron would work as an accumulator taking additive updates from other neurons in the network. This mechanism provides reflection and powerful dynamic self-modification; in particular, the networks in question can expand themselves dynamically.
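Here is a minimal sketch of that accumulator pattern (invented names, assuming the network matrix is represented as a nested map of weights; not the actual API):

```clojure
;; Sketch of the Self neuron as an accumulator of additive updates.
(defn add-matrices
  "Elementwise addition of two nested maps of numbers."
  [a b]
  (merge-with (fn [x y] (if (map? x) (add-matrices x y) (+ x y))) a b))

(defn self-step
  "One step of Self: the new network matrix is the old one plus the
   additive updates currently offered by other neurons."
  [current-matrix updates]
  (reduce add-matrices current-matrix updates))

;; Example: an update that strengthens one connection and adds another,
;; thereby re-wiring the network on the fly.
;; (self-step {:accum {:self {:delta {:plus {:adder-1 {:sum 1.0}}}}}}
;;            [{:accum {:self {:delta {:plus {:adder-1 {:sum 0.25}
;;                                            :adder-2 {:sum 0.5}}}}}}])
```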

The recent DMM-related preprints by our group:

https://arxiv.org/abs/1603.09002

https://arxiv.org/abs/1605.05296

https://arxiv.org/abs/1606.09470

https://arxiv.org/abs/1610.00831

Modern recurrent neural networks with good machine learning properties, such as LSTM and Gated Recurrent Unit networks, are naturally understood in the DMM framework as networks having linear and bilinear neurons in addition to neurons with more traditional sigmoid activation functions.
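For instance, the multiplicative gating used in these cells can be expressed with bilinear neurons; a toy sketch (invented names, streams reduced to plain vectors of numbers):

```clojure
;; With an update gate z, previous state h, and candidate state h~,
;; the leaky accumulator  h' = (1 - z) * h + z * h~  needs only
;; bilinear neurons (elementwise products); the additions are the kind
;; of linear combinations the network matrix supplies anyway.
(defn bilinear [u v] (mapv * u v))        ; a bilinear neuron
(defn one-minus [z] (mapv #(- 1.0 %) z))  ; an affine neuron

(defn gru-style-update [z h h-tilde]
  (mapv + (bilinear (one-minus z) h)
          (bilinear z h-tilde)))

;; (gru-style-update [0.25 0.75] [4.0 4.0] [0.0 8.0])  ;=> [3.0 7.0]
```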

Our new open-source effort

The new open-source implementation of core DMM primitives in Clojure:

https://github.com/jsa-aerial/DMM

This open-source implementation features a new vector space of recurrent maps (space of "mixed rank tensors"), which allows us to represent a large variety of linear streams as streams of recurrent maps. The vector space of recurrent maps also makes it possible to express variadic neurons as neurons having just one argument.

Therefore a type of neuron is simply a function transforming recurrent maps, which is a great simplification compared to the formalism presented in the preprints above. See the design notes within this open-source implementation for further details.
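A hedged illustration (the names here are invented for the example; see those design notes for the actual conventions): a recurrent map is a nested map with numbers at the leaves, and a formerly variadic neuron becomes a one-argument function which finds its former arguments as named subtrees of a single recurrent map.

```clojure
;; A recurrent map: a nested map with numbers at the leaves; different
;; subtrees may have different nesting depth ("mixed rank tensor").
(def rm-example
  {:bias   2.0
   :signal {:x 2.0, :y {:a 3.0, :b 4.0}}})

;; A one-argument neuron type: a function from recurrent maps to
;; recurrent maps.  Its former separate arguments are simply the
;; subtrees under :bias and :signal.
(defn scale-signal
  "Multiplies every leaf of the :signal subtree by the number under :bias."
  [rm]
  (let [c (:bias rm)
        scale (fn scale [t]
                (if (map? t)
                  (into {} (for [[k sub] t] [k (scale sub)]))
                  (* c t)))]
    {:scaled (scale (:signal rm))}))

;; (scale-signal rm-example)  ;=> {:scaled {:x 4.0, :y {:a 6.0, :b 8.0}}}
```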

[identity profile] anhinga-anhinga.livejournal.com 2017-01-03 03:11 pm (UTC)
Hi!

I'll reply in pieces...

2. Columns correspond to outputs, and they are indexed by this three-level structure: {name of the built-in transformer {name of the individual neuron {name of the output stream ...}}}.

When one considers a particular matrix row, this corresponds to "the second index, which changes when one moves along the matrix row".

Rows correspond to inputs, and they are indexed by this three-level structure: {name of the built-in transformer {name of the individual neuron {name of the input stream ...}}}.

So the matrix elements are indexed by the 6-level structure: {name of the built-in transformer of neuron-i {name of the individual neuron-i {name of the input stream {name of the built-in transformer of neuron-j {name of the individual neuron-j {name of the output stream ...}}}}}}
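In code this six-level indexing is just a nested map; a toy sketch (the concrete names here are invented):

```clojure
;; Rows (inputs) are the outer three levels, columns (outputs) the inner three.
(def network-matrix
  {:accum                    ; built-in transformer of neuron-i
   {:self                    ; individual neuron-i
    {:delta                  ; input stream of neuron-i
     {:plus                  ; built-in transformer of neuron-j
      {:adder-1              ; individual neuron-j
       {:sum 0.5}}}}}})      ; output stream of neuron-j -> weight

;; The weight connecting output [:plus :adder-1 :sum] of neuron-j
;; to input [:accum :self :delta] of neuron-i:
;; (get-in network-matrix [:accum :self :delta :plus :adder-1 :sum])  ;=> 0.5
```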

3 (first paragraph). I love visualizing computational graphs. It is always good to have alternative angles of view (to view the network both as a matrix and as a graph).

In some of our earlier prototypes, we did visualize how the dataflow graph of a changing network changes with time. For example, I have this auxiliary livejournal, and its top post has a video (made in June 2015) which shows the changing dataflow graph in the lower left corner: http://anhinga-drafts.livejournal.com/29929.html

This was made before we understood the matrix formalism. And I always wanted to have options to use input interfaces to edit the dataflow graph directly on the graphical level (inspired in this by some visual programming languages like Pure Data). But it's always a lot of software engineering work, and we don't have anything like this yet in our current Clojure system.