http://datjko.livejournal.com/ ([identity profile] datjko.livejournal.com) wrote in [personal profile] anhinga_anhinga 2017-01-03 07:14 am (UTC)

I keep reading in small portions from time to time.

1.
"
let's agree that the first level of hash-map keys would name the respective input and output streams and map to their latest values.
...
Therefore the root keys should be types of neurons.
"

Neuron types or instances of neurons? OK, I see: instances are the next level.
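
Just to check my understanding of the nesting, I'd picture it like this minimal sketch in Python (all type/instance/stream names here are made up by me, not from your post):

    state = {
        "plus": {                  # root key: a neuron type
            "adder-1": {           # next level: a neuron instance
                "accum": 0.5,      # stream name -> its latest value
            },
        },
        "identity": {
            "id-1": {"out": 1.0},
        },
    }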

2.
"
A map from names of the built-in transformers to (the map from names of the individual neurons to (the map from names of their output streams to the respective non-zero matrix coefficients)).
"

In the first quote you spoke about naming both input and output streams, but here only about output streams?
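
To make the question concrete, here is how I currently picture the full matrix, with the chains of keys flattened into tuples; the row side indexes neuron input streams, the column side output streams, and all the concrete names are my own inventions:

    matrix = {
        # destination: an input stream of a neuron...
        ("plus", "adder-1", "x"): {
            # ...maps source output streams to non-zero coefficients
            ("identity", "id-1", "out"): 0.7,
            ("plus", "adder-1", "accum"): 1.0,  # a recurrent self-loop
        },
    }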

3. Am I right that you don't keep the computational graph in mind "as a picture", and rather think of it as something implicitly given by the "computation matrix"?

Maybe the pictures of usual neural nets confuse me here. It seems I understood the idea. Could you please check whether I'm wrong in the following:

The "computation matrix" *is* that prefix tree of chains of keys that we were talking about (aka recurrent map), and a rule for computing the next step is to traverse it up to each individual neuron and apply the neuron's function to the subtree of arguments, producing a new subtree of output values, which is then split&replicated&wrapped with appropriate neurons keys to make up a computation matrix for next step.

I have a feeling that in the papers this (or some more generic operation?) is meant as part of some simple transformation of a giant (or even countably-dimensional) but sparse matrix or tensor, is that right? Most likely it's explained in the papers you gave links to, but I have not read or understood them yet. Do you have a simple hint about what that matrix transformation is, the one that would apply the neuron functions to the appropriate data and produce a new computation matrix? Or maybe an example? (I tried to sketch my own guess below.) Tomorrow I'll try to grasp the example of the interview task solution in https://arxiv.org/abs/1606.09470, so don't hurry with answers if you think it's better to wait for me there.
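
My current guess at that transformation, under the same flattened-key convention as in the sketch above: applying the matrix is just a sparse matrix-vector product over the output values, and then the two strokes alternate:

    def down_move(matrix, outputs):
        # each row maps a neuron input stream to the non-zero
        # coefficients over source output streams
        inputs = {}
        for dst, row in matrix.items():
            inputs[dst] = sum(w * outputs.get(src, 0.0)
                              for src, w in row.items())
        return inputs

    # one full step of the two-stroke engine, if I got it right:
    # outputs = up_move(neuron_fns, down_move(matrix, outputs))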

<s>
at any moment of time t (and on the up movement of the two-stroke engine) there is a directed bipartite computational graph g_t with neurons as edges (or as terminal source or sink nodes), transforming an input recurrent-map vector of values (generated for the moment t) into an output recurrent-map vector of values. The nodes of g_t are neurons' inputs and outputs (in the input and output partitions of the graph, respectively), where several recurrent-map streams are merged (concatenated) and split (or replicated).
</s> (I wanted to erase this statement because I thought I came to a better idea above, after thinking about the recurrent map vs. a vector of values for the input/output streams; I left it here just in case it is worth any comments.)

4. What are the requirements for neuron functions? Generically, at least they are assumed to always produce exactly zero on zero input, right? Are there other requirements? Stateless? Deterministic?
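
For instance, I'd expect a toy function like this one to qualify (purely my own illustration): stateless, deterministic, and yielding exactly zero when all its inputs are zero or absent:

    def plus(args):
        # sums all input streams into a single output stream "out";
        # plus({}) == {"out": 0}
        return {"out": sum(args.values())}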

