http://datjko.livejournal.com/ ([identity profile] datjko.livejournal.com) wrote in [personal profile] anhinga_anhinga 2017-01-04 11:11 am (UTC)

Misha, they say that "There's no sparse variable in TensorFlow":
https://groups.google.com/a/tensorflow.org/forum/?utm_medium=email&utm_source=footer#!msg/discuss/LXJIWkil_mc/IEWI1398EQAJ

Period. Currently they can't do gradient steps on sparse weights.
I have yet to learn what people do for ML on large graphs, for example. I believe there must be a workaround.
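One candidate workaround I can think of (I'm not sure it's what the thread actually suggests, and the names and shapes below are purely illustrative) is to keep the weights in a dense Variable but only touch a few rows of it per step through tf.nn.embedding_lookup; the gradient with respect to that variable then comes back as IndexedSlices, so the optimizer updates only those rows. Roughly, in TF 1.x style:

import tensorflow as tf

emb = tf.Variable(tf.random_normal([100000, 64]))  # storage is still dense
ids = tf.placeholder(tf.int32, [None])             # a handful of row indices per step
rows = tf.nn.embedding_lookup(emb, ids)            # gather only the rows we need
loss = tf.reduce_sum(tf.square(rows))              # toy loss over the gathered rows
# the gradient w.r.t. emb is an IndexedSlices, so only the looked-up
# rows get touched by the update
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

Of course the variable itself still lives in memory densely, so this gives sparse updates rather than truly sparse weights.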

Before you pointed me to sparse tensors in TF, I was thinking of building a kd-tree out of a sparse tensor and applying to that tree an RNN adapted for recurrent processing of trees (instead of the usual recurrent processing of sequences). This way all weights are dense, and, arguably, such a recursive tree RNN might mimic plain convolution-like updates or even have some advantage over them. On the other hand, in a kd-tree spatially neighboring cells (leaves) can be quite far from their nearest common ancestor. That means the tree RNN would have to learn spatial neighborhood relations between nodes separated by quite long traversals through the tree, which makes me skeptical about this approach.
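To make the idea a bit more concrete, here is a rough NumPy sketch (all names are mine and purely illustrative; a real version would of course need proper TF ops, batching, and training): split the point cloud recursively along alternating axes, then fold the resulting tree bottom-up with one shared merge matrix, so all weights are dense.

import numpy as np

class Leaf:
    def __init__(self, feature):
        self.feature = feature                      # feature vector of one point / cell

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right

def build_kdtree(points, features, depth=0):
    # recursively split the cloud along alternating coordinate axes
    if len(points) == 1:
        return Leaf(features[0])
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    points, features = points[order], features[order]
    mid = len(points) // 2
    return Node(build_kdtree(points[:mid], features[:mid], depth + 1),
                build_kdtree(points[mid:], features[mid:], depth + 1))

class TreeRNN:
    # all weights are dense: one leaf projection and one merge matrix,
    # shared across the whole tree and applied recursively bottom-up
    def __init__(self, in_dim, hid_dim):
        self.W_leaf = 0.1 * np.random.randn(hid_dim, in_dim)
        self.W_merge = 0.1 * np.random.randn(hid_dim, 2 * hid_dim)

    def __call__(self, node):
        if isinstance(node, Leaf):
            return np.tanh(self.W_leaf @ node.feature)
        h = np.concatenate([self(node.left), self(node.right)])
        return np.tanh(self.W_merge @ h)

# toy usage: 8 random 3-D points with 4-D features -> one 16-D code for the cloud
pts = np.random.rand(8, 3)
feats = np.random.rand(8, 4)
code = TreeRNN(in_dim=4, hid_dim=16)(build_kdtree(pts, feats))
print(code.shape)   # (16,)

The root code has a fixed size no matter how many points went in; the worry above is exactly that two leaves which are spatial neighbors may only meet near the root of this tree.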

Currently I have no idea when I will be able to play with such kd-tree RNNs, but if I come across something inspiring for this technique I may want to give the idea a try as seriously as I can. Basically, I want to make it possible to apply DL to point clouds (which are, generically, sparse samples of whatever, embedded into whatever space).
