Misha, thank you, this looks very interesting, but I'm rather a newbie here. Could you answer a first batch of my possibly silly questions, or point me to some introduction? I started learning ML more or less seriously only recently, and a quick look into the papers you linked makes one thing clear: this is yet another interesting story that I've started listening to from the middle of the book.
1. Why can't a single neuron with multiple inputs be approximated by a cascade of layers of ordinary neurons? Or, put differently, what are the advantages of a single multi-input neuron over a subnetwork of ordinary neurons? (I try to make this concrete in the first sketch after this list.)
2. Are the multiple inputs of a DMM neuron supposed to have different roles/interpretations (as in your example of a "main signal" and a "modulation signal", or like the 'recurrent' connections in an RNN), or are they meant to be fed from homogeneous streams, like the inputs to the linear combination before the activation function in a regular neuron? Or is it up to the designer of the network?
3. Do I understand correctly that, in general, the number of inputs of a DMM neuron is not necessarily fixed?
4. And that new neurons may even be created dynamically? Only at training time, or even while processing a test stream?
5. "friendly to handling sparse vectors" is friendly only to "one-of-all" representation as for letters in an alphabet or there is some more generic friendship with sparsity?
I'm sorry, I can only guess how silly my understanding may be.
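P.S. And here is the toy picture behind questions 3-5, again with all names and structure invented by me; I would be glad to know how far it is from the real thing:

```python
# The network is a plain dict, so a neuron's input list is not fixed
# and new neurons can be added on the fly.
network = {
    'n1': {'inputs': ['a', 'b'], 'f': sum},
}

def add_neuron(name, inputs, f):
    """Create a neuron dynamically (in principle even mid-stream)."""
    network[name] = {'inputs': inputs, 'f': f}

def step(values):
    """One update: each neuron applies its function to its current inputs."""
    return {name: spec['f']([values.get(i, 0.0) for i in spec['inputs']])
            for name, spec in network.items()}

add_neuron('n2', ['n1', 'c'], max)  # a neuron created after construction

# Question 5: "vectors" as dicts over a large key set, storing only the
# nonzero coordinates; a one-hot letter is {'q': 1.0}, but nothing here
# restricts such a vector to being one-hot.
letter = {'q': 1.0}
blend = {'q': 0.5, 'z': 0.5}

print(step({'a': 1.0, 'b': 2.0, 'c': -1.0}))  # -> {'n1': 3.0, 'n2': 0.0}
```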