Recurrent neural nets have state, unlike feed-forward networks. An analogy for an RNN is the C strtok function: calling it with the same argument typically yields a different value each time (though, unlike strtok, an RNN does not modify its input). An analogy for a feed-forward network is a function in the mathematical sense: y = f(x) no matter how many times it is called.
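The contrast can be sketched in a few lines of Python. The `Tokenizer` class below is a hypothetical strtok-like object (names are mine, not from any library): repeated calls with no new arguments return different results because of hidden state, while the pure function `f` always returns the same output for the same input.

```python
class Tokenizer:
    """A strtok-like tokenizer: hidden state makes repeated calls differ."""

    def __init__(self, text, sep=" "):
        self.tokens = iter(text.split(sep))

    def next_token(self):
        # Same call, no new arguments -- yet the result advances
        # because the iterator position is internal state.
        return next(self.tokens, None)


def f(x):
    # A pure function: same input, same output, every time.
    return x * x


tok = Tokenizer("to be or not")
print(tok.next_token())  # "to"
print(tok.next_token())  # "be" -- same call, different result
print(f(3), f(3))        # 9 9 -- identical on every call
```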

At first I thought what makes an RNN special is that it uses its own output as part of its input. While that's true, after more reading it seems the real magic is the cell state, which the network updates each time it processes an input. In the strtok analogy, this is like how strtok updates its internal pointer past the last token on each call, so the next call returns the next token.
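A minimal sketch of that state update, assuming a vanilla RNN step with scalar weights (the weight values here are made up purely for illustration): the hidden state h depends on both the current input and the previous h, so feeding the same input repeatedly still produces a different state each step.

```python
import math

# Made-up scalar weights for illustration only.
W_x, W_h, b = 0.5, 0.8, 0.1

def rnn_step(x, h):
    # The new hidden state mixes the input with the PREVIOUS state;
    # this is what makes the network stateful.
    return math.tanh(W_x * x + W_h * h + b)

h = 0.0
for x in [1.0, 1.0, 1.0]:
    h = rnn_step(x, h)
    print(h)  # same input each step, yet h keeps changing
```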

So an RNN is like a program, whereas a feed-forward network is like a function.

https://www.tensorflow.org/versions/r0.12/tutorials/recurrent/index.html

http://karpathy.github.io/2015/05/21/rnn-effectiveness/

http://colah.github.io/posts/2015-08-Understanding-LSTMs/
