RECURRENT NEURAL NETWORK
It is my distinct honour and privilege to welcome you to the Journal of Theoretical and Computational Science.
The Journal of Theoretical and Computational Science aims to spread knowledge and promote discussion through the publication of peer-reviewed, high-quality research papers on all topics related to modern scientific techniques. The open-access journal is published by Longdom Publishing, which hosts open-access, peer-reviewed journals and organizes conferences that showcase the work of researchers in a manner exemplifying the highest standards of research integrity.
A recurrent neural network (RNN) is a class of artificial neural networks in which connections between nodes form a directed graph along a temporal sequence. This allows the network to exhibit temporal dynamic behaviour. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable-length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition and speech recognition.
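To make the idea of internal state concrete, the following is a minimal sketch of a single-unit Elman-style RNN in plain Python. The weights, function names, and toy input sequence are illustrative choices, not part of any particular published model: at each time step the new hidden state is computed from the current input and the previous hidden state, which is how the network carries memory across a variable-length sequence.

```python
import math

def rnn_step(x, h, w_xh, w_hh, b):
    """One recurrent step: new hidden state from input x and previous state h."""
    return math.tanh(w_xh * x + w_hh * h + b)

def run_rnn(sequence, w_xh=0.5, w_hh=0.8, b=0.0):
    """Process a variable-length input sequence, carrying the hidden state forward."""
    h = 0.0  # initial hidden state (the network's "memory")
    states = []
    for x in sequence:
        h = rnn_step(x, h, w_xh, w_hh, b)
        states.append(h)
    return states

# A pulse followed by silence: the hidden state remains nonzero after the
# input returns to zero, i.e. the network remembers the earlier input.
states = run_rnn([1.0, 0.0, 0.0])
```

Note that the same `run_rnn` function handles sequences of any length, which is exactly the property that makes RNNs suitable for unsegmented handwriting or speech input.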
Both finite-impulse and infinite-impulse recurrent networks can have additional stored states, and this storage can be under the direct control of the neural network. The storage can also be replaced by another network or graph, provided it incorporates time delays or feedback loops. Such controlled states are referred to as gated states or gated memory, and they are part of long short-term memory networks (LSTMs) and gated recurrent units (GRUs). Recurrent networks of this kind are also sometimes called feedback neural networks.
We encourage you to submit your research within the scope of the Journal of Theoretical and Computational Science.
Kind regards,
Angelina
Managing Editor
Journal of Theoretical and Computational Science
WhatsApp: +3225889658