
Backpropagation through time for stacked RNNs

I was able to find the partial derivative of the cost function with respect to a single parameter without much difficulty. However, this requires propagating backwards through the network once per parameter. Is there a way to do it by propagating backwards through the network only once? For example, for an MLP, one can find the partial derivatives with respect to the activation levels of the neurons in a single backward pass, and then obtain the partial derivatives of the weights and biases by applying the chain rule. Unfortunately, for a stacked RNN this proved far less straightforward, because the parameters are shared across time steps. I suspect it has something to do with ordered derivatives, but I can't find many resources on the topic.
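To make the setup concrete, here is a minimal NumPy sketch (all variable names are my own and purely illustrative) of the kind of single backward sweep I have in mind, for a two-layer vanilla RNN with tanh units and a squared-error loss. The partials with respect to each hidden activation are computed once per layer and time step, and because the same parameter matrices are reused at every step, their gradients are accumulated across the sweep:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dx, dh, dy = 5, 3, 4, 2          # sequence length and layer sizes

# Shared parameters (the same matrices are reused at every time step).
Wx1, Wh1, b1 = rng.normal(0, 0.1, (dh, dx)), rng.normal(0, 0.1, (dh, dh)), np.zeros((dh, 1))
Wx2, Wh2, b2 = rng.normal(0, 0.1, (dh, dh)), rng.normal(0, 0.1, (dh, dh)), np.zeros((dh, 1))
Wy,  by      = rng.normal(0, 0.1, (dy, dh)), np.zeros((dy, 1))

xs      = [rng.normal(size=(dx, 1)) for _ in range(T)]
targets = [rng.normal(size=(dy, 1)) for _ in range(T)]

# ---- forward pass: store the activations the backward pass needs ----
h1 = [np.zeros((dh, 1))]            # h1[t+1] is the layer-1 state at step t
h2 = [np.zeros((dh, 1))]
ys = []
for t in range(T):
    h1.append(np.tanh(Wx1 @ xs[t] + Wh1 @ h1[-1] + b1))
    h2.append(np.tanh(Wx2 @ h1[-1] + Wh2 @ h2[-1] + b2))
    ys.append(Wy @ h2[-1] + by)     # loss is 0.5 * sum ||y_t - target_t||^2

# ---- single backward sweep over time ----
grads = {k: np.zeros_like(v) for k, v in
         dict(Wx1=Wx1, Wh1=Wh1, b1=b1, Wx2=Wx2, Wh2=Wh2, b2=b2, Wy=Wy, by=by).items()}
dh1_next = np.zeros((dh, 1))        # gradient flowing back through Wh1
dh2_next = np.zeros((dh, 1))        # gradient flowing back through Wh2
for t in reversed(range(T)):
    dy_t = ys[t] - targets[t]       # dL/dy_t for the squared-error loss
    grads['Wy'] += dy_t @ h2[t + 1].T
    grads['by'] += dy_t
    # Layer 2: gradient w.r.t. its activation, computed once per step,
    # combining the output path and the recurrent path from step t+1.
    dh2 = Wy.T @ dy_t + dh2_next
    da2 = dh2 * (1 - h2[t + 1] ** 2)          # back through tanh
    grads['Wx2'] += da2 @ h1[t + 1].T         # shared weights: accumulate
    grads['Wh2'] += da2 @ h2[t].T
    grads['b2']  += da2
    dh2_next = Wh2.T @ da2
    # Layer 1 receives gradient from layer 2 above and from step t+1.
    dh1 = Wx2.T @ da2 + dh1_next
    da1 = dh1 * (1 - h1[t + 1] ** 2)
    grads['Wx1'] += da1 @ xs[t].T
    grads['Wh1'] += da1 @ h1[t].T
    grads['b1']  += da1
    dh1_next = Wh1.T @ da1
```

The accumulation lines are where the weight sharing enters: every time step contributes a term to the same parameter gradient, which is the bookkeeping that Werbos's ordered derivatives formalize. One can sanity-check any single entry of, say, Wh1 against a finite-difference estimate of the loss.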

Asked by E Fresher on Cross Validated, November 21, 2021

0 Answers


