RNN

class RNN(*args, **kwargs)

Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence.

For each element in the input sequence, each layer computes the following function:

\[h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})\]

where \(h_t\) is the hidden state at time \(t\), \(x_t\) is the input at time \(t\), and \(h_{(t-1)}\) is the hidden state at time \(t-1\), or the initial hidden state at time \(0\). If nonlinearity is 'relu', then \(\text{ReLU}\) is used instead of \(\tanh\).
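
For intuition, the recurrence can be written out directly. Below is a minimal NumPy sketch of a single \(\tanh\) step for one layer; the weight and variable names mirror the formula above and are illustrative only, not part of this module's API.

import numpy as np

# One Elman RNN step following the formula above (tanh nonlinearity).
# W_ih, W_hh, b_ih, b_hh stand in for the layer's learned weights.
input_size, hidden_size = 10, 20
rng = np.random.default_rng(0)
W_ih = rng.standard_normal((hidden_size, input_size)).astype(np.float32)
W_hh = rng.standard_normal((hidden_size, hidden_size)).astype(np.float32)
b_ih = np.zeros(hidden_size, dtype=np.float32)
b_hh = np.zeros(hidden_size, dtype=np.float32)

x_t = rng.standard_normal(input_size).astype(np.float32)  # input at time t
h_prev = np.zeros(hidden_size, dtype=np.float32)          # h_{t-1}; zeros at t = 0

h_t = np.tanh(W_ih @ x_t + b_ih + W_hh @ h_prev + b_hh)
print(h_t.shape)  # (20,)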

Parameters
  • input_size (int) – The number of expected features in the input x.

  • hidden_size (int) – The number of features in the hidden state h.

  • num_layers (int) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1.

  • nonlinearity (str) – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'.

  • bias (bool) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

  • batch_first (bool) – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states; see the Inputs/Outputs sections below for details, and the layout sketch after this list. Default: False.

  • dropout (float) – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0.

  • bidirectional (bool) – If True, becomes a bidirectional RNN. Default: False.
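
To make the two batch_first layouts concrete, here is a hedged sketch that feeds the same data under both conventions. It assumes the constructor and call signature shown in the Examples section below, leaves num_layers at its default of 1, and omits h_0 so it defaults to zeros.

import numpy as np
import megengine as mge
import megengine.module as M

x = np.random.randn(3, 5, 10).astype(np.float32)  # (seq=3, batch=5, feature=10)

m_seq = M.RNN(10, 20)                      # expects (seq, batch, feature)
m_batch = M.RNN(10, 20, batch_first=True)  # expects (batch, seq, feature)

out_seq, _ = m_seq(mge.tensor(x))
out_batch, _ = m_batch(mge.tensor(x.transpose(1, 0, 2)))  # same data, reordered
print(out_seq.numpy().shape)    # (3, 5, 20)
print(out_batch.numpy().shape)  # (5, 3, 20)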

Shape:
  • Inputs: input, h_0
    input: \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True, containing the features of the input sequence.

    h_0: \((D * \text{num\_layers}, N, H_{out})\), containing the initial hidden state for each element in the batch. Defaults to zeros if not provided.

    where:

    \[\begin{aligned}
    N ={} & \text{batch size} \\
    L ={} & \text{sequence length} \\
    D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\
    H_{in} ={} & \text{input\_size} \\
    H_{out} ={} & \text{hidden\_size}
    \end{aligned}\]
  • Outputs: output, h_n
    output: \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True, containing the output features (h_t) from the last layer of the RNN, for each t.

    h_n: \((D * \text{num\_layers}, N, H_{out})\), containing the final hidden state for each element in the batch.

Examples

import numpy as np
import megengine as mge
import megengine.module as M

# Two-layer bidirectional Elman RNN with ReLU nonlinearity:
# input_size=10, hidden_size=20, num_layers=2.
m = M.RNN(10, 20, 2, batch_first=False, nonlinearity="relu", bias=True, bidirectional=True)
inp = mge.tensor(np.random.randn(6, 30, 10), dtype=np.float32)  # (L=6, N=30, H_in=10)
hx = mge.tensor(np.random.randn(4, 30, 20), dtype=np.float32)   # (D * num_layers=4, N=30, H_out=20)
out, hn = m(inp, hx)
print(out.numpy().shape)  # (L, N, D * H_out)

Output:

(6, 30, 40)
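
The returned h_n has the same shape as h_0, \((D * \text{num\_layers}, N, H_{out})\). Continuing with the variables from the example above:

print(hn.numpy().shape)  # (4, 30, 20): (D * num_layers, N, H_out)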