megengine.functional.nn.logsoftmax#

logsoftmax(inp, axis)[source]#

Applies the \(\log(\text{softmax}(x))\) function to an n-dimensional input tensor. The \(\text{logsoftmax}(x)\) function can be written as:

\[\text{logsoftmax}(x_{i}) = \log(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} )\]

For numerical stability, the implementation uses the following transformation:

\[\text{logsoftmax}(x) = \log (\frac{\exp (x)}{\sum_{i}(\exp (x_{i}))}) = x - \log (\sum_{i}(\exp (x_{i}))) = x - \text{logsumexp}(x)\]
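The transformation above can be sketched in plain NumPy (this is an illustrative reimplementation, not MegEngine's actual kernel); the maximum is subtracted before exponentiating so that \(\exp\) never overflows, following the standard max-shift form of \(\text{logsumexp}\):

```python
import numpy as np

def logsoftmax_np(x, axis=-1):
    # Stable log-softmax: x - logsumexp(x).
    # Subtracting the per-axis maximum first keeps exp() from
    # overflowing; the shift cancels out in the final result.
    m = np.max(x, axis=axis, keepdims=True)
    lse = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return x - lse

x = np.arange(-5, 5, dtype=np.float32).reshape(2, 5)
print(logsoftmax_np(x, axis=1).round(4))
```

This reproduces the values shown in the example below, since each row of `x` differs only by a constant shift.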

Examples

>>> import numpy as np
>>> from megengine import Tensor
>>> import megengine.functional as F
>>> x = Tensor(np.arange(-5, 5, dtype=np.float32)).reshape(2, 5)
>>> y = F.logsoftmax(x, axis=1)
>>> y.numpy().round(decimals=4)
array([[-4.4519, -3.4519, -2.4519, -1.4519, -0.4519],
       [-4.4519, -3.4519, -2.4519, -1.4519, -0.4519]], dtype=float32)
Return type:

Tensor