megengine.optimizer.AdamW
- class AdamW(params, lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01)[source]
Implements AdamW algorithm proposed in “Decoupled Weight Decay Regularization”.
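For reference, the decoupled update rule proposed in that paper can be sketched as follows (standard bias-corrected form; g_t denotes the gradient, λ the weight_decay coefficient, and ε the eps term):

```latex
% AdamW update with step size lr, moment coefficients (\beta_1, \beta_2),
% numerical-stability term \epsilon and decoupled weight decay \lambda:
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}
\theta_t = \theta_{t-1} - lr \left( \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda\, \theta_{t-1} \right)
```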
- Parameters
  - params (Union[Iterable[Parameter], dict]) – an iterable of Parameter objects to optimize, or a dict defining groups of parameters.
  - lr (float) – learning rate.
  - betas – coefficients used for computing running averages of the gradient and its square. Default: (0.9, 0.999)
  - eps (float) – term added to the denominator to improve numerical stability. Default: 1e-8
  - weight_decay (float) – weight decay coefficient. Default: 1e-2
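A minimal usage sketch, assuming a toy M.Linear module, a hand-written squared-error loss, and MegEngine's GradManager for autodiff (the module and data are placeholders for illustration, not part of this API):

```python
import megengine as mge
import megengine.functional as F
import megengine.module as M
import megengine.optimizer as optim
from megengine.autodiff import GradManager

net = M.Linear(4, 2)                         # toy module used only for illustration
opt = optim.AdamW(
    net.parameters(), lr=1e-3,
    betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-2,
)
gm = GradManager().attach(net.parameters())  # record gradients for attached params

x = mge.tensor([[0.1, 0.2, 0.3, 0.4]])
y = mge.tensor([[1.0, 0.0]])

with gm:                                     # forward + backward inside the recorder
    pred = net(x)
    loss = F.mean((pred - y) ** 2)           # hand-written MSE to keep the example small
    gm.backward(loss)

opt.step()        # apply one AdamW update
opt.clear_grad()  # reset gradients before the next iteration
```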
Methods
- add_param_group(param_group): add a group of parameters to the Optimizer's param_groups.
- backward(loss): deprecated since version 1.0.
- clear_grad(): set the gradient attribute of all parameters to None.
- load_state_dict(state): load the optimizer state.
- state_dict([keep_var]): export the optimizer state.
- step(): perform a single optimization step.
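A hedged sketch of checkpointing and parameter groups with the methods above, assuming MegEngine's generic mge.save / mge.load serialization helpers and a placeholder module; the file name and the extra "head" module are illustrative only:

```python
import megengine as mge
import megengine.module as M
import megengine.optimizer as optim

net = M.Linear(4, 2)                                 # placeholder module
opt = optim.AdamW(net.parameters(), lr=1e-3)

# state_dict() exports the optimizer state as a plain dict; persist it.
mge.save(opt.state_dict(), "adamw_state.pkl")

# Later, with an identically constructed optimizer, restore the state.
opt.load_state_dict(mge.load("adamw_state.pkl"))

# add_param_group adds extra parameters with their own hyper-parameters,
# e.g. a hypothetical extra head trained with a smaller learning rate.
head = M.Linear(2, 1)
opt.add_param_group({"params": list(head.parameters()), "lr": 1e-4})
```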