MultiStepLR

class MultiStepLR(optimizer, milestones, gamma=0.1, current_epoch=-1)
Decays the learning rate of each parameter group by gamma once the epoch number reaches one of the milestones. A usage sketch follows the parameter list.

Parameters:
  • optimizer (Optimizer) – wrapped optimizer.

  • milestones (Iterable[int]) – list of epoch indices; must be increasing.

  • gamma (float) – multiplicative factor of learning rate decay. Default: 0.1

  • current_epoch (int) – the index of the current epoch. Default: -1
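A minimal usage sketch, assuming the scheduler follows the common torch-style loop in which step() (inherited from the base scheduler class) is called once per epoch after the optimizer update; the model, hyperparameters, and loop body are illustrative placeholders, and the import path of MultiStepLR is omitted because this page does not show its module.

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# With milestones=[30, 80] and gamma=0.1:
#   lr = 0.05   for epochs  0..29
#   lr = 0.005  for epochs 30..79
#   lr = 0.0005 for epochs 80..99
scheduler = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

for epoch in range(100):
    optimizer.step()   # stands in for a full training epoch
    scheduler.step()   # advance the schedule once per epoch
```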

get_lr()

Compute the current learning rate for the scheduler.
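Since each milestone at or before the current epoch multiplies the rate by gamma once, the schedule has a simple closed form: the base rate times gamma raised to the number of milestones already reached. A sketch of that arithmetic, where decayed_lr is a hypothetical standalone helper and not part of this class's API:

```python
from bisect import bisect_right

def decayed_lr(base_lr, milestones, gamma, epoch):
    # bisect_right counts how many milestones are <= epoch;
    # each one contributes one factor of gamma.
    return base_lr * gamma ** bisect_right(milestones, epoch)

print(decayed_lr(0.05, [30, 80], 0.1, 29))  # 0.05  (no milestone reached)
print(decayed_lr(0.05, [30, 80], 0.1, 30))  # ~0.005 (first milestone reached)
```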

load_state_dict(state_dict)

Loads the scheduler's state.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

state_dict()

Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer.
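A hedged checkpointing sketch, continuing the optimizer and scheduler objects from the usage example above; the file name is an illustrative placeholder, and the torch.save/torch.load round trip is one common way to persist the dict, not something this scheduler prescribes.

```python
import torch

torch.save(
    {
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),  # every self.__dict__ entry except the optimizer
    },
    "checkpoint.pt",
)

# Later, after recreating the optimizer and scheduler with the same
# constructor arguments, restore both states:
ckpt = torch.load("checkpoint.pt")
optimizer.load_state_dict(ckpt["optimizer"])
scheduler.load_state_dict(ckpt["scheduler"])
```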