
How Do I Use The Accuracy Of The Previous Iteration Of Training A Neural Network As The Baseline For Early Stopping In The Following Iteration?

As the title states, I want the baseline in my early-stopping callback to equal the best accuracy from the previous training iteration. Is there a way I can update that early-stopping parameter as the iterations progress?

Solution 1:

In Keras's EarlyStopping class (in callbacks.py), change the on_epoch_end method to the following, which promotes each new best monitored value to the baseline:

def on_epoch_end(self, epoch, logs=None):
    current = self.get_monitor_value(logs)
    if current is None:
        return
    if self.monitor_op(current - self.min_delta, self.best):
        self.best = current
        # Promote the new best value to the baseline so the next
        # iteration must beat it. The None guard covers the case
        # where no baseline was passed to the constructor; the
        # comparison assumes a higher-is-better metric such as
        # accuracy.
        if self.baseline is None or current > self.baseline:
            self.baseline = current
        self.wait = 0
        if self.restore_best_weights:
            self.best_weights = self.model.get_weights()
    else:
        self.wait += 1
        if self.wait >= self.patience:
            self.stopped_epoch = epoch
            self.model.stop_training = True
            if self.restore_best_weights:
                if self.verbose > 0:
                    print('Restoring model weights from the end of the best epoch.')
                self.model.set_weights(self.best_weights)

But make sure that you re-create your EarlyStopping callback for every iteration (if you are training in a for loop), so that its internal counters (wait, best, stopped_epoch) are reset between iterations.
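To see how carrying the baseline forward changes the stopping behavior, here is a minimal, pure-Python sketch of the patience/baseline logic described above. The function name `run_iteration` and its signature are illustrative stand-ins, not Keras API; it seeds `best` with the previous iteration's result, so epochs that fail to beat that value count against patience from the start.

```python
# Hypothetical sketch of the early-stopping loop: `baseline` seeds `best`,
# mimicking the modified on_epoch_end above (higher-is-better metric).
def run_iteration(accuracies, baseline, patience=2, min_delta=0.0):
    """Simulate early stopping over one iteration's per-epoch accuracies.

    Returns (best_accuracy, stopped_epoch); stopped_epoch is None if the
    iteration ran to completion without triggering early stopping.
    """
    best = baseline if baseline is not None else float("-inf")
    wait = 0
    stopped_epoch = None
    for epoch, current in enumerate(accuracies):
        if current - min_delta > best:  # improvement: reset patience
            best = current
            wait = 0
        else:                            # no improvement: burn patience
            wait += 1
            if wait >= patience:
                stopped_epoch = epoch
                break
    return best, stopped_epoch

# First iteration: no baseline, so every epoch improves and it runs out.
best, stopped = run_iteration([0.60, 0.65, 0.70], baseline=None)

# Second iteration: seeded with 0.70 from the first run; the early epochs
# fall short of that baseline, so patience is exhausted after two epochs.
best2, stopped2 = run_iteration([0.62, 0.64, 0.66, 0.71], baseline=best)
```

This makes the trade-off explicit: a carried-over baseline stops a new iteration quickly if it never surpasses the previous one, which is exactly the behavior the question asks for.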
