When to Stop Training

AugeLab Studio automatically calculates an appropriate training time for your models, and training stops automatically when that time ends.

Training an object detection model requires careful consideration of when to halt the training process. Stopping training at the appropriate moment can significantly impact the model's performance, generalization capabilities, and efficiency.

This guide aims to provide researchers, developers, and practitioners with valuable insights to determine the optimal stopping point during model training.

If this is your first training, you may follow the Starter Checklist.

Monitor Training Progress

During training, continuously monitor the progress of the model. Keep track of critical performance metrics, such as:

  • Loss

  • mAP

  • IOU

  • Iterations

Loss and mAP scores are shown on a graph in the Training Window.

Loss

Loss is shown with blue points on the training graph and represents how far the model's predictions are from the training data provided.

Training loss can be monitored to keep track of model accuracy and overtraining. Different ranges of the training loss value indicate different stages (summarized in the small helper after the list below):

Loss by itself cannot tell you how accurate the model is. Refer to mAP for a metric that is more reflective of accuracy.

**Loss ≤ 2.0**

A generic model that gives a first idea of how consistent and accurate the database is. For non-specialized databases, this should yield a reasonably accurate model, good enough to assess the training procedure.

**Loss ≤ 1.0**

Loss values below 1.0 are typically achievable with a specialized database and are a good indicator of one.

**Loss ≤ 0.5**

A fine-tuned model that is ready to test and deploy. After reaching this value, further improving the loss can take far longer than the earlier stages of training.
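
These ranges can be summarized in a small helper for quick reference. This is only an illustrative sketch of the rules of thumb above, not part of AugeLab Studio:

```python
def interpret_loss(loss: float) -> str:
    # Illustrative mapping of training-loss ranges to the stages
    # described above; the thresholds are rules of thumb, not hard limits.
    if loss <= 0.5:
        return "fine-tuned: ready to test and deploy"
    if loss <= 1.0:
        return "specialized: good fit on a specialized database"
    if loss <= 2.0:
        return "generic: good enough to assess the training procedure"
    return "early: keep training"

print(interpret_loss(0.8))  # specialized: good fit on a specialized database
```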

mAP

The mAP (mean average precision) metric combines both precision and recall to provide a comprehensive evaluation of the model's accuracy in detecting objects in an image.

It is calculated by comparing the overlap between predicted boxes and ground-truth annotations.

During training, a model reaching mAP values around 90% is generally considered good. Values above 90% are usually a sign of over-fitting.
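
To make the calculation concrete, below is a minimal sketch of average precision for a single class, assuming detections have already been matched against ground truth (e.g. at an IOU threshold of 0.5) and sorted by descending confidence; mAP is the mean of this value over all classes. This is a generic illustration, not AugeLab Studio's internal implementation:

```python
def average_precision(matches, num_gt):
    # matches: booleans for detections sorted by descending confidence,
    # True if the detection overlaps a ground-truth box above the IOU threshold.
    # num_gt: total number of ground-truth boxes for this class.
    tp, fp = 0, 0
    ap, prev_recall = 0.0, 0.0
    for is_match in matches:
        tp += int(is_match)
        fp += int(not is_match)
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # Rectangular area under the precision-recall curve
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

print(average_precision([True, True, False, True], num_gt=5))  # ≈ 0.55
```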

IOU

IOU (Intersection over Union) measures the overlap between predicted and true bounding boxes for individual object detections. mAP evaluates the overall performance of the object detection model across all object categories, considering both precision and recall.

The higher the IOU value, the better the prediction.
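
As an illustration, a minimal IOU computation for two axis-aligned boxes in (x1, y1, x2, y2) format could look like this generic sketch (not Studio's internal code):

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    # Intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = area A + area B - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ≈ 0.143
```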

You can track each IOU value in the Training Window logs.

Fine Tuning

Training Time

Define a maximum training time budget based on available computational resources and project constraints (a minimal budget check is sketched after the list below). If the model does not achieve satisfactory performance within the allocated time, consider stopping training and exploring other approaches, such as:

  • Manually analyze annotation accuracy

  • Check class variety

  • Choose different model sizes and batch sizes

  • Increase database size
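
A simple wall-clock budget check around a training loop might look like the sketch below; the budget value and the loop are placeholders, since Studio manages its own training loop:

```python
import time

BUDGET_SECONDS = 24 * 3600  # e.g. a one-day budget; set from your constraints
start = time.monotonic()

for iteration in range(1, 100_001):
    # ... one training iteration would run here (placeholder) ...
    if time.monotonic() - start > BUDGET_SECONDS:
        print(f"Time budget reached at iteration {iteration}; stopping.")
        break
```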

Over-Fitting

Avoid over-fitting by monitoring the training and validation losses. Over-fitting typically begins once the training loss has lost most of its downward momentum and only creeps down slowly.

However, for specialized databases or use cases, over-fitting is not always a bad thing. Provided you have enough data, an over-trained model can serve you well.
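
A common way to detect this plateau is a patience-based check on the validation loss. The sketch below is a generic illustration with made-up parameter values, not a Studio setting:

```python
def should_stop(val_losses, patience=5, min_delta=0.01):
    # Stop when the validation loss has not improved by at least
    # min_delta over the last `patience` checks (illustrative values).
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best > best_before - min_delta

# The loss has been flat for the last five checks, so stopping is reasonable:
print(should_stop([2.1, 1.4, 0.9, 0.70, 0.70, 0.70, 0.70, 0.70, 0.70]))  # True
```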

Balancing Time and Performance

Balance the training time with the desired model performance. In some cases, additional training iterations may improve performance, but the returns may diminish over time. Weigh the benefits against the computational cost and the urgency of the project.
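
One way to quantify diminishing returns is to track the average mAP gain between recent evaluations and stop once it falls below a chosen threshold. The function and values below are illustrative assumptions, not Studio behavior:

```python
def gains_diminished(map_history, window=3, min_gain=0.002):
    # True when the average mAP gain over the last `window`
    # evaluations drops below min_gain (illustrative parameters).
    if len(map_history) < window + 1:
        return False
    recent = map_history[-(window + 1):]
    gains = [b - a for a, b in zip(recent, recent[1:])]
    return sum(gains) / window < min_gain

print(gains_diminished([0.60, 0.72, 0.80, 0.84, 0.841, 0.841, 0.841]))  # True
```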

Usually, depending on the number of classes and the database size, the training process can take anywhere from a day to a week.

Starter Checklist

Database:

Model:

Training (stop if):
