Auto Annotation
Magic annotation helps you generate bounding boxes automatically using AI models, so you can label datasets much faster.
It's designed for a practical workflow:
Configure the model and prompts once
Auto-label one image to verify quality
Batch auto-label the whole dataset
Review and correct anything that's wrong
First Look

(Open Magic annotation with the ✨ button (or press T).)
You can open Magic annotation in two ways:
Press the T key to auto-annotate the current image (uses your saved settings)
Click the ✨ button in the Classes panel to open the Magic annotation settings dialog
Before You Start (Recommended)
Magic annotation needs a dataset and a class list.
Load your dataset folder in the Image Annotation Window
Load (or create) your classes.names file
If you haven't used the Annotation Window before, follow the main labeling guide first.
Open the Magic annotation Dialog
Open AI Tools → Image Annotation
Load your dataset and class file
In the Classes panel, click the ✨ button
This opens the Magic annotation Settings dialog.

Step 1 – Choose a Model
In Model Selection, choose one of the supported detectors.
Text-prompt models (recommended for custom classes)
These models detect objects using your class descriptions (text prompts):
Grounding DINO Tiny: good default, faster
Grounding DINO Base: more accurate, heavier (GPU strongly recommended)
OWLv2 Base Ensemble: good general model
OWLv2 Large Ensemble: more accurate, heavier (GPU strongly recommended)
Use these when your classes are not standard COCO classes, or when you want to describe the object in natural language.
YOLO models (fast, but class matching matters)
YOLO models appear when the YOLO/OpenCV DNN feature is available:
YOLOv4 (COCO)
YOLOv4 Tiny (COCO)
These do not use text descriptions. They use COCO class names.
For YOLO (COCO) models, the COCO class names must match your dataset class names exactly. Example: if your class is person, it should be exactly person (not human).
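Before running a COCO-based YOLO model, it can be worth checking your class file against the COCO vocabulary. The helper below is a hypothetical illustration (not part of the tool), and the COCO_NAMES set is abbreviated to a few entries for brevity:

```python
# Abbreviated subset of the 80 COCO class names (illustrative only).
COCO_NAMES = {"person", "bicycle", "car", "motorbike", "bus",
              "train", "truck", "boat", "dog", "cat"}

def check_classes(dataset_classes):
    """Return the names from classes.names that a COCO model cannot emit."""
    return [name for name in dataset_classes if name not in COCO_NAMES]

unmatched = check_classes(["person", "human", "car"])
print(unmatched)  # ['human'] -- rename it to 'person' in classes.names
```

Any name this check flags would never be produced by the detector, so those classes would silently receive no labels.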
Custom YOLOv4 model
If you have your own YOLOv4 model, select Custom YOLOv4 Model and provide:
Weights file (.weights)
Config file (.cfg)
Names file (.names)
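The .cfg and .weights pair is what OpenCV's DNN module loads (via cv2.dnn.readNetFromDarknet), while the .names file simply maps class IDs to labels: plain text, one class name per line. A minimal sketch of parsing it (load_names is a hypothetical helper, not the tool's code):

```python
def load_names(text):
    """Parse Darknet .names content: one class name per line, blanks ignored."""
    return [line.strip() for line in text.splitlines() if line.strip()]

print(load_names("bolt\nnut\n\nwasher\n"))  # ['bolt', 'nut', 'washer']
```

The line order matters: class ID 0 in the weights corresponds to the first name in the file.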
Step 2 – Set Thresholds
Confidence Threshold
Controls how confident a detection must be to become a label.
Higher values → fewer boxes, but usually cleaner
Lower values → more boxes, but more false positives
A good starting point is 30%.
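Conceptually, the threshold is just a cutoff applied to each detection's confidence score. A minimal sketch (the dictionary shape is illustrative, not the tool's internal format):

```python
def filter_detections(detections, threshold=0.30):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

raw = [
    {"label": "bolt", "confidence": 0.91},
    {"label": "bolt", "confidence": 0.42},
    {"label": "bolt", "confidence": 0.12},  # likely a false positive
]
print(len(filter_detections(raw, 0.30)))  # 2 boxes survive at 30%
```

Raising the threshold to 50% in this example would also drop the 0.42 box, which is exactly the trade-off described above.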
Grounding DINO: Box Threshold and Text Threshold
These appear only for Grounding DINO models:
Box Threshold: how strict the box confidence should be
Text Threshold: how strict the text-to-object matching should be
Guidance:
If you get too many wrong boxes → increase Text Threshold first
If boxes are sloppy or too wide → increase Box Threshold
If you get no detections → lower thresholds gradually
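The two thresholds act independently: a candidate box must clear both to become a label. The sketch below illustrates that gating logic only (it is not Grounding DINO's actual post-processing, and the default values shown are assumptions):

```python
def passes_thresholds(cand, box_threshold=0.35, text_threshold=0.25):
    """A candidate must clear BOTH thresholds to be kept (illustrative)."""
    return (cand["box_score"] >= box_threshold
            and cand["text_score"] >= text_threshold)

candidates = [
    {"box_score": 0.62, "text_score": 0.48},  # good box, good text match -> kept
    {"box_score": 0.70, "text_score": 0.10},  # good box, weak text match -> dropped
    {"box_score": 0.20, "text_score": 0.55},  # weak box -> dropped
]
kept = [c for c in candidates if passes_thresholds(c)]
print(len(kept))  # 1
```

This is why raising Text Threshold removes boxes that look confident but match the prompt poorly, while raising Box Threshold removes low-quality boxes regardless of how well they match the text.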
Step 3 – Write Better Class Descriptions (Text-Prompt Models)
If you selected a text-prompt model, you will see a Class Descriptions table.

(Class Descriptions table (used as prompts).)
Why this matters: the description is the prompt the model uses to find your objects.
Good descriptions are:
Visual and specific (color, shape, material)
Grounded in your real images (background, lighting, orientation)
Examples:
Instead of bolt → silver bolt on a black conveyor belt
Instead of cup → white paper cup, top view
Instead of label → rectangular sticker label on a cardboard box
You can also use:
Use Class Names to reset prompts to the class names
Clear All to start from scratch
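For intuition: text-prompt detectors such as Grounding DINO typically take a single prompt string in which class phrases are lowercased and separated by " . ". The sketch below assumes the dialog combines your table rows in roughly this way (build_prompt is a hypothetical helper, not the tool's code):

```python
def build_prompt(descriptions):
    """Join class descriptions into one Grounding-DINO-style prompt string."""
    return " . ".join(d.strip().lower() for d in descriptions) + " ."

print(build_prompt(["Silver bolt on a black conveyor belt",
                    "White paper cup, top view"]))
# silver bolt on a black conveyor belt . white paper cup, top view .
```

Because every description ends up in one shared prompt, keeping each phrase distinct and visual helps the model separate classes from each other.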
Step 4 – Choose Annotation Mode (Important)
Magic annotation supports three modes for handling images that already have annotations:
Override: replaces existing annotation files
Add: appends new detections to existing annotations
Skip: does not process images that already have annotations
Recommended usage:
Choose Override when you're starting fresh or re-labeling everything
Choose Add when you want to supplement your existing labels
Choose Skip when you're polishing a partially labeled dataset and don't want to risk overwriting work
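The three modes can be pictured as different file-write policies on per-image label files. This sketch assumes YOLO-style .txt labels (one file per image); the function name and behavior are illustrative, not the tool's implementation:

```python
import os
import tempfile

def write_labels(label_path, new_lines, mode):
    """Apply one of the three modes: 'override', 'add', or 'skip'."""
    exists = os.path.exists(label_path)
    if mode == "skip" and exists:
        return False  # existing annotations are left untouched
    open_mode = "a" if (mode == "add" and exists) else "w"
    with open(label_path, open_mode) as f:
        f.write("\n".join(new_lines) + "\n")
    return True

path = os.path.join(tempfile.mkdtemp(), "img_0001.txt")
write_labels(path, ["0 0.50 0.50 0.20 0.20"], "override")  # fresh labels
write_labels(path, ["1 0.10 0.10 0.05 0.05"], "add")       # appended
write_labels(path, ["2 0.90 0.90 0.05 0.05"], "skip")      # ignored: file exists
print(len(open(path).read().splitlines()))  # 2
```

Note that Override destroys previous work unconditionally, which is why Skip is the safe choice on partially labeled datasets.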
Run Magic annotation
Annotate Current (one image)
Use Annotate Current first.
This is the safest way to validate that:
your prompts are good
thresholds are reasonable
boxes look correct
If the results are not good, adjust prompts/thresholds and try again.
Batch Annotate All (whole dataset)
When the current image looks good, click Batch Annotate All.
A progress dialog shows:
current status (model loading / processing)
progress bar
estimated time remaining (ETA)
You can cancel at any time.
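The ETA shown is the kind of estimate you get from the average time per processed image so far; the sketch below is illustrative, not the dialog's actual code:

```python
def eta_seconds(done, total, elapsed):
    """Estimate remaining seconds from average time per processed image."""
    if done == 0:
        return None  # no estimate until at least one image has finished
    return (total - done) * (elapsed / done)

print(eta_seconds(25, 100, 50.0))  # 150.0 seconds remaining
```

Expect the estimate to be unstable early in the run and to settle as more images are processed.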

(Batch Magic annotation progress with ETA.)
Review and Correct Results
Magic annotation is meant to accelerate labeling, not replace review.
After auto-labeling:
Quickly skim through images to catch obvious failures
Fix incorrect boxes (wrong class, wrong size)
Remove false positives
Add missed objects manually where needed
If you see repeated mistakes, stop, adjust the prompts or thresholds, and rerun.
💡 Still stuck?
Try the AI Assistant in AugeLab Studio. Describe your camera view and what the current boxes look like, and it can suggest threshold and prompt adjustments for your setup.
Notes
Magic annotation settings are saved and reused when you press T.
If your plan/licensing has limits for offline Magic annotation, the tool will prevent batch processing after the limit is reached.