Auto Annotation

Magic annotation helps you generate bounding boxes automatically using AI models, so you can label datasets much faster.

It’s designed for a practical workflow:

  1. Configure the model and prompts once

  2. Auto-label one image to verify quality

  3. Batch auto-label the whole dataset

  4. Review and correct anything that’s wrong


Requirements

Magic annotation runs fastest on an NVIDIA GPU. Before you start:

  • Use a computer with an NVIDIA GPU

  • Download the required AI modules from the Module Downloader window


First Look

Magic annotation entry point in the Classes panel

(Open Magic annotation with the ✨ button (or press T).)

You can open Magic annotation in two ways:

  • Press the T key to auto-annotate the current image (uses your saved settings)

  • Click the ✨ button in the Classes panel to open the Magic annotation settings dialog

The first time you use Magic annotation, AugeLab Studio will ask you to configure your settings. These settings are remembered, and you can change them anytime by clicking the ✨ button.


Prepare Your Dataset

Magic annotation needs a dataset and a class list.

  1. Load your dataset folder in the Image Annotation Window

  2. Load (or create) your classes.names file

If you haven’t used the Annotation Window before, follow the main labeling guide first.


Open the Magic annotation Dialog

  1. Open AI Tools β†’ Image Annotation

  2. Load your dataset and class file

  3. In the Classes panel, click the ✨ button

This opens the Magic annotation Settings dialog.

Magic annotation settings dialog

Step 1 β€” Choose a Model

In Model Selection, choose one of the supported detectors.

These models detect objects using your class descriptions (text prompts):

  • Grounding DINO Tiny: good default, faster

  • Grounding DINO Base: more accurate, heavier (GPU strongly recommended)

  • OWLv2 Base Ensemble: good general model

  • OWLv2 Large Ensemble: more accurate, heavier (GPU strongly recommended)

Use these when your classes are not standard COCO classes, or when you want to describe the object in natural language.

YOLO models (fast, but class matching matters)

YOLO models appear when the YOLO/OpenCV DNN feature is available:

  • YOLOv4 (COCO)

  • YOLOv4 Tiny (COCO)

These do not use text descriptions. They use COCO class names.

Custom YOLOv4 model

If you have your own YOLOv4 model, select Custom YOLOv4 Model and provide:

  • Weights file (.weights)

  • Config file (.cfg)

  • Names file (.names)
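For reference, a .names file is plain text with one class name per line, in the same order the model was trained with. The class names below are a hypothetical example:

```
bolt
nut
washer
```

The line order matters: the model reports class indices, and each index maps to the line at that position in the file.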


Step 2 β€” Set Thresholds

Confidence Threshold

Controls how confident a detection must be to become a label.

  • Higher values β†’ fewer boxes, but usually cleaner

  • Lower values β†’ more boxes, but more false positives

A good starting point is 30%.
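Conceptually, the confidence threshold is a simple filter over the model's raw detections. The sketch below is illustrative only; the detection fields shown are assumptions, not AugeLab's internal format:

```python
# Hedged sketch: how a confidence threshold filters raw detections.
# The "label"/"score" fields are illustrative, not AugeLab's internals.

def filter_by_confidence(detections, threshold=0.30):
    """Keep only detections whose score meets the threshold."""
    return [d for d in detections if d["score"] >= threshold]

raw = [
    {"label": "bolt", "score": 0.92},
    {"label": "bolt", "score": 0.31},
    {"label": "bolt", "score": 0.12},  # likely a false positive
]

# With a 30% threshold, the 0.12 detection is dropped.
kept = filter_by_confidence(raw, threshold=0.30)
```

Raising the threshold shrinks `kept` toward only the high-certainty boxes; lowering it lets borderline detections through.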

Grounding DINO: Box Threshold and Text Threshold

These appear only for Grounding DINO models:

  • Box Threshold: how strict the box confidence should be

  • Text Threshold: how strict the text-to-object matching should be

Guidance:

  • If you get too many wrong boxes β†’ increase Text Threshold first

  • If boxes are sloppy or too wide β†’ increase Box Threshold

  • If you get no detections β†’ lower thresholds gradually


Step 3 β€” Write Better Class Descriptions (Text-Prompt Models)

If you selected a text-prompt model, you will see a Class Descriptions table.

Class descriptions table

(Class Descriptions table (used as prompts).)

Why this matters: the description is the prompt the model uses to find your objects.

Good descriptions are:

  • Visual and specific (color, shape, material)

  • Grounded in your real images (background, lighting, orientation)

Examples:

  • Instead of bolt β†’ silver bolt on a black conveyor belt

  • Instead of cup β†’ white paper cup, top view

  • Instead of label β†’ rectangular sticker label on a cardboard box

You can also use:

  • Use Class Names to reset prompts to the class names

  • Clear All to start from scratch

Tip: If two classes look similar, make the description emphasize what differentiates them. Example: scratch on metal surface vs oil stain on metal surface.
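Under the hood, per-class descriptions are typically combined into a single text prompt. The sketch below assumes a Grounding DINO-style convention (lowercase phrases separated by " . "); AugeLab may format prompts differently:

```python
# Hedged sketch: combining per-class descriptions into one text prompt.
# The " . "-separated, lowercase format is an assumption based on how
# Grounding DINO-style models are commonly prompted.

descriptions = {
    "bolt": "silver bolt on a black conveyor belt",
    "cup": "white paper cup, top view",
}

def build_prompt(descs):
    return " . ".join(d.lower().strip() for d in descs.values()) + " ."

print(build_prompt(descriptions))
# silver bolt on a black conveyor belt . white paper cup, top view .
```

Each phrase competes for matches independently, which is why distinctive, visual wording per class pays off.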


Step 4 β€” Choose Annotation Mode (Important)

Magic annotation supports three modes for handling images that already have annotations:

  • Override: replaces existing annotation files

  • Add: appends new detections to existing annotations

  • Skip: does not process images that already have annotations

Recommended usage:

  • Choose Override when you’re starting fresh or re-labeling everything

  • Choose Add when you want to supplement your existing labels

  • Choose Skip when you’re polishing a partially labeled dataset and don’t want to risk overwriting work
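The three modes can be sketched as operations on per-image label files. The file naming and one-detection-per-line format below are assumptions for illustration, not AugeLab's internals:

```python
# Hedged sketch of the Override / Add / Skip modes, modeled as
# operations on a per-image label file (one detection per line).
from pathlib import Path

def write_annotations(label_path: Path, new_lines: list, mode: str) -> bool:
    """Return True if the file was written, False if the image was skipped."""
    if mode == "skip" and label_path.exists():
        return False                      # leave existing work untouched
    if mode == "add" and label_path.exists():
        existing = label_path.read_text().splitlines()
        new_lines = existing + new_lines  # append to what's already there
    # "override" (or a file that doesn't exist yet) is a plain write
    label_path.write_text("\n".join(new_lines) + "\n")
    return True
```

Note that Skip only protects images that already have a label file; unlabeled images are still processed in every mode.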


Run Magic annotation

Annotate Current (one image)

Use Annotate Current first.

This is the safest way to validate that:

  • your prompts are good

  • thresholds are reasonable

  • boxes look correct

If the results are not good, adjust prompts/thresholds and try again.

Batch Annotate All (whole dataset)

When the current image looks good, click Batch Annotate All.

A progress dialog shows:

  • current status (model loading / processing)

  • progress bar

  • estimated time remaining (ETA)

You can cancel at any time.

Batch Magic annotation progress dialog

(Batch Magic annotation progress with ETA.)


Review and Correct Results

Magic annotation is meant to accelerate labeling, not replace review.

After auto-labeling:

  1. Quickly skim through images to catch obvious failures

  2. Fix incorrect boxes (wrong class, wrong size)

  3. Remove false positives

  4. Add missed objects manually where needed

If you see repeated mistakes, stop and adjust prompts/thresholds, then rerun.


Troubleshooting

If the AI models aren't behaving as expected, use these quick fixes to tune the results.

🚫 "It annotates nothing" (Zero Detections)

When the AI is being too "shy" to label anything, it's usually a threshold or description issue.

  • Lower Confidence: Drop the Confidence Threshold slightly.

  • Text Sensitivity: For Grounding DINO, lower the Text Threshold to be less strict about word matching.

  • Be Specific: Instead of "part," try "silver metal bolt" or "red plastic cap." Descriptions should be visual.

  • Check Lists: Verify that your class file is actually loaded and isn't empty.

πŸ“¦ "Too many wrong boxes" (Ghost Detections)

If your screen is cluttered with false positives, you need to tighten the "strictness" of the model.

  • Raise Confidence: Increase the Confidence Threshold to filter out low-certainty guesses.

  • Text Strictness: Increase the Text Threshold to force a closer match between the image and your prompt.

  • Remove Ambiguity: Avoid broad prompts like "object" or "item." If the AI is labeling shadows as "parts," specifically describe the part's unique colors or textures.

❓ "YOLO model doesn't detect my class"

Standard YOLO models are pre-trained on specific datasets.

  • COCO Standard: Basic YOLO models only recognize the 80 COCO categories. Your labels must match exactly (e.g., person, cell phone, chair, bottle).

  • Custom Needs: If you need to detect something specific (like a "scratched circuit board"), switch to a Text-Prompt model (like Grounding DINO) or train a Custom YOLO model.
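A quick sanity check is to compare your class names against the COCO vocabulary before choosing a YOLO model. The sketch below lists only a handful of the 80 COCO names for brevity:

```python
# Hedged sketch: which of your classes can a COCO-trained YOLO model
# even see? Only a few of the 80 COCO names are listed here.
COCO_SAMPLE = {"person", "bicycle", "car", "bottle", "chair", "cell phone"}

def unsupported(classes, coco=COCO_SAMPLE):
    """Return the classes a COCO-trained detector will never output."""
    return [c for c in classes if c.lower() not in coco]

print(unsupported(["person", "scratched circuit board", "bottle"]))
# ['scratched circuit board']
```

Anything returned by a check like this needs either a text-prompt model or a custom-trained YOLO model.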

🐌 "Processing is slow or laggy"

Vision models are computationally expensive.

  • First-Run Delay: It is normal for the first run to be slow while models download and initialize in memory.

  • Model Size: Grounding DINO Base and OWLv2 Large are high-accuracy but "heavy." Try a "Tiny" or "Small" variant for faster speeds.

  • Hardware: Ensure AugeLab is utilizing your GPU. Running large AI models on a CPU will result in significant latency.


πŸ’‘ Still stuck?

Try the AI Assistant in AugeLab Studio. Describe your camera view and what the boxes currently look like; it can often suggest concrete threshold values to try.



Notes

  • Magic annotation settings are saved and reused when you press T.

  • If your plan/licensing has limits for offline Magic annotation, the tool will prevent batch processing after the limit is reached.
