The Feature Detector node in AugeLab Studio detects objects in an image based on their features. It extracts keypoints and descriptors from a training image and matches them against the features of an input image, identifying regions that resemble the trained object.
- Node Title: Feature Detector
- Node ID: OP_NODE_FEATURE_DETECTOR
The Feature Detector node has the following input sockets:
- 1. Train Image: The image containing the object to be trained for detection.
- 2. Input Image From Camera: The input image from the camera or another source.
The Feature Detector node has the following output sockets:
- 1. Detected Image: The input image with the detected object highlighted.
- 2. Detect Status: A boolean value indicating whether the object was detected (True) or not (False).
- 3. Center: The coordinates of the center of the detected object.
The Feature Detector node provides the following parameters for configuration:
- Homography Type: The homography estimation method used during feature matching. It can be one of the following options:
  - RANSAC: Random Sample Consensus (RANSAC) algorithm.
  - LMEDS: Least-Median of Squares (LMEDS) algorithm.
  - RHO: RHO algorithm.
- Compute Type: The feature computation method to be used. It can be one of the following options:
  - STABLE: Stable feature computation.
  - PERFORMANCE: Performance-oriented feature computation.
- Number of Features: The desired number of features to be detected and matched.
- Distance Threshold: The maximum distance threshold for matching features.
- K Nearest: The number of nearest neighbors to consider for feature matching.
- Pyramid Decrease Ratio: The pyramid decrease ratio for multi-scale feature detection.
- Number of Pyramid Levels: The number of pyramid levels for multi-scale feature detection.
- Point Compare Type: The point comparison type for feature matching.
- 1. Drag and drop the Feature Detector node from the node library onto the canvas in AugeLab Studio.
- 2. Connect the training image to the Train Image input socket of the Feature Detector node.
- 3. Connect the input image from the camera or another source to the Input Image From Camera input socket.
- 4. Configure the desired parameters of the Feature Detector node, such as the homography type, compute type, number of features, and distance threshold.
- 5. Run the pipeline.
- 6. The Feature Detector node will perform feature extraction and matching between the training image and the input image.
- 7. If a match is found, the detected object will be highlighted in the Detected Image output.
- 8. The Detect Status output will indicate whether the object was detected (True) or not (False).
- 9. The Center output will provide the coordinates of the center of the detected object.
- 10. Retrieve the outputs for further analysis, processing, or visualization.
The Feature Detector node uses feature detection and matching techniques to identify objects based on their features.
- It compares the features of the training image with those of the input image to find matches.
- The homography type determines the method used for homography estimation during feature matching.
- The compute type specifies the feature computation method, which affects the stability or performance of the feature detection.
- Adjust the number of features, distance threshold, and other parameters to optimize the detection performance.
- If a match is found, the detected object will be highlighted in the output image.
- The Detect Status output indicates whether the object was detected or not.
- The Center output provides the coordinates of the center of the detected object.
- If no match is found, the Detect Status will be False, and the Center coordinates will be (0, 0).
- Perform proper error handling if the detection process fails or produces unexpected results.
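Since a failed detection yields a False status and a (0, 0) sentinel center, downstream logic should always check Detect Status before using Center. A minimal sketch, where `handle_detection` is a hypothetical helper rather than part of the node's API:

```python
def handle_detection(detected_img, status, center):
    """Guard downstream logic on the Detect Status output.

    The (0, 0) center returned on a failed detection is a sentinel,
    not a real object position, so it must never be consumed blindly.
    (Hypothetical helper; illustrative only.)
    """
    if not status:
        return None   # no object: skip tracking/overlay for this frame
    x, y = center
    return (x, y)     # safe to use: the object was actually detected
```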
That concludes the documentation for the Feature Detector node in AugeLab Studio. This node enables you to detect objects based on their features by comparing a training image with an input image. Use it for applications such as object recognition, tracking, or augmented reality.