- YOLO Data augmentation config file
- Different Data Augmentations in YOLO
When it comes to object detection algorithms, YOLO stands out as the most popular choice among machine learning practitioners. Its exceptional speed and accuracy make it a preferred option for a wide range of applications. Over time, various iterations of YOLO, such as V5, V7, V8, and YOLO-NAS, have emerged, each setting new records for state-of-the-art object detection.
However, fine-tuning these YOLO models to achieve optimal performance requires more than just implementing the algorithm itself. One crucial aspect is data augmentation. Each YOLO version comes with its own default data augmentation configuration, but simply relying on these settings may not yield the desired results for your specific use case.
In this article, we will explore the available data augmentation techniques and examine each of them in detail. By gaining insight into these augmentation options, you will be better equipped to customize and fine-tune your YOLO models according to your specific needs.
YOLO Data augmentation config file
You can find the augmentation configuration specific to your desired YOLO version by referring to these files. Once you have identified the corresponding configuration file, you can easily modify the augmentation settings according to your requirements.
The data configuration files are organized in the following order: V4, V5, V7, and V8.
darknet/yolov4.cfg at master · AlexeyAB/darknet
yolov5/hyp.scratch-low.yaml at master · ultralytics/yolov5
yolov7/hyp.scratch.p5.yaml at main · WongKinYiu/yolov7
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings…
Different Data Augmentations in YOLO
Image HSV (Hue, Saturation, and Value) Augmentation
This augmentation technique helps the YOLO model by introducing variations in color, lighting conditions, and contrast.
By altering the Hue component, we can simulate different lighting conditions, such as daylight or artificial lighting, enabling the model to learn to detect objects under various illumination settings.
Adjusting the Saturation component allows us to control the vividness or dullness of colors, providing the model with exposure to different color distributions.
Lastly, modifying the Value component affects the brightness of the image, allowing the model to adapt to different brightness levels.
By incorporating HSV augmentation into the YOLO training pipeline, the model becomes more robust and capable of handling real-world scenarios with varying lighting conditions, color schemes, and contrasts.
In the later versions of YOLO, excluding V4, we can configure HSV augmentation by specifying fractional values for hsv_h, hsv_s, and hsv_v. These values are defined within the range of 0 to 1, allowing for precise control over the desired changes in hue, saturation, and value components of the image.
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
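To make the mechanics concrete, here is a minimal NumPy sketch of how such fractional gains can be applied. This is an illustration, not the actual Ultralytics implementation: the real pipeline converts BGR to HSV with OpenCV first, while this sketch assumes the image is already in HSV space, and the function name `augment_hsv` is my own.

```python
import numpy as np

def augment_hsv(hsv_img, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, rng=None):
    """Apply random hue/saturation/value gains to an image that is
    already in HSV space (hue in [0, 180), sat/val in [0, 255])."""
    rng = rng or np.random.default_rng()
    # Draw one random gain per channel, centred on 1.0 and scaled by
    # the config fractions (hsv_h, hsv_s, hsv_v).
    gains = rng.uniform(-1, 1, 3) * np.array([hsv_h, hsv_s, hsv_v]) + 1
    h = (hsv_img[..., 0] * gains[0]) % 180           # hue wraps around
    s = np.clip(hsv_img[..., 1] * gains[1], 0, 255)  # saturation is clipped
    v = np.clip(hsv_img[..., 2] * gains[2], 0, 255)  # value is clipped
    return np.stack([h, s, v], axis=-1).astype(np.uint8)
```

Note that setting all three fractions to 0 leaves the image unchanged, which is a useful sanity check when tuning these values.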
Image Angle/Degree rotation Augmentation
Image angle/degree augmentation involves rotating the input images by a certain angle or degree. By introducing rotational variations during training, the model becomes more robust and capable of handling objects that may appear at different orientations or angles in real-world images.
Image rotation augmentation can be configured by specifying a degrees value; during training, the image is rotated by a random angle within plus or minus that many degrees.
degrees: 0.0 # image rotation (+/- deg)
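One subtlety of rotation is that the bounding boxes must be remapped along with the pixels. The sketch below (my own illustrative helper, not YOLO's code) rotates the four corners of each xyxy box about the image center and takes the enclosing axis-aligned box, which is how rotated labels are typically recovered.

```python
import numpy as np

def rotate_boxes(boxes, img_w, img_h, degrees):
    """Rotate xyxy boxes about the image centre and return the new
    axis-aligned boxes that enclose the rotated corners."""
    theta = np.deg2rad(degrees)
    cos, sin = np.cos(theta), np.sin(theta)
    R = np.array([[cos, -sin], [sin, cos]])
    cx, cy = img_w / 2, img_h / 2
    out = []
    for x1, y1, x2, y2 in boxes:
        corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], float)
        rotated = (corners - [cx, cy]) @ R.T + [cx, cy]  # rotate about centre
        out.append([rotated[:, 0].min(), rotated[:, 1].min(),
                    rotated[:, 0].max(), rotated[:, 1].max()])
    return np.array(out)
```

A centered square rotated by 90 degrees maps onto itself, which makes the geometry easy to verify by hand.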
Image Translation Augmentation
Translation augmentation involves shifting or moving the objects within the image. This technique simulates the scenario where objects are slightly displaced or moved within the frame.
This augmentation improves the model’s accuracy in detecting objects even when they are not centered or located at expected positions.
Image translation augmentation can be configured by specifying a translate value within the range of 0 to 1; the image is shifted by a random fraction of its size within plus or minus that value.
translate: 0.2 # image translation (+/- fraction)
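As a sketch of what the translate fraction does to the labels, the helper below (an illustration with my own naming, not YOLO's implementation) shifts xyxy boxes by a random fraction of the image size and clips them back to the frame.

```python
import numpy as np

def translate_boxes(boxes, img_w, img_h, translate=0.2, rng=None):
    """Shift xyxy boxes by a random fraction of the image size in x and y,
    clipping the result to the image bounds."""
    rng = rng or np.random.default_rng()
    tx = rng.uniform(-translate, translate) * img_w
    ty = rng.uniform(-translate, translate) * img_h
    shifted = np.asarray(boxes, float) + [tx, ty, tx, ty]
    shifted[:, [0, 2]] = shifted[:, [0, 2]].clip(0, img_w)  # clip x coords
    shifted[:, [1, 3]] = shifted[:, [1, 3]].clip(0, img_h)  # clip y coords
    return shifted
```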
Image Perspective Transform Augmentation
Perspective transform augmentation involves distorting the image to simulate perspective changes. This is particularly useful for scenarios where objects may appear at different distances or viewpoints.
By applying perspective transformations during training, the YOLO model learns to handle variations in object sizes, shapes, and distortions caused by perspective changes.
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
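The tiny 0 to 0.001 range makes sense once you see where the value lands: it fills the bottom row of a 3x3 homography, where even small terms produce visible distortion. The sketch below illustrates this under that assumption; the function names are mine, not YOLO's.

```python
import numpy as np

def random_perspective_matrix(perspective=0.0005, rng=None):
    """Build a 3x3 homography whose bottom row carries small random
    perspective terms; larger values give stronger distortion."""
    rng = rng or np.random.default_rng()
    P = np.eye(3)
    P[2, 0] = rng.uniform(-perspective, perspective)  # x-axis perspective
    P[2, 1] = rng.uniform(-perspective, perspective)  # y-axis perspective
    return P

def warp_point(P, x, y):
    """Apply the homography to one point (homogeneous divide included)."""
    u, v, w = P @ np.array([x, y, 1.0])
    return u / w, v / w
```

With perspective set to 0.0, the matrix is the identity and every point maps to itself, matching the default in the config above.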
Image Scale Augmentation
Image scale augmentation involves resizing the input images to different scales or dimensions. By training the YOLO model on images with varied scales, it becomes more adaptable to objects appearing at different sizes in real-world scenarios.
This augmentation helps the model learn to detect objects with varying scales, enabling it to handle both small and large objects effectively.
Image scale augmentation can be configured by specifying the scale value, which acts as a gain fraction: during training, a random zoom factor is drawn from the range 1 ± scale. Factors below 1 zoom the image out, making objects appear smaller and providing wider context, while factors above 1 bring objects closer, resulting in a zoomed-in view.
scale: 0.9 # image scale (+/- gain)
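A small sketch of the gain interpretation described above, assuming the zoom is applied about the image center (an illustration with my own naming, not the actual YOLO code):

```python
import numpy as np

def random_scale_matrix(scale=0.9, img_w=640, img_h=640, rng=None):
    """Pick a zoom factor s in [1 - scale, 1 + scale] and build a 3x3
    matrix that scales the image about its centre by s."""
    rng = rng or np.random.default_rng()
    s = rng.uniform(1 - scale, 1 + scale)
    cx, cy = img_w / 2, img_h / 2
    # Translate the centre to the origin, scale, translate back,
    # collapsed into a single affine matrix.
    M = np.array([[s, 0, cx * (1 - s)],
                  [0, s, cy * (1 - s)],
                  [0, 0, 1.0]])
    return M, s
```

Because the zoom is centered, the image center is a fixed point of the transform regardless of the sampled factor.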
Image Shear Augmentation
Shear augmentation introduces geometric deformations by tilting or skewing the images along the x or y-axis. This technique mimics real-world situations where objects may appear tilted or skewed due to perspective or camera angles.
By incorporating shear transformations during training, the YOLO model becomes more robust in detecting objects with distorted shapes, such as objects viewed from different angles or with perspective effects.
shear: 0.0 # image shear (+/- deg)
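The shear value is an angle in degrees, and the corresponding matrix uses its tangent in the off-diagonal terms. The sketch below illustrates this (my own helper, assuming independent x- and y-axis shear angles):

```python
import numpy as np

def shear_matrix(shear_x_deg, shear_y_deg):
    """Build a 3x3 shear matrix; the off-diagonal terms are the tangents
    of the shear angles along each axis."""
    S = np.eye(3)
    S[0, 1] = np.tan(np.deg2rad(shear_x_deg))  # skew x as a function of y
    S[1, 0] = np.tan(np.deg2rad(shear_y_deg))  # skew y as a function of x
    return S
```

A 45-degree x-shear moves a point sideways by exactly its own height, which is an easy way to picture the strength of the effect.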
Image Flip Up-Down (Vertically) and Flip Left-Right (Horizontally)
Flip up-down augmentation involves flipping the image vertically, resulting in a mirror image where the top becomes the bottom and vice versa. This augmentation helps the YOLO model learn to detect objects that may appear upside down or inverted in real-world scenarios.
Flip left-right augmentation, on the other hand, entails flipping the image horizontally, creating a mirror image where the left side becomes the right side and vice versa. This augmentation allows the YOLO model to learn and detect objects from different perspectives or viewpoints.
By training on vertically flipped or horizontally flipped images, the model becomes more robust and adaptable, enabling it to accurately detect objects regardless of their orientation.
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
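Flipping the pixels is trivial, but the boxes must be mirrored too: after a horizontal flip, the old right edge of a box becomes its new left edge. A minimal sketch of both flips with label remapping (illustrative helpers, not YOLO's code):

```python
import numpy as np

def flip_lr(img, boxes, img_w):
    """Mirror the image horizontally and remap xyxy boxes to match."""
    flipped = img[:, ::-1]
    boxes = np.asarray(boxes, float).copy()
    # The old right edge becomes the new left edge, and vice versa.
    boxes[:, [0, 2]] = img_w - boxes[:, [2, 0]]
    return flipped, boxes

def flip_ud(img, boxes, img_h):
    """Mirror the image vertically and remap xyxy boxes to match."""
    flipped = img[::-1]
    boxes = np.asarray(boxes, float).copy()
    boxes[:, [1, 3]] = img_h - boxes[:, [3, 1]]
    return flipped, boxes
```

In the config, these values are probabilities, so `fliplr: 0.5` means each training image is mirrored horizontally half the time.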
Image Mosaic Augmentation
Mosaic augmentation is a technique that combines several images to create a single training sample with a mosaic-like appearance. This helps the YOLO model learn to detect objects in complicated scenes where objects may overlap or the environment is crowded.
When the model is trained using mosaic augmented images, it becomes better at handling situations where objects are partially hidden or blend together. This augmentation technique improves the model’s ability to accurately detect objects even in challenging scenarios.
mosaic: 1.0 # image mosaic (probability)
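To show the basic idea, here is a heavily simplified mosaic: four equally sized images tiled into one canvas. The real YOLOv5 implementation additionally picks a random mosaic center and remaps all the labels onto the combined canvas; this sketch fixes the quadrants for clarity and omits labels.

```python
import numpy as np

def simple_mosaic(imgs):
    """Tile four equally sized (h, w, c) images into one (2h, 2w, c)
    mosaic (simplified: fixed quadrants, no label remapping)."""
    h, w, c = imgs[0].shape
    canvas = np.zeros((2 * h, 2 * w, c), dtype=imgs[0].dtype)
    canvas[:h, :w] = imgs[0]   # top-left
    canvas[:h, w:] = imgs[1]   # top-right
    canvas[h:, :w] = imgs[2]   # bottom-left
    canvas[h:, w:] = imgs[3]   # bottom-right
    return canvas
```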
Image Mixup Augmentation
MixUp augmentation combines pairs of images and their corresponding object labels to create new training examples. By blending images and their labels, the YOLO model learns to recognize common object features and better generalize across different classes.
This augmentation technique enhances the model’s ability to handle variations in object appearances and improves its overall performance in detecting objects with similar characteristics.
mixup: 0.0 # image mixup (probability)
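The blending itself is a weighted average of the two images, with the box labels simply concatenated. The sketch below illustrates this; to my understanding YOLOv5 draws the mixing ratio from Beta(32, 32), which concentrates it near 0.5, but treat that detail as an assumption.

```python
import numpy as np

def mixup(img1, labels1, img2, labels2, rng=None):
    """Blend two images with a Beta-distributed ratio and concatenate
    their labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(32.0, 32.0)  # mixing ratio, concentrated near 0.5
    blended = (img1.astype(np.float32) * lam
               + img2.astype(np.float32) * (1 - lam)).astype(img1.dtype)
    labels = np.concatenate([np.asarray(labels1), np.asarray(labels2)], 0)
    return blended, labels
```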
Image Cutmix Augmentation
CutMix augmentation involves randomly selecting a portion of one image and pasting it onto another image while maintaining the corresponding object labels.
This technique encourages the YOLO model to learn from mixed and overlapping object instances, promoting better object boundary localization and enhancing the model’s ability to handle partial object views.
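A minimal sketch of the paste step, under the assumption that the patch is taken and placed at the same location in both images; label handling (dropping or clipping boxes covered by the patch) is omitted, and the helper name is my own.

```python
import numpy as np

def cutmix(img1, img2, rng=None):
    """Paste a random rectangle of img2 onto img1 and return the patched
    image plus the patch geometry (label remapping is omitted here)."""
    rng = rng or np.random.default_rng()
    h, w = img1.shape[:2]
    ph, pw = int(rng.integers(1, h)), int(rng.integers(1, w))  # patch size
    y = int(rng.integers(0, h - ph + 1))                       # patch origin
    x = int(rng.integers(0, w - pw + 1))
    out = img1.copy()
    out[y:y + ph, x:x + pw] = img2[y:y + ph, x:x + pw]
    return out, (x, y, pw, ph)
```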
In conclusion, data augmentation serves as a valuable tool in simplifying and enhancing the training process of YOLO models, paving the way for more effective and accurate object detection in various practical applications.
By incorporating various augmentation methods, such as HSV augmentation, image angle/degree, translation, perspective transform, image scale, flip up-down, flip left-right, as well as more advanced techniques like Mosaic, CutMix, and MixUp, we can significantly improve the performance and robustness of YOLO models.
YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
Data Augmentation in YOLOv4
YOLOv4: Optimal Speed and Accuracy of Object Detection