Image augmentation method based on texture modelling and Artificial Intelligence

Abstract

Defect detection and quality control systems rely on extensive datasets of defective and defect-free samples for training. However, acquiring diverse, high-quality images of defective products is often impractical due to manufacturing constraints, cost, and the rarity of specific defect types. This work introduces a novel defect augmentation method that enhances dataset diversity by synthetically generating realistic defects and seamlessly integrating them into defect-free background images. Unlike traditional approaches that augment entire images, the proposed method isolates defects and blends them into new backgrounds while maintaining contextual consistency. Our approach leverages a generator-discriminator framework, iteratively refining defect synthesis to ensure that generated anomalies retain the characteristics of real-world defects. A key advantage of the proposed method is its inference-time efficiency: unlike existing techniques that require full-image blending during both training and inference, our model operates on a lightweight 2D binary map representing the spatial relationships of defects. This significantly reduces computational overhead, making our approach faster and more scalable. Experiments on industrial datasets demonstrate that the proposed method produces visually realistic defects, maintains high consistency with manufacturing defect distributions, and enhances defect detection model performance. By providing an efficient and scalable defect augmentation solution, this work contributes to the improved generalization and robustness of AI-based quality control systems.
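
To illustrate the core idea of mask-driven defect insertion described in the abstract, the sketch below shows how a synthesized defect patch could be composited onto a defect-free grayscale image using a 2D binary map. This is a minimal illustration assuming NumPy arrays with intensities in [0, 1]; the function name blend_defect, its parameters, and the toy data are hypothetical and not the authors' implementation, which additionally refines the synthesis with a generator-discriminator loop.

import numpy as np

def blend_defect(background: np.ndarray,
                 defect_patch: np.ndarray,
                 defect_mask: np.ndarray,
                 top_left: tuple) -> np.ndarray:
    """Paste a synthetic defect patch onto a defect-free grayscale image.

    background   : (H, W) defect-free grayscale image, values in [0, 1]
    defect_patch : (h, w) synthesized defect texture, values in [0, 1]
    defect_mask  : (h, w) binary map (1 = defect pixel, 0 = keep background)
    top_left     : (row, col) position of the patch in the background
    """
    out = background.copy()
    r, c = top_left
    h, w = defect_patch.shape
    region = out[r:r + h, c:c + w]
    # Compositing driven by the binary map: defect pixels replace the
    # background, all other pixels are left unchanged.
    out[r:r + h, c:c + w] = defect_mask * defect_patch + (1 - defect_mask) * region
    return out

if __name__ == "__main__":
    # Toy example: a near-uniform defect-free texture and a small dark scratch.
    rng = np.random.default_rng(0)
    clean = 0.8 + 0.02 * rng.standard_normal((128, 128))
    patch = np.full((8, 32), 0.2)                  # dark rectangular anomaly
    mask = np.zeros((8, 32))
    mask[2:6, :] = 1.0                             # binary map of the defect
    augmented = blend_defect(clean, patch, mask, top_left=(60, 48))
    print(augmented.shape, float(augmented.min()), float(augmented.max()))

Because only the lightweight binary map and the defect patch are manipulated at inference time, the compositing step itself stays cheap regardless of how the patch was generated.
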

Description

Subject(s)

Image Augmentation, Artificial Intelligence, Anomalies Insertion, Machine Learning, Grayscale Images, Industrial Inspection Systems

Citation