Computer Vision-Based Detection of Lettuce Health

Fatima Martinez
James Brian Romaine
Adrián Cardona
Pablo Millán

Abstract

Early identification of plant health issues is essential for efficient and sustainable agricultural management. However, traditional visual inspection methods are not feasible on a large scale. Therefore, this study proposes an automated system for detecting the condition of lettuce crops using the YOLOv5 object detector. The model was trained on a dataset comprising 144 images of lettuces, captured at the BIOAlverde farm (Seville, Spain), and classified into two categories: healthy and unhealthy. The images were manually annotated, and the model was trained using data augmentation techniques, hyperparameter tuning, and cross-validation to improve accuracy. The results demonstrate the system's high performance. For healthy lettuces, the model achieved a precision of 97.9%, a recall of 99.3%, and a mean Average Precision (mAP) of 99.5%; for unhealthy lettuces, precision reached 95.8%, recall 100%, and mAP 99.5%. In conclusion, the proposed system constitutes an effective tool for the automatic detection of lettuce condition, contributing to the advancement of precision agriculture through the optimisation of resource use and the timely identification of crop issues.
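As an illustration of the kind of pipeline described above, the sketch below shows how a two-class lettuce dataset could be fine-tuned and evaluated with the public ultralytics/yolov5 repository. It is a minimal example under stated assumptions, not the authors' exact configuration: the folder layout, the lettuce.yaml file name, the image size, batch size and epoch count are all assumptions, and the data augmentation and hyperparameter tuning mentioned in the abstract would be supplied through YOLOv5's --hyp file and a cross-validation loop, which are not reproduced here.

# Minimal sketch (assumed paths and hyperparameters, not the authors' exact setup):
# fine-tune YOLOv5 on a two-class lettuce dataset and evaluate per-class
# precision, recall and mAP@0.5, the metrics quoted in the abstract.
# Run from inside a cloned ultralytics/yolov5 repository.
import subprocess
from pathlib import Path

# YOLOv5 data-YAML: hypothetical folder layout with the two classes from the paper.
DATA_YAML = """\
train: datasets/lettuce/images/train
val: datasets/lettuce/images/val
nc: 2
names: ["healthy", "unhealthy"]
"""

def main() -> None:
    data_file = Path("lettuce.yaml")
    data_file.write_text(DATA_YAML)

    # Transfer learning from the pretrained yolov5s checkpoint. train.py applies
    # mosaic/HSV/flip augmentation by default; custom augmentation strengths and
    # tuned hyperparameters would be passed with an extra --hyp YAML.
    subprocess.run(
        ["python", "train.py",
         "--img", "640",              # input resolution (assumed)
         "--batch", "16",             # batch size (assumed)
         "--epochs", "100",           # epoch count (assumed; not stated in the abstract)
         "--data", str(data_file),
         "--weights", "yolov5s.pt"],
        check=True,
    )

    # val.py reports precision, recall and mAP per class for the best checkpoint.
    subprocess.run(
        ["python", "val.py",
         "--data", str(data_file),
         "--weights", "runs/train/exp/weights/best.pt",
         "--img", "640"],
        check=True,
    )

if __name__ == "__main__":
    main()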

Article Details

Section: Original Articles

References

Chebrolu, N., Lottes, P., Schaefer, A., Winterhalter, W., Burgard, W., & Stachniss, C. (2017). Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. The International Journal of Robotics Research, 36(10), 1045-1052.

Cienfuegos, S. M. U., & Rivas, J. J. B. (2025). Inteligencia Artificial: Machine Learning, para detección temprana de plagas y enfermedades de cultivos básicos, Nicaragua. INGENIO, 8(1), 24–34.

Fernández, L. J., Morales, D. R., & Sánchez, P. H. (2021). Implementación de redes neuronales profundas para la clasificación de enfermedades en hojas de lechuga. Journal of Agricultural Informatics, 12(3), 67-75.

la Fernández Marcos, S. (2023). Sistema de reconocimiento de patrones en imágenes satelitales para detección de objetos [Artículo científico]. https://gredos.usal.es/bitstream/10366/158423/1/Memoria.pdf

García, S. L., Pérez, N. M., & Rodríguez, V. H. (2022). Sistema basado en visión artificial para la detección automática de plagas en cultivos de lechuga. Tecnología Agrícola Aplicada, 14(1), 33-41.

Gómez, J. A., Pérez, L. M., & Rodríguez, S. F. (2020). Detección de enfermedades en lechuga mediante análisis de imágenes y aprendizaje profundo. Revista de Tecnología Agrícola, 15(2), 45-53.

Haque, M. E., Rahman, A., Junaeid, I., Hoque, S. U., & Paul, M. (2022). Rice leaf disease classification and detection using YOLOv5. arXiv preprint arXiv:2209.01579.

Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., ... & Murphy, K. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7310–7311.

Jiang, P., Ergu, D., Liu, F., Cai, Y., & Ma, B. (2022). A review of YOLO algorithm developments. Procedia Computer Science, 199, 1066–1073.

Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single Shot MultiBox Detector. European Conference on Computer Vision, 21–37.

López, M. A., García, J. C., & Torres, E. F. (2018). Evaluación de la salud de la lechuga mediante análisis hiperespectral y aprendizaje automático. Ciencia y Tecnología Agrícola, 10(2), 89-97.

Martínez, R. P., López, A. G., & Hernández, M. T. (2019). Aplicación de técnicas de visión por computadora para el monitoreo de la salud de cultivos de lechuga. Agricultura Digital, 8(1), 22-30.

Martinez, F., Romaine, J. B., Johnson, P., Cardona, A., & Millan, P. (2025). Novel Fusion Technique for High-Performance Automated Crop Edge Detection in Smart Agriculture. IEEE Access.

Martinez, F., Romaine, J. B., Manzano, J. M., Ierardi, C., & Millan, P. (2024). Deployment and verification of custom autonomous low-budget IoT devices for image feature extraction in wheat. IEEE Access.

Muñiz, L. (2021). Viabilidad y rendimiento de YOLOv5 en Raspberry Pi 4 modelo B. Retrieved from https://idus.us.es/bitstream/handle/11441/126961/TFG-3731-MU%c3%91IZ%20GARCIA.pdf

Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779-788.

Sarma, K. S. R. K., Sasikala, C., Surendra, K., Erukala, S., & Aruna, S. L. (2024, June). A comparative study on faster R-CNN, YOLO and SSD object detection algorithms on HIDS system. In AIP Conference Proceedings (Vol. 2971, No. 1). AIP Publishing.

Tan, L., Huangfu, T., Wu, L., & Chen, W. (2021). Comparison of RetinaNet, SSD, and YOLO v3 for real-time pill identification. BMC Medical Informatics and Decision Making, 21, 1-11.

Terven, J., Córdova-Esparza, D. M., & Romero-González, J. A. (2023). A comprehensive review of YOLO architectures in computer vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Machine Learning and Knowledge Extraction, 5(4), 1680-1716.