Deep Learning–Driven Computer Vision for Early and Automatic Detection of Cacao Pests and Diseases
DOI: https://doi.org/10.56294/sctconf20251762

Keywords: agricultural informatics, automated classification, image preprocessing, model evaluation, stratified validation

Abstract
Introduction: deep learning (DL)–based computer vision has emerged as a promising tool for precision agriculture, particularly for detecting crop diseases and pests automatically. This study evaluated the comparative performance of three state-of-the-art DL architectures for automatic identification of cacao pests and diseases using image analysis.
Method: a reproducible pipeline was implemented, encompassing image preprocessing, stratified cross-validation, and inferential statistics via repeated-measures ANOVA. The dataset comprised 4,390 images divided into three highly imbalanced classes: Healthy, Black Pod Rot, and Pod Borer. The three architectures (ResNet50, EfficientNet-B0, and ViT-B/16) were fully fine-tuned using the AdamW optimizer, early stopping, and a dynamic learning-rate scheduler.
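The stratification step can be illustrated with a minimal stdlib sketch: each class's sample indices are dealt round-robin across folds so every fold preserves the overall class proportions. The class counts below are invented for illustration; the study's actual dataset contains 4,390 images.

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=42):
    """Assign sample indices to k folds while preserving per-class
    proportions: a plain-Python sketch of stratified cross-validation."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)
        # Deal each class round-robin so every fold gets a near-equal share,
        # even for the rarest class.
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Toy imbalanced label set mirroring the paper's three classes
# (counts invented, not the study's actual distribution).
labels = ["Healthy"] * 300 + ["Black Pod Rot"] * 80 + ["Pod Borer"] * 20
folds = stratified_kfold(labels, k=5)
```

With this dealing scheme, even the minority class contributes the same number of samples to every fold, which is what keeps per-fold macro-F1 estimates comparable.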
Results: all models achieved mean macro-F1 scores above 0.96, with no statistically significant differences observed among them (F = 0.278, p = 0.7645). Training curves showed rapid convergence and inter-fold stability, indicating consistent generalization without overfitting.
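The reported test can be reproduced in outline: a one-way repeated-measures ANOVA treats the cross-validation folds as repeated "subjects" and the models as treatments, and compares the between-model mean square against the residual mean square. The stdlib sketch below is not the authors' code, and the per-fold macro-F1 values are invented for illustration.

```python
from statistics import mean

def repeated_measures_anova_F(scores):
    """One-way repeated-measures ANOVA F statistic.

    scores[m][f] is the metric (e.g. macro-F1) of model m on fold f.
    Returns F = MS_between_models / MS_residual, where the fold
    (subject) effect is removed from the residual."""
    k = len(scores)        # number of models (treatments)
    n = len(scores[0])     # number of folds (repeated subjects)
    grand = mean(v for row in scores for v in row)
    model_means = [mean(row) for row in scores]
    fold_means = [mean(scores[m][f] for m in range(k)) for f in range(n)]
    ss_models = n * sum((m_ - grand) ** 2 for m_ in model_means)
    ss_folds = k * sum((f_ - grand) ** 2 for f_ in fold_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_error = ss_total - ss_models - ss_folds
    df_models, df_error = k - 1, (k - 1) * (n - 1)
    return (ss_models / df_models) / (ss_error / df_error)

# Hypothetical per-fold macro-F1 scores for three models over five folds.
scores = [
    [0.962, 0.971, 0.968, 0.965, 0.970],  # e.g. ResNet50
    [0.965, 0.969, 0.966, 0.968, 0.971],  # e.g. EfficientNet-B0
    [0.960, 0.972, 0.967, 0.966, 0.969],  # e.g. ViT-B/16
]
F = repeated_measures_anova_F(scores)
```

A small F with a large p-value, as in the study (F = 0.278, p = 0.7645), indicates that between-model differences are negligible relative to fold-to-fold variation.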
Conclusions: the comparable performance across architectures suggests that detection effectiveness depends more on pipeline design and class-balance management than on the specific DL architecture used. The findings support the development of reproducible, efficient intelligent systems for cacao phytosanitary monitoring and the integration of artificial intelligence into precision agriculture.
License
Copyright (c) 2025 Jorge Raúl Navarro-Cabrera, José Guillermo Beraún-Barrantes, Ángel Cárdenas-García, Carlos Mauricio Lozano-Carranza (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License. Unless otherwise stated, associated published material is distributed under the same license.
