
智能化农业装备学报(中英文) ›› 2024, Vol. 5 ›› Issue (2): 42-50. DOI: 10.12398/j.issn.2096-7217.2024.02.005



  • About the first author: WEI Tangwei, female, born in 1997 in Lu'an, Anhui; master's student; research interest: artificial intelligence. E-mail: 1784725066@qq.com
  • Funding: Anhui Provincial Science and Technology Program (2022MKS12)

Study of tea bud recognition and detection based on an improved YOLOv7 model

WEI Tangwei(), ZHANG Jincheng, WANG Jing, ZHOU Qingyan()   

  1. College of Information and Artificial Intelligence, Anhui Agricultural University, Hefei 230036, China
  • Received: 2024-02-01 Revised: 2024-03-20 Online: 2024-05-15 Published: 2024-05-15
  • Contact: ZHOU Qingyan


Abstract:

To effectively identify tea buds in complex environments and improve the precision of intelligent harvesting while minimizing damage to tea plants, this study addresses the low detection accuracy and poor robustness of traditional object detection algorithms in tea gardens and proposes YOLOv7-tea, a tea bud recognition and detection model based on an improved YOLOv7, enabling rapid recognition and detection of tea buds.

First, tea bud images were collected and annotated, and data augmentation was applied to construct a tea bud dataset. Next, the CBAM attention module was introduced into three feature extraction layers of the YOLOv7 backbone to enhance the model's feature extraction capability; the SPD-Conv module replaced the SConv module in the neck network's downsampling stage to reduce the loss of small-object features; and the EIoU loss function was employed to optimize bounding-box regression, improving the accuracy of the predicted boxes. Finally, the YOLOv7-tea model was compared with other object detection models on the tea bud image dataset, and its recognition performance was tested on tea bud images captured at different distances and angles.

The experimental results show that the YOLOv7-tea model outperforms YOLOv7 in precision (P), recall (R), and mean average precision (mAP) by 2.87, 6.91, and 8.69 percentage points, respectively, while also detecting faster and producing higher confidence scores for tea buds against complex backgrounds.

The YOLOv7-tea model constructed in this study demonstrates good recognition performance on small tea buds, reduces missed and false detections, and exhibits good robustness and real-time performance, providing a reference for tea yield estimation and intelligent harvesting.
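Two of the modifications named in the abstract have simple numerical cores that can be illustrated directly: SPD-Conv is built on a space-to-depth rearrangement that downsamples without discarding pixels, and EIoU augments IoU with penalties on center distance and on width/height mismatch, each normalized by the smallest enclosing box. The following is a minimal NumPy sketch of those two ideas only; the function names and single-box interface are illustrative assumptions, not the authors' implementation, and the full SPD-Conv module would additionally apply a non-strided convolution to the rearranged tensor.

```python
import numpy as np

def space_to_depth(x, scale=2):
    """SPD rearrangement: fold each scale x scale spatial block into the
    channel dimension, so (H, W, C) becomes (H/scale, W/scale, scale*scale*C).
    Unlike a strided convolution, no pixel is skipped, which helps preserve
    small-object features such as tea buds."""
    h, w, c = x.shape
    x = x.reshape(h // scale, scale, w // scale, scale, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // scale, w // scale,
                                               scale * scale * c)

def eiou_loss(pred, target):
    """EIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2):
    1 - IoU + center-distance penalty + separate width and height penalties,
    each normalized by the smallest box enclosing both inputs."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection over union.
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / union

    # Smallest enclosing box: width, height (its squared diagonal is cw^2 + ch^2).
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)

    # Squared distance between the two box centers.
    rho2 = ((px1 + px2 - tx1 - tx2) / 2) ** 2 + ((py1 + py2 - ty1 - ty2) / 2) ** 2

    # Width and height mismatch penalties.
    dw2 = ((px2 - px1) - (tx2 - tx1)) ** 2
    dh2 = ((py2 - py1) - (ty2 - ty1)) ** 2

    eiou = iou - rho2 / (cw ** 2 + ch ** 2) - dw2 / cw ** 2 - dh2 / ch ** 2
    return 1.0 - eiou
```

A perfectly matched prediction yields zero loss, and, unlike plain IoU, a poorly overlapping prediction still receives a useful training signal through the distance and shape terms, which is what makes the regression of small, tightly clustered boxes better behaved.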

Key words: tea buds, object detection, CBAM attention mechanism, automated picking, YOLOv7
