
Journal of Intelligent Agricultural Mechanization (in Chinese and English) ›› 2024, Vol. 5 ›› Issue (4): 51-65. DOI: 10.12398/j.issn.2096-7217.2024.04.004



Visual navigation in orchard based on multiple images at different shooting angles

MA Zenghong1,2,3, YUE Jiawen1,2,3, YIN Cheng1,2,3, ZHAO Runmao1,2,3, CHANDA Mulongoti1,2,3, DU Xiaoqiang1,2,3

  1. School of Mechanical Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
    2. Key Laboratory of Transplanting Equipment and Technology of Zhejiang Province, Hangzhou 310018, China
    3. Key Laboratory of Agricultural Equipment for Hilly and Mountainous Areas in Southeastern China (Co-construction by Ministry and Province), Ministry of Agriculture and Rural Affairs, Hangzhou 310018, China
  • Received: 2023-10-25  Revised: 2023-12-29  Online: 2024-11-15  Published: 2024-11-15
  • Contact: DU Xiaoqiang, PhD, Professor, research interests: machine vision and agricultural robots, agricultural machinery and equipment design. E-mail: xqiangdu@zstu.edu.cn
  • About the author: MA Zenghong, PhD, Associate Professor, research interests: agricultural machinery navigation and autonomous driving. E-mail: mzh2018@zstu.edu.cn
  • Supported by:
    National Key Research and Development Program of China (2022YFD2202103); National Natural Science Foundation of China (31971798); Zhejiang Provincial Key Research & Development Plan (2023C02049); SNJF Science and Technology Collaborative Program of Zhejiang Province (2022SNJF017); Hangzhou Agricultural and Social Development Research Project (202203A03)


Abstract: Orchards usually have rough terrain, dense tree canopies and heavy weed growth. GNSS is difficult to use for autonomous navigation in orchards because of signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in orchards, this paper proposes a visual navigation method based on multiple images taken at different shooting angles. A dynamic image-capturing device is designed to mount the camera so that multiple images can be captured at different angles. First, the captured orchard images are classified into a sky detection stage and a soil detection stage. Each image is converted to HSV space and initially segmented into sky, canopy and soil regions by median filtering and morphological processing. Second, the sky and soil regions are extracted with the maximum connected region algorithm, and the region edges are detected and filtered with the Canny operator. Third, the navigation line in the current frame is extracted by fitting the region's coordinate points. A dynamic weighted filtering algorithm is then used to extract the navigation lines for the soil and sky detection stages, respectively, and the navigation line from the sky detection stage is mirrored onto the soil region. Finally, a Kalman filter is used to fuse the two lines and extract the final navigation path. Tests on 200 images show that the accuracy of navigation path fitting is 95.5% and that processing a single frame takes 60 ms, which meets the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that, at a driving speed of 0.6 m/s, the maximum tracking offsets in weed-free and weedy environments are 0.14 m and 0.24 m, and the root-mean-square errors are 30 mm and 55 mm, respectively.
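As a rough illustration of the per-image processing chain described in the abstract, the Python/OpenCV sketch below performs HSV conversion, median filtering and morphological closing, maximum connected region extraction, Canny edge detection, and line fitting. The HSV thresholds, kernel sizes and the use of cv2.fitLine are illustrative assumptions, not the parameters used by the authors; the cross-frame dynamic weighted filtering and the mirroring of the sky-stage line are omitted here.

```python
import cv2
import numpy as np

def extract_navigation_line(bgr_image, sky_stage=True):
    """Segment the sky (or soil) region, keep its largest connected component,
    detect its boundary with Canny, and fit a line to the boundary points."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Rough HSV split (assumed thresholds): bright, low-saturation pixels as sky;
    # dark pixels as soil.
    if sky_stage:
        mask = cv2.inRange(hsv, (0, 0, 160), (180, 60, 255))
    else:
        mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 120))

    # Median filtering and morphological closing to suppress noise and fill holes.
    mask = cv2.medianBlur(mask, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the maximum connected region.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    region = np.uint8(labels == largest) * 255

    # Canny edge detection on the region, then fit a line to the edge points.
    edges = cv2.Canny(region, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) < 2:
        return None
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return vx, vy, x0, y0   # direction vector and a point on the navigation line
```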
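The final fusion step can likewise be illustrated with a minimal Kalman filter over the per-frame line parameters. Representing the line by a lateral offset and a heading angle, using an identity motion model, and the particular noise covariances below are assumptions made for this sketch; the abstract does not specify how the filter is parameterized.

```python
import numpy as np

class NavLineKalman:
    """Fuses two per-frame line estimates (soil stage and mirrored sky stage)."""

    def __init__(self):
        self.x = np.zeros(2)          # state: [lateral offset, heading angle]
        self.P = np.eye(2) * 1e3      # large initial uncertainty
        self.Q = np.eye(2) * 1e-2     # process noise: line drifts slowly between frames
        self.R = np.eye(2) * 5.0      # measurement noise for each stage's estimate

    def _update(self, z):
        # Standard Kalman update with H = I (the state is measured directly).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P

    def fuse(self, z_soil, z_sky_mirrored):
        # Predict with an identity motion model, then fold in both measurements.
        self.P = self.P + self.Q
        self._update(z_soil)
        self._update(z_sky_mirrored)
        return self.x

# Hypothetical per-frame estimates: [offset in pixels, angle in radians].
kf = NavLineKalman()
path = kf.fuse([12.0, 0.05], [15.0, 0.03])
print(path)   # fused navigation-line parameters for this frame
```

Feeding the soil-stage line and the mirrored sky-stage line through the same update makes the fused state a noise-weighted compromise between the two estimates, which is the role the abstract assigns to the Kalman filter.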

Key words: orchard, visual navigation, multiple shooting angles, region segmentation, Kalman filter

CLC number: