Journal of Intelligent Agricultural Mechanization ›› 2024, Vol. 5 ›› Issue (4): 51-65.DOI: 10.12398/j.issn.2096-7217.2024.04.004

Visual navigation in orchard based on multiple images at different shooting angles

MA Zenghong1,2,3, YUE Jiawen1,2,3, YIN Cheng1,2,3, ZHAO Runmao1,2,3, CHANDA Mulongoti1,2,3, DU Xiaoqiang1,2,3

  1. School of Mechanical Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
    2. Key Laboratory of Transplanting Equipment and Technology of Zhejiang Province, Hangzhou 310018, China
    3. Key Laboratory of Agricultural Equipment for Hilly and Mountainous Areas in Southeastern China (Co-construction by Ministry and Province), Ministry of Agriculture and Rural Affairs, Hangzhou 310018, China
  • Received:2023-10-25 Revised:2023-12-29 Online:2024-11-15 Published:2024-11-15
  • Corresponding author: DU Xiaoqiang
  • About author:MA Zenghong, PhD, Associate Professor, research interests: agricultural machinery navigation and unmanned driving. E-mail: mzh2018@zstu.edu.cn
  • Supported by:
    National Key Research and Development Program of China(2022YFD2202103);National Natural Science Foundation of China(31971798);Zhejiang Provincial Key Research & Development Plan(2023C02049);SNJF Science and Technology Collaborative Program of Zhejiang Province(2022SNJF017);Hangzhou Agricultural and Social Development Research Project(202203A03)

Abstract: Orchards typically have rough terrain, dense tree canopies, and weeds, and GNSS-based autonomous navigation is unreliable there because of signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in orchards, this paper proposes a visual navigation method based on multiple images taken at different shooting angles. A dynamic image-capturing device is designed to mount the camera so that multiple images can be shot at different angles. First, the captured orchard images are classified into a sky detection stage and a soil detection stage. Each image is transformed to HSV space and initially segmented into sky, canopy, and soil regions by median filtering and morphological processing. Second, the sky and soil regions are extracted with the maximum connected region algorithm, and the region edges are detected and filtered with the Canny operator. Third, the navigation line in the current frame is extracted by fitting the region coordinate points. A dynamic weighted filtering algorithm then extracts the navigation line for the soil and sky detection stages separately, and the sky-stage navigation line is mirrored into the soil region. Finally, a Kalman filter fuses the two lines to extract the final navigation path. Tests on 200 images show that the visual navigation path is fitted with 95.5% accuracy and that a single frame is processed in 60 ms, meeting the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that, at a driving speed of 0.6 m/s, the maximum tracking offset of visual navigation in weed-free and weedy environments is 0.14 m and 0.24 m, respectively, with RMSEs of 30 mm and 55 mm.
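The final fusion step described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the state parameterization (lateral offset in metres, heading in radians), the identity observation model, and all numeric values below are illustrative assumptions. It shows only the general idea of a Kalman measurement update fusing two navigation-line estimates, one from the soil detection stage and one mirrored from the sky detection stage.

```python
import numpy as np

def kalman_fuse(x, P, z, R):
    """One Kalman measurement update with identity observation model H = I.

    x : state estimate, here assumed to be [lateral_offset_m, heading_rad]
    P : state covariance
    z : measurement of the same quantities (a fitted navigation line)
    R : measurement noise covariance
    """
    H = np.eye(len(x))
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # reduced uncertainty
    return x_new, P_new

# Illustrative numbers only: a vague prior, a soil-stage line estimate,
# and a noisier sky-stage line mirrored into the soil region.
x = np.array([0.0, 0.0])                   # prior [offset, heading]
P = np.eye(2) * 1.0                        # large prior uncertainty
z_soil = np.array([0.10, 0.02]); R_soil = np.eye(2) * 0.05
z_sky  = np.array([0.14, 0.03]); R_sky  = np.eye(2) * 0.10

x, P = kalman_fuse(x, P, z_soil, R_soil)   # fuse soil-stage line
x, P = kalman_fuse(x, P, z_sky, R_sky)     # fuse mirrored sky-stage line
print(x, np.diag(P))
```

Processing both measurements as sequential updates is equivalent to a joint update here because the observation model is the identity; the fused offset lands between the two line estimates, weighted toward the lower-noise soil measurement, and the covariance shrinks after each update.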

Key words: orchard, visual navigation, multiple shooting angles, region segmentation, Kalman filter
