Visual navigation in orchard based on multiple images at different shooting angles
MA Zenghong, YUE Jiawen, YIN Cheng, ZHAO Runmao, CHANDA Mulongoti, DU Xiaoqiang
Journal of Intelligent Agricultural Mechanization 2024, 5(4): 51-65. DOI: 10.12398/j.issn.2096-7217.2024.04.004
Abstract
Orchards typically have rough terrain, dense tree canopies, and weeds. GNSS is difficult to use for autonomous navigation in orchards because of signal occlusion, multipath effects, and radio-frequency interference. To achieve autonomous navigation in orchards, this paper proposes a visual navigation method based on multiple images taken at different shooting angles. A dynamic image-capturing device is designed to mount the camera so that multiple images can be shot at different angles. Firstly, the captured orchard images are classified into sky-detection and soil-detection stages. Each image is transformed to HSV space and initially segmented into sky, canopy, and soil regions by median filtering and morphological processing. Secondly, the sky and soil regions are extracted by the maximum connected region algorithm, and the region edges are detected and filtered by the Canny operator. Thirdly, the navigation line in the current frame is extracted by fitting the region coordinate points. Then a dynamic weighted filtering algorithm extracts the navigation lines for the soil- and sky-detection stages, respectively, and the sky-stage navigation line is mirrored into the soil region. Finally, a Kalman filter fuses the two lines to extract the final navigation path. Tests on 200 images show that the accuracy of visual navigation path fitting is 95.5% and that processing a single frame takes 60 ms, which meets the real-time and robustness requirements of navigation. Visual navigation experiments in a Camellia oleifera orchard show that, at a driving speed of 0.6 m/s, the maximum tracking offsets in weed-free and weedy environments are 0.14 m and 0.24 m, respectively, with RMSEs of 30 mm and 55 mm.
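The abstract names the per-frame steps but gives no implementation details. A minimal sketch of the segmentation-and-line-fitting stage, assuming Python with OpenCV, might look like the following; the HSV range, kernel size, and Canny thresholds are placeholder values to tune, not values taken from the paper.

```python
import cv2
import numpy as np

def extract_navigation_line(bgr_image):
    """Sketch of the per-frame pipeline described in the abstract:
    HSV segmentation, median filtering, morphological processing,
    maximum-connected-region extraction, Canny edge detection, and
    a straight-line fit to the region boundary points."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Hypothetical HSV thresholds for the sky region; the paper's
    # actual values are not given in the abstract.
    mask = cv2.inRange(hsv, (90, 0, 120), (140, 80, 255))

    # Median filtering and morphological closing suppress leaf gaps
    # and small holes in the initial segmentation.
    mask = cv2.medianBlur(mask, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the maximum connected region.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num < 2:
        return None  # no usable region found in this frame
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    region = np.uint8(labels == largest) * 255

    # Detect the region edges with the Canny operator.
    edges = cv2.Canny(region, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) < 2:
        return None

    # Fit the navigation line to the boundary coordinate points.
    vx, vy, x0, y0 = cv2.fitLine(
        np.column_stack([xs, ys]).astype(np.float32),
        cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return (vx, vy, x0, y0)  # direction vector and a point on the line
```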
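The final fusion step can be sketched the same way. In the version below, the soil-stage line and the mirrored sky-stage line are treated as two noisy measurements of a shared [lateral offset, heading] state and combined with a standard Kalman update; the state layout, the identity motion model, and all noise covariances are assumptions, as the abstract does not specify them.

```python
import numpy as np

class LineFuser:
    """Sketch of Kalman fusion of the two per-stage navigation lines.
    The state is the line's [lateral offset, heading]; each incoming
    line is applied as a measurement with H = I."""

    def __init__(self):
        self.x = np.zeros(2)           # [offset (px), heading (rad)]
        self.P = np.eye(2) * 1e3       # large initial uncertainty
        self.Q = np.eye(2) * 1e-2      # process noise (assumed)
        self.R_soil = np.eye(2) * 4.0  # soil-stage noise (assumed)
        self.R_sky = np.eye(2) * 9.0   # mirrored sky line assumed noisier

    def _update(self, z, R):
        # Standard Kalman measurement update with H = I.
        K = self.P @ np.linalg.inv(self.P + R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P

    def step(self, z_soil, z_sky_mirrored):
        # Predict with a constant-line model, then fuse both lines.
        self.P = self.P + self.Q
        self._update(np.asarray(z_soil, dtype=float), self.R_soil)
        self._update(np.asarray(z_sky_mirrored, dtype=float), self.R_sky)
        return self.x                  # fused navigation line parameters
```

Giving the mirrored sky line a larger measurement covariance is one simple way to reflect the extra error that mirroring introduces; the paper's dynamic weighting scheme may distribute that trust differently.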