Enhanced Convolutional-Neural-Network Architecture for Crop Classification




Publication Details

Output type: Journal article

UM6P affiliated Publication?: Yes

Author list: Moreno-Revelo, Monica Y.; Guachi-Guachi, Lorena; Gomez-Mendoza, Juan Bernardo; Revelo-Fuelagan, Javier; Peluffo-Ordonez, Diego H.

Publisher: MDPI

Publication year: 2021

Journal: APPLIED SCIENCES-BASEL (2076-3417)

Volume number: 11

Issue number: 9

ISSN: 2076-3417

eISSN: 2076-3417

Languages: English (EN-GB)




Abstract

Automatic crop identification and monitoring is a key element in enhancing food production processes and diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, built mainly on an enhanced 2D convolutional neural network (2D-CNN) with a smaller-scale architecture, together with a novel post-processing step. The proposed methodology comprises four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches and fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and properly trained to recognize 10 different types of crops. Finally, a post-processing step is performed to reduce the classification error caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the best results reported in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.
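The pipeline steps named in the abstract (band stacking, patch extraction, and the smoothing-style post-processing) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the array shapes, the non-overlapping patch scheme, and the 3x3 majority-vote filter are assumptions made here for clarity; the actual 2D-CNN classifier between steps 2 and 4 is omitted.

```python
import numpy as np

def stack_bands(images):
    """Step 1: stack co-registered single-band images along a channel axis."""
    return np.stack(images, axis=-1)  # shape (H, W, n_bands)

def extract_patches(stacked, patch_size):
    """Step 2: split the stacked image into non-overlapping square patches
    (an illustrative assumption; the paper's patching may differ)."""
    h, w, c = stacked.shape
    ph, pw = h // patch_size, w // patch_size
    return (stacked[:ph * patch_size, :pw * patch_size]
            .reshape(ph, patch_size, pw, patch_size, c)
            .swapaxes(1, 2)
            .reshape(ph * pw, patch_size, patch_size, c))

def majority_filter(label_map, window=3):
    """Step 4 (post-processing): replace each pixel's predicted label with the
    most frequent label in its neighborhood, reducing speckle errors that
    lower-spatial-resolution inputs can introduce."""
    pad = window // 2
    padded = np.pad(label_map, pad, mode="edge")
    out = np.empty_like(label_map)
    for i in range(label_map.shape[0]):
        for j in range(label_map.shape[1]):
            win = padded[i:i + window, j:j + window]
            out[i, j] = np.bincount(win.ravel()).argmax()
    return out
```

A short usage example: stacking two 8x8 bands yields an (8, 8, 2) array, which splits into four 4x4 patches; a lone mislabeled pixel in an otherwise uniform label map is removed by the majority filter.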






Last updated on 2021-10-09 at 23:19