Crop-type maps provide important information for many applications, especially those addressing environmental, economic and policy issues. In particular, our partners, the World Food Programme and UNICEF, need accurate crop-type information to help manage food supplies and protect children's welfare in developing countries. The availability of new sensor platforms and recent advances in transferring models between different image domains create a great opportunity to improve existing crop-type mapping approaches.
This study explores a new opportunity to improve satellite-based crop-type information using drones and knowledge transferred from computer vision. The work is conducted in cooperation with the World Food Programme, UNICEF and Stanford University. It is an unprecedented attempt, not only within the crop-type mapping domain but also within the broader field of Earth observation (EO) analysis, to fuse information extracted from images captured by satellites, drones and handheld cameras/smartphones.
Using three datasets of drone images acquired in Malawi and Mozambique, we found that transfer learning from computer vision can be successfully applied to crop-type classification. This means that low-level visual features learned from millions of images of everyday objects (cats, dogs, etc.) can help distinguish crop types in drone imagery of Sub-Saharan regions.
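To illustrate the idea, here is a minimal sketch of this kind of transfer learning, assuming PyTorch/torchvision; the class count, folder layout and hyperparameters are our own illustrative assumptions, not details of the study:

```python
# Sketch: fine-tune an ImageNet-pretrained ResNet-18 to classify crop
# types in drone image patches. Paths and settings are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CROP_TYPES = 5  # hypothetical, e.g. maize, cassava, groundnut, ...

# ImageNet weights supply the low-level visual features (edges,
# textures) learned from millions of everyday images.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet head with a crop-type head.
model.fc = nn.Linear(model.fc.in_features, NUM_CROP_TYPES)

# Freeze the pretrained backbone and train only the new head.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # ImageNet stats
])

# Hypothetical folder of drone patches, one sub-folder per crop type.
train_set = datasets.ImageFolder("drone_patches/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```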
Moreover, the classified drone data can be used to increase the number and diversity of training samples for Sentinel-2-based classification systems (e.g. Sen2Agri), which exploit multi-temporal information to improve the accuracy of crop-type mapping. A key aspect is to provide confidence-level information together with the classification results, so that only the most reliable samples are selected.
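A minimal sketch of such confidence-based sample selection, assuming the softmax outputs of a classifier like the one above; the 0.9 threshold is an illustrative assumption, not a value from the study:

```python
# Sketch: keep only drone-derived labels whose softmax confidence
# exceeds a threshold before using them as training samples for a
# Sentinel-2 classifier such as Sen2Agri.
import torch
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.9  # illustrative; tuned per use case

@torch.no_grad()
def select_reliable_samples(model, patches):
    """Return (indices, labels, confidences) of the patches classified
    with confidence above the threshold."""
    model.eval()
    probs = F.softmax(model(patches), dim=1)
    confidences, labels = probs.max(dim=1)
    keep = confidences >= CONFIDENCE_THRESHOLD
    return keep.nonzero(as_tuple=True)[0], labels[keep], confidences[keep]

# Usage (hypothetical): `patches` is a batch of preprocessed drone
# patches; the surviving labels become input samples for the
# satellite-based system.
# idx, lbl, conf = select_reliable_samples(model, patches)
```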