
Bringing Convolutional Neural Networks to the edge

Importance of the work

Earth Observation satellites are becoming smaller but also more numerous. Improving their on-board data processing capabilities can both enhance their autonomy and reduce the amount of data that must be downloaded. Convolutional Neural Networks (CNNs) have the potential to compress the data acquired onboard in a smart way, enabling us to download only the data that we are interested in, e.g. cloud-free images or areas with wildfires.

Smart downstream transmission: downloading only the data that is useful, e.g. cloud-free images or areas with wildfires. Credits: John Mrziglod and Jennifer Adams, ESA.

Overview

In this experiment, ESA and its partners deploy a CNN on a nanosatellite carrying a hyperspectral instrument (the FSSCat mission and the HyperScout-2 instrument). Since CNNs are computationally very expensive and therefore not suitable for small satellites with limited power and processing resources, a specialised Vision Processing Unit (VPU) is shipped on board. The work is conducted by a consortium which includes Cosine, University of Pisa, Sinergise and Ubotica. Cloud detection was chosen as the demo application to test the functioning of the CNN in space.
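
To make the idea concrete, the sketch below shows what a small cloud-classification CNN of this kind might look like. It is a minimal illustration written in PyTorch, not the consortium's actual network: the number of input bands, the patch size and the layer widths are all assumptions.

```python
import torch
import torch.nn as nn

class CloudClassifier(nn.Module):
    """Small CNN that labels a hyperspectral patch as cloudy or cloud-free.

    The 3 input bands and the 32x32 patch size are illustrative
    assumptions, not the mission configuration.
    """
    def __init__(self, in_bands: int = 3, patch: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) ** 2, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                     # cloudy / cloud-free logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: run a dummy batch of 4 patches through the network.
model = CloudClassifier()
dummy = torch.randn(4, 3, 32, 32)
print(model(dummy).shape)  # -> torch.Size([4, 2])
```

A network of roughly this size is the kind of workload that a VPU can run within the power budget of a nanosatellite.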


Findings

The availability of labelled cloud training data suitable for Deep Learning was a key focus of the project, since HyperScout-2 is yet to acquire imagery following launch. HyperScout-2 training data was therefore simulated from Copernicus Sentinel-2 cloud labels generated by Sinergise, applying spatial, spectral (i.e. bandwidth) and radiometric (i.e. signal-to-noise ratio) transformations.
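
As a rough illustration of the three transformation types mentioned above, the sketch below degrades a Sentinel-2 reflectance cube spectrally, spatially and radiometrically. The band groupings, scale factor and target SNR are placeholder assumptions, not the parameters used in the project.

```python
import numpy as np

def simulate_hyperscout_like(s2_cube, band_groups, scale=2, target_snr=100.0):
    """Degrade a Sentinel-2 reflectance cube (bands, H, W) towards a coarser
    sensor. Band groupings, the scale factor and the SNR value are
    illustrative placeholders only.
    """
    # Spectral: average Sentinel-2 bands into wider synthetic bands.
    spectral = np.stack([s2_cube[g].mean(axis=0) for g in band_groups])

    # Spatial: block-average to a coarser ground sampling distance.
    n, h, w = spectral.shape
    h2, w2 = (h // scale) * scale, (w // scale) * scale
    spatial = spectral[:, :h2, :w2].reshape(
        n, h2 // scale, scale, w2 // scale, scale).mean(axis=(2, 4))

    # Radiometric: add zero-mean Gaussian noise scaled to the target SNR.
    noise_sigma = spatial.mean() / target_snr
    noisy = spatial + np.random.normal(0.0, noise_sigma, spatial.shape)
    return noisy.astype(np.float32)

# Example with random data standing in for a Sentinel-2 patch.
cube = np.random.rand(4, 64, 64).astype(np.float32)
out = simulate_hyperscout_like(cube, band_groups=[[0, 1], [2, 3]])
print(out.shape)  # -> (2, 32, 32)
```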

Following the work of the consortium, an internal capacity to port Deep Learning models to embedded devices has been built. CNNs for cloud classification and segmentation (86% accuracy, 15% false positives) have been developed at Φ-lab in order to gain a deeper understanding of the challenges of this technology.
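
For readers curious about what porting a model to such a device involves, the sketch below shows one possible inference path using Intel's OpenVINO runtime, which targets Myriad-class VPUs. The file names, the prior conversion step and the MYRIAD device target are assumptions for illustration; this is not a description of the flight software.

```python
import numpy as np
from openvino.runtime import Core

# Assumes the trained network has already been converted to OpenVINO IR
# (cloud_cnn.xml / cloud_cnn.bin) with the model optimizer, e.g.:
#   mo --input_model cloud_cnn.onnx --output_dir ir/
# File names and the MYRIAD target are placeholders for illustration.

core = Core()
model = core.read_model("ir/cloud_cnn.xml")

# Compile for the VPU if one is attached, otherwise fall back to the CPU plugin.
device = "MYRIAD" if "MYRIAD" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

input_layer = compiled.input(0)
output_layer = compiled.output(0)

# Run one dummy patch shaped like the network input through the compiled model.
dummy = np.random.rand(*input_layer.shape).astype(np.float32)
logits = compiled([dummy])[output_layer]
print("Logits:", logits)
```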
