
Sensor-independence for cloud masking

Importance of the work

In Earth Observation, the performance of ML algorithms on many problems is no longer limited by model design but by dataset size. Labelled data is costly to produce, especially for segmentation problems, and the diversity of sensor designs across satellite missions means that datasets are often not directly combinable. As a result, models are trained on small datasets and struggle to profit from the wealth of labelled data already collected for other missions. This problem will only intensify as the number of satellite missions grows.


Sensor-independent model predictions for images with various spectral band combinations used as input. This demonstrates that a single model can predict cloud cover consistently across different band combinations, allowing it to work on multiple sensors. In this case, we show results on a selection of Landsat 8 scenes. Credits: Alistair Francis, UCL.

Overview

This research has focused on designing a CNN that is sensor-independent (usable across multiple sensors) for optical segmentation problems, specifically cloud masking. The model fuses the spatial information in the image with knowledge of the wavelengths of its spectral bands. It can ingest any number of spectral bands and apply knowledge learned on one sensor to prediction on another, allowing simultaneous use across multiple sensors without retraining.
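As an illustration only, the sketch below shows one way such a wavelength-conditioned, band-order-independent design could be structured: a shared per-band encoder is modulated by an embedding of each band's central wavelength, and band features are combined with a permutation-invariant pooling step so the number and order of bands do not matter. The class name, layer sizes and pooling choice are assumptions for this sketch, not details of the published model.

```python
import torch
import torch.nn as nn


class SensorIndependentCloudMasker(nn.Module):
    """Sketch of a cloud-masking CNN that accepts a variable number of
    spectral bands, each tagged with its central wavelength. A shared
    per-band encoder is conditioned on wavelength; band features are then
    pooled with a permutation-invariant max, so the model does not depend
    on the number or ordering of the input bands."""

    def __init__(self, feat=32):
        super().__init__()
        # Shared encoder applied to every band independently.
        self.band_encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Embeds each band's central wavelength into the same feature space.
        self.wavelength_embed = nn.Sequential(
            nn.Linear(1, feat), nn.ReLU(), nn.Linear(feat, feat),
        )
        # Decoder that turns pooled features into a per-pixel cloud mask.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 1),
        )

    def forward(self, bands, wavelengths):
        # bands: (B, N, H, W) reflectances for N spectral bands
        # wavelengths: (B, N) central wavelength of each band
        B, N, H, W = bands.shape
        x = self.band_encoder(bands.reshape(B * N, 1, H, W))      # per-band features
        w = self.wavelength_embed(wavelengths.reshape(B * N, 1))  # wavelength embedding
        x = x * w.unsqueeze(-1).unsqueeze(-1)                     # condition features on wavelength
        x = x.reshape(B, N, -1, H, W).max(dim=1).values           # permutation-invariant pooling over bands
        return torch.sigmoid(self.decoder(x))                     # (B, 1, H, W) cloud probability


# Usage: the same model handles inputs with different numbers of bands.
model = SensorIndependentCloudMasker()
l8 = model(torch.rand(2, 7, 64, 64), torch.rand(2, 7))    # e.g. 7 Landsat 8 bands
s2 = model(torch.rand(2, 13, 64, 64), torch.rand(2, 13))  # e.g. 13 Sentinel-2 bands
```

Because every band passes through the same encoder and pooling is order-independent, bands can be added, dropped or reordered at inference time without retraining.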


Findings

Initial results for cloud masking are promising. Although optimisation of the model design is still ongoing, we have found strong evidence that including training data from one satellite improves performance on another, even when the spectral bands differ. This suggests that combining datasets from a diverse range of satellites will lead to even better results, and could soon yield a model with state-of-the-art performance across multiple sensors simultaneously.
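For illustration, a minimal sketch of what training on a mix of sensors with different band sets might look like with the model sketched above; the tensors here are random stand-ins, and the band counts and learning rate are assumptions, not the project's actual training setup.

```python
import torch
import torch.nn as nn

# Illustrative only: alternate batches from two sensors with different band
# counts; the shared, wavelength-conditioned model is updated by both.
model = SensorIndependentCloudMasker()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

# Stand-ins for real labelled batches: (band images, wavelengths, cloud masks).
landsat8 = (torch.rand(4, 7, 64, 64), torch.rand(4, 7),
            torch.randint(0, 2, (4, 1, 64, 64)).float())
sentinel2 = (torch.rand(4, 13, 64, 64), torch.rand(4, 13),
             torch.randint(0, 2, (4, 1, 64, 64)).float())

for bands, wavelengths, mask in (landsat8, sentinel2):
    optimiser.zero_grad()
    pred = model(bands, wavelengths)
    loss = loss_fn(pred, mask)
    loss.backward()
    optimiser.step()
```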
