Evaluating AlphaEarth Foundation Embeddings for Pixel- and Object-Based Land Cover Classification in Google Earth Engine
- Server: Preprints.org
- DOI: 10.20944/preprints202511.2172.v1
Foundation models such as AlphaEarth introduce a new paradigm in remote sensing by providing semantically rich, pretrained embeddings that integrate multi-sensor, spatio-temporal, and contextual information. This study evaluates the performance of AlphaEarth embeddings for land-cover classification under both pixel-based and object-based paradigms within the Google Earth Engine (GEE) environment. Sentinel-2 imagery for 2024 was used to map a 1,930-hectare region in Pabbi Tehsil, Khyber Pakhtunkhwa, Pakistan, where rapid urbanization is reshaping traditional land use. Four experimental configurations—Pixel-Based Spectral Indices (PBSI), Pixel-Based AlphaEarth Embeddings (PBAE), Object-Based Spectral Indices (OBSI), and Object-Based AlphaEarth Embeddings (OBAE)—were implemented using a Random Forest classifier. The results show that AlphaEarth embeddings consistently outperformed spectral index–based models, improving overall accuracy by ≈ 5 percentage points and the Kappa coefficient by ≈ 3 points. Object-based approaches enhanced spatial coherence and boundary delineation, particularly for built-up and road classes, while maintaining stable area statistics across pipelines. The findings demonstrate that pretrained embeddings can achieve deep-learning-level accuracy through lightweight, cloud-native workflows, offering an efficient pathway for land-cover mapping and urban-cadastral monitoring in data-scarce regions.
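The core idea behind the PBAE configuration—training a Random Forest on per-pixel embedding vectors and scoring it with overall accuracy and Kappa—can be sketched outside GEE. The snippet below is a minimal illustration, not the authors' code: it assumes scikit-learn, uses synthetic 64-dimensional feature vectors standing in for AlphaEarth embeddings, and invents four generic land-cover classes for demonstration.

```python
# Illustrative sketch (not the study's actual pipeline): pixel-based Random
# Forest classification on synthetic embedding-like features, reported with
# the same metrics the abstract cites (overall accuracy and Kappa).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_classes, n_per_class, dim = 4, 250, 64  # hypothetical classes, 64-dim features

# Synthetic "embeddings": each class clusters around its own mean vector,
# mimicking semantically separable pretrained features.
means = rng.normal(0.0, 1.0, size=(n_classes, dim))
X = np.vstack([means[c] + 0.5 * rng.normal(size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"Overall accuracy: {accuracy_score(y_te, pred):.3f}")
print(f"Kappa:            {cohen_kappa_score(y_te, pred):.3f}")
```

In the actual cloud-native workflow, the feature matrix would instead come from sampling the embedding bands of an `ee.Image` at labeled training points, with classification done by GEE's built-in Random Forest; the evaluation logic is the same.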