TeamTTA: Efficient Multi-Device Collaboration for Open-Set Test-Time Adaptation via Cloud Integration


Abstract

Deep neural networks (DNNs) deployed on edge devices often suffer severe performance degradation when exposed to dynamic, continually shifting environments. Test-time adaptation (TTA) has emerged as a promising solution that updates models online with incoming test data. Edge deployment, however, poses unique challenges: limited computational resources, inference latency introduced by online adaptation, and knowledge isolation across devices. The situation becomes even more complex in open-world scenarios, where the presence of unknown categories further disrupts adaptation. To overcome these limitations, we propose TeamTTA, a cloud-integrated framework for efficient multi-device collaborative open-set test-time adaptation. Specifically, TeamTTA aggregates reliable samples from multiple edge devices through crowdsourcing, uploads them to the cloud, and maintains a memory buffer for continual adaptation. A large vision model (LVM) in the cloud leverages its zero-shot generalization ability to filter out open-set samples and serves as a teacher, distilling its knowledge into a replicated student edge model maintained in the cloud. The adapted model parameters, or only the global statistics under poor network conditions, are then transmitted back to the edge devices for efficient inference. Extensive experiments on standard public TTA benchmarks, covering both corrupted and open-set datasets, show that TeamTTA achieves superior adaptation accuracy, robustness to distribution shifts, and communication efficiency, outperforming state-of-the-art TTA baselines. These results validate the effectiveness of integrating cloud-edge collaboration with LVM-driven knowledge distillation for real-world edge intelligence.
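The abstract's cloud-side pipeline can be illustrated schematically. The sketch below is a minimal, hypothetical rendering of the three stages it describes, not the paper's actual implementation: filtering open-set samples by the teacher's zero-shot confidence, nudging student parameters toward the teacher as a stand-in for knowledge distillation, and choosing between transmitting full parameters or only global statistics depending on network conditions. All function names, the confidence threshold `tau`, and the EMA-style update rate `alpha` are illustrative assumptions.

```python
def filter_open_set(samples, teacher_confidence, tau=0.5):
    """Keep only samples the cloud LVM teacher assigns a zero-shot
    confidence of at least tau; lower-confidence samples are treated
    as open-set and discarded (threshold tau is an assumption)."""
    return [s for s, c in zip(samples, teacher_confidence) if c >= tau]

def distill_step(student_params, teacher_params, alpha=0.1):
    """One toy 'distillation' update: move each student parameter a
    fraction alpha toward the teacher's. A real system would minimize
    a distillation loss on the buffered samples instead."""
    return [sp + alpha * (tp - sp) for sp, tp in zip(student_params, teacher_params)]

def payload_for_edge(student_params, features, good_network):
    """Build the message sent back to edge devices: full adapted
    parameters when bandwidth allows, otherwise only global feature
    statistics (mean and variance over the memory buffer)."""
    if good_network:
        return {"params": student_params}
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    return {"stats": (mean, var)}
```

Under this reading, the communication-efficiency claim follows from the fallback branch: two scalars per feature channel cost far less to transmit than a full parameter vector.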

Article Details

Section: Articles