Abstract
Active learning (AL) is a machine learning technique that aims to reduce annotation costs by selectively choosing the most informative samples for labeling. This process relies on acquisition functions, which can be broadly categorized into two types: representativity-based and uncertainty-based. Representativity-based functions focus on exploring the dataset, while uncertainty-based functions refine decision boundaries. This creates a trade-off known as the exploration-exploitation dilemma. To address this challenge, we propose a novel approach that alternates between these two types of acquisition functions. Our method employs an adaptive feedback-driven selection mechanism, an annealing-based approach, or a baseline random criterion to guide the alternation process. This strategy helps mitigate common AL issues, such as batch mode inefficiency and cold start problems. Our experiments demonstrate that the alternating approach enhances both the accuracy and robustness of the AL process. Additionally, we consider the balance between accuracy and energy consumption, contributing to the development of more sustainable AI systems. By evaluating our criterion across various models and datasets, we show its potential to reduce computational costs while maintaining or even improving accuracy. Notably, alternating between the BALD and BADGE acquisition functions yields particularly robust results.
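The paper itself is not reproduced here, so the following is only a minimal, illustrative sketch of the alternation idea described in the abstract, assuming a standard pool-based setup with scikit-learn. The entropy score stands in for an uncertainty-based criterion such as BALD, the nearest-labeled-distance score stands in for a representativity/diversity criterion such as BADGE, and the exponential schedule is one plausible reading of the annealing-based alternation; none of these are the authors' exact acquisition functions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool-based active-learning setup (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
seed = rng.choice(len(X), size=20, replace=False)          # small initial labeled set
labeled = list(seed)
seed_set = set(seed.tolist())
pool = [i for i in range(len(X)) if i not in seed_set]

def uncertainty_scores(model, X_pool):
    """Exploitation stand-in: predictive entropy (plays the role of BALD here)."""
    p = model.predict_proba(X_pool)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def diversity_scores(X_pool, X_labeled):
    """Exploration stand-in: distance to the nearest labeled point (representativity proxy)."""
    d = np.linalg.norm(X_pool[:, None, :] - X_labeled[None, :, :], axis=-1)
    return d.min(axis=1)

n_rounds, batch_size, temperature = 20, 10, 5.0
for t in range(n_rounds):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

    # Annealing-based alternation: explore with probability exp(-t / temperature),
    # so early rounds favour representativity and later rounds favour uncertainty.
    explore = rng.random() < np.exp(-t / temperature)
    if explore:
        scores = diversity_scores(X[pool], X[labeled])
    else:
        scores = uncertainty_scores(model, X[pool])

    picked = np.argsort(scores)[-batch_size:]               # positions of top-scoring pool items
    labeled += [pool[i] for i in picked]
    picked_set = set(picked.tolist())
    pool = [idx for j, idx in enumerate(pool) if j not in picked_set]

print("final labeled-set size:", len(labeled))
```

The feedback-driven variant mentioned in the abstract would replace the fixed schedule with a choice informed by, for example, the validation gain observed after each round; the random baseline simply flips a fair coin between the two acquisition types.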
Original language | English |
---|---|
Title | 2024 IEEE International Conference on Big Data |
Pages | 5755-5764 |
DOIs | |
Publication status | Published - 2025 |
Event | 2024 IEEE International Conference on Big Data - Washington, United States; Duration: 15 Dec 2024 → 18 Dec 2024 |
Conference
Conference | 2024 IEEE International Conference on Big Data |
---|---|
Country/Territory | United States |
City | Washington |
Period | 15/12/24 → 18/12/24 |
Research Field
- Complex Dynamical Systems