Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification
Abstract
Class imbalance naturally arises when models are trained and tested on data from different domains. Unsupervised domain adaptation (UDA) improves model performance using only annotations from the source domain and unlabeled data from the target domain. However, existing state-of-the-art UDA models learn domain-invariant representations and are evaluated primarily on class-balanced data across domains. In this work, we propose an unsupervised domain adaptation approach via reinforcement learning that jointly leverages feature variants and imbalanced labels across domains. We experiment on the text classification task, for which datasets are readily available, and compare the proposed method with five baselines. Experiments on three datasets show that our method effectively learns robust domain-invariant representations and successfully adapts text classifiers to imbalanced classes across domains. The code is available at https://github.com/woqingdoua/ImbalanceClass.
Publication Title
*SEM 2022 - 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference
Recommended Citation
Wu, Y., & Huang, X. (2022). Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification. *SEM 2022 - 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference, 311-322. Retrieved from https://digitalcommons.memphis.edu/facpubs/20259