Segment, Mask, and Predict: Augmenting Chinese Word Segmentation with Self-Supervision


Please cite:
@inproceedings{maimaiti-etal-segment-mask-predict,
  title={Segment, Mask, and Predict: Augmenting Chinese Word Segmentation with Self-Supervision},
  author={Maimaiti, Mieradilijiang and Liu, Yang and Zheng, Yuanhang and Chen, Gang and Huang, Kaiyu and Zhang, Ji and Luan, Huanbo and Sun, Maosong},
  booktitle={Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
}


Recent state-of-the-art (SOTA) neural network methods and fine-tuning methods based on pre-trained models (PTMs) have been applied to Chinese word segmentation (CWS) and achieve strong results. However, previous works train their models on a fixed corpus at every iteration, discarding the intermediate information generated during training, which is also valuable. Moreover, the robustness of previous neural methods is limited by the large-scale annotated data, which contains some noise; limited efforts have been made by previous studies to deal with these problems. In this work, we propose a self-supervised CWS approach with a straightforward and effective architecture. First, we train a word segmentation model and use it to generate segmentation results. Then, we use a revised masked language model (MLM) to evaluate the quality of these segmentation results based on the MLM's predictions. Finally, we leverage these evaluations to guide the training of the segmenter via improved minimum risk training. Experimental results show that our approach outperforms previous methods on nine different CWS datasets under both single-criterion and multi-criteria training, and achieves better robustness.
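To make the segment-mask-predict idea concrete, the following is a minimal, self-contained sketch of how an MLM-style predictor could score a candidate segmentation: each segment is masked in turn, and the score is the fraction of segments the model recovers from context. All names here are illustrative assumptions (the toy dictionary-based `toy_mlm_predict` stands in for the paper's revised MLM, and `score_segmentation` for its quality evaluation); it is not the authors' implementation.

```python
def toy_mlm_predict(context_left, context_right, length):
    """Stand-in for a masked language model: guesses the masked span
    from a tiny lookup keyed on the surrounding context.
    A real system would query a trained MLM here."""
    guesses = {
        ("", "喜欢苹果"): "我",      # "I"
        ("我", "苹果"): "喜欢",      # "like"
        ("我喜欢", ""): "苹果",      # "apples"
    }
    return guesses.get((context_left, context_right), "?" * length)

def score_segmentation(segments):
    """Mask each segment and check whether the (toy) MLM recovers it;
    the recovery rate serves as a quality signal, analogous to the
    evaluation that weights minimum risk training of the segmenter."""
    sent = "".join(segments)
    correct = 0
    pos = 0
    for seg in segments:
        left, right = sent[:pos], sent[pos + len(seg):]
        if toy_mlm_predict(left, right, len(seg)) == seg:
            correct += 1
        pos += len(seg)
    return correct / len(segments)

good = ["我", "喜欢", "苹果"]   # plausible boundaries: "I | like | apples"
bad = ["我喜", "欢苹", "果"]    # misplaced boundaries
print(score_segmentation(good))  # 1.0
print(score_segmentation(bad))   # 0.0 — wrong boundaries score lower
```

In the full approach, such per-sentence scores would feed back into the segmenter's training objective, rewarding segmentations the MLM finds predictable and penalizing those it does not.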