Graph-based Text Classification by Contrastive Learning with Text-level Graph Augmentation
Ximing Li, Bing Wang, Yang Wang, Meng Wang
Text Classification (TC) is a fundamental task in the information retrieval community. Nowadays, mainstream TC methods are built on deep neural networks, which learn far more discriminative text features than traditional shallow learning methods. Among existing deep TC methods, those based on Graph Neural Networks (GNNs) have attracted particular attention due to their superior performance. Technically, GNN-based TC methods mainly transform the full training dataset into a single graph of texts; however, they often neglect the dependencies between words, and thus miss semantic information that can be essential for representing texts accurately. To address this problem, we instead generate graphs of words, so as to capture the dependency information of words. Specifically, each text is translated into a graph of words, where neighboring words are linked. We learn the node features of words by a GNN-like procedure and then aggregate them into a graph-level feature that represents the current text. To further improve the text representations, we introduce a contrastive learning regularization term. Specifically, we generate two augmented text graphs for each original text graph, and we constrain the representations of the two augmented graphs from the same text to be close and those from different texts to be far apart. We propose various techniques for generating the augmented graphs. Building on these ideas, we develop a novel deep TC model, namely Text-level Graph Networks with Contrastive Learning (TGNCL).
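To make the abstract's pipeline concrete, the following is a minimal PyTorch-style sketch of the three ingredients it describes: building a text-level graph that links neighboring words, encoding it with a simple message-passing network whose node features are aggregated into one text representation, and applying a contrastive term over two augmented views of the same graph. The window size, the edge-dropping augmentation, the mean-aggregation layer, and the NT-Xent loss are illustrative assumptions for this sketch, not the authors' exact architecture or augmentation techniques.

```python
# Sketch of: text-level word graph -> GNN-like encoder -> graph readout,
# plus a contrastive regularizer over two augmented views of each graph.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_word_graph(token_ids, window=3):
    """Map a token-id sequence to (node_ids, adjacency); words that co-occur
    within the sliding window are linked (self-loops included)."""
    nodes = sorted(set(token_ids))
    index = {w: i for i, w in enumerate(nodes)}
    adj = torch.eye(len(nodes))
    for i, w in enumerate(token_ids):
        for j in range(max(0, i - window + 1), i):
            a, b = index[w], index[token_ids[j]]
            adj[a, b] = adj[b, a] = 1.0
    return torch.tensor(nodes), adj


def augment(node_ids, adj, drop_prob=0.2):
    """One illustrative augmentation: randomly drop edges, keep self-loops."""
    keep = (torch.rand_like(adj) > drop_prob).float()
    keep = torch.maximum(keep, torch.eye(adj.size(0)))
    return node_ids, adj * ((keep + keep.t()) > 0).float()


class TextGraphEncoder(nn.Module):
    """Embeds word nodes, runs mean-neighborhood message passing, and
    max-pools the node features into one graph-level text representation."""

    def __init__(self, vocab_size, dim=128, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.mps = nn.ModuleList(nn.Linear(dim, dim) for _ in range(layers))
        self.proj = nn.Linear(dim, dim)  # projection head used for the contrast

    def forward(self, node_ids, adj):
        h = self.embed(node_ids)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for mp in self.mps:
            h = F.relu(mp(adj @ h / deg)) + h   # mean aggregation + residual
        graph_repr = h.max(dim=0).values        # readout over word nodes
        return F.normalize(self.proj(graph_repr), dim=-1)


def nt_xent(z1, z2, temperature=0.5):
    """Contrastive term: the two views of the same text are positives;
    views of the other texts in the batch act as negatives."""
    batch = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                    # (2B, d)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * batch, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)


# Usage sketch for a single text (batch of one shown only for brevity;
# in practice the other texts in the batch supply the negatives).
tokens = [4, 8, 15, 16, 23, 42, 8, 4]
nodes, adj = build_word_graph(tokens, window=3)
enc = TextGraphEncoder(vocab_size=100)
z1 = enc(*augment(nodes, adj)).unsqueeze(0)
z2 = enc(*augment(nodes, adj)).unsqueeze(0)
loss = nt_xent(z1, z2)
```

In a full model, the graph-level representation would additionally feed a classification head, and the contrastive loss would be added to the supervised objective as the regularization term described above.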