Short Text Classification Model based on Pre-trained Language Model with Feature Fusion

Authors

  • Haihui Huang
  • Shiyang Hu

DOI:

https://doi.org/10.56028/aetr.9.1.534.2024

Keywords:

Attention mechanism, BiSRU, Feature fusion, Text classification.

Abstract

To address the low accuracy of Chinese short text classification in the data mining field, as well as the large parameter counts and high time complexity of existing deep learning models, this paper proposes a new short text classification model, ACBSM, based on a pre-trained language model with feature fusion. In ACBSM, the BERT model is used to train word-vector representations, which tackles the high dimensionality of text data and inaccurate text representation, and resolves polysemy (multiple meanings of a word). At the parallelization level, a two-channel neural network acceleration strategy is designed to improve the algorithm's efficiency on massive data. To handle the sparsity and complex semantics of short text, an attention mechanism is introduced and a CNN model is used to enhance the extraction of keyword information; a BiSRU then captures the contextual features of the text. Finally, experimental validation is conducted on a news dataset. The results show that, under the same environment and dataset, ACBSM raises text classification accuracy to 95.83%, outperforming other text classification methods.
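The abstract's architecture can be outlined as a minimal sketch: BERT-style token embeddings feed two parallel channels, a CNN with attention pooling for keyword features and a bidirectional recurrent channel for context, whose outputs are fused by concatenation before classification. All names, dimensions, and the use of `nn.GRU` as a stand-in for the paper's BiSRU are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ACBSMSketch(nn.Module):
    """Hypothetical sketch of the two-channel ACBSM architecture."""
    def __init__(self, embed_dim=128, n_filters=64, hidden=64, n_classes=10):
        super().__init__()
        # Channel 1: 1-D convolution over token embeddings (keyword features)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        # Simple additive attention over the convolutional feature map
        self.attn = nn.Linear(n_filters, 1)
        # Channel 2: bidirectional GRU standing in for BiSRU (context features)
        self.birnn = nn.GRU(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        # Fusion by concatenation, then linear classifier
        self.fc = nn.Linear(n_filters + 2 * hidden, n_classes)

    def forward(self, x):  # x: (batch, seq_len, embed_dim), e.g. BERT output
        # CNN channel with attention-weighted pooling over time
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (B,T,F)
        w = torch.softmax(self.attn(c), dim=1)                        # (B,T,1)
        cnn_feat = (w * c).sum(dim=1)                                 # (B,F)
        # Recurrent channel: concatenate final forward/backward states
        _, h = self.birnn(x)                                          # (2,B,H)
        rnn_feat = torch.cat([h[0], h[1]], dim=1)                     # (B,2H)
        return self.fc(torch.cat([cnn_feat, rnn_feat], dim=1))

# Dummy batch standing in for BERT embeddings; a real pipeline would
# feed the hidden states of a pre-trained BERT encoder instead.
model = ACBSMSketch()
logits = model(torch.randn(4, 20, 128))
print(logits.shape)  # torch.Size([4, 10])
```

Since the two channels share only the input, they can run in parallel, which matches the abstract's two-channel acceleration strategy.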

Published

2024-01-25