Enhancing Public Affairs Text Classification via BERT-CNN-BiLSTM Feature Fusion
DOI: https://doi.org/10.5755/j01.itc.55.1.42337

Keywords: Natural Language Processing, Public Affairs Text, Text Classification, Fusion Model, BERT, BiLSTM, CNN

Abstract
Amid the rapid advancement of digital governance and the exponential growth of public appeal data, traditional manual text classification increasingly fails to meet governmental requirements for efficient, accurate, and timely service delivery. This study focuses on the automatic classification of public affairs appeal texts through a systematic investigation of deep learning models. To address data duplication, class imbalance, and textual noise, we implemented optimization strategies including deduplication and class resampling. To overcome the generalization and stability limitations of individual models — specifically Enhanced TextCNN, BiLSTM with attention, BERT, and ERNIE 3.0 — we propose a deep neural network that integrates BERT’s contextual semantic embeddings, CNN’s local feature extraction, and BiLSTM’s temporal dependency modeling. The architecture employs feature concatenation and dropout to jointly exploit global semantics, local phrase patterns, and sequential features. Experimental results demonstrate substantial superiority over conventional models, achieving 99.06% accuracy and a 99.03% F1-score on the validation set, confirming strong classification performance and robustness. This approach offers an efficient solution for intelligent public appeal processing while advancing digital governance capabilities and governmental modernization. Furthermore, it establishes a valuable reference framework for complex Chinese text classification tasks.
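The fusion architecture described in the abstract — BERT contextual embeddings fed in parallel to a CNN branch (local phrase features) and a BiLSTM branch (sequential features), concatenated and passed through dropout to a classifier — can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the kernel sizes, channel counts, hidden sizes, dropout rate, and class count are all illustrative assumptions, and BERT itself is not instantiated here (its output is assumed as an input tensor of shape `(batch, seq_len, 768)`).

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Illustrative BERT-CNN-BiLSTM fusion head (hyperparameters assumed).

    Takes precomputed BERT contextual embeddings of shape (B, T, H) and
    fuses a CNN branch with a BiLSTM branch by feature concatenation.
    """
    def __init__(self, hidden=768, num_classes=10,
                 conv_channels=128, kernel_sizes=(2, 3, 4), lstm_hidden=128):
        super().__init__()
        # CNN branch: parallel 1-D convolutions over the token axis,
        # each followed by max-over-time pooling (TextCNN-style)
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, conv_channels, k) for k in kernel_sizes)
        # BiLSTM branch: bidirectional temporal dependency modeling
        self.bilstm = nn.LSTM(hidden, lstm_hidden,
                              batch_first=True, bidirectional=True)
        fused_dim = conv_channels * len(kernel_sizes) + 2 * lstm_hidden
        self.dropout = nn.Dropout(0.5)          # rate assumed
        self.fc = nn.Linear(fused_dim, num_classes)

    def forward(self, bert_out):                # bert_out: (B, T, H)
        x = bert_out.transpose(1, 2)            # (B, H, T) for Conv1d
        conv_feats = [torch.relu(c(x)).max(dim=2).values
                      for c in self.convs]      # each (B, conv_channels)
        lstm_out, _ = self.bilstm(bert_out)     # (B, T, 2 * lstm_hidden)
        lstm_feat = lstm_out[:, -1, :]          # last-step summary vector
        fused = torch.cat(conv_feats + [lstm_feat], dim=1)
        return self.fc(self.dropout(fused))     # (B, num_classes)

# Shape check with dummy "BERT" embeddings: batch 4, 32 tokens, 768 dims
logits = FusionClassifier()(torch.randn(4, 32, 768))
print(logits.shape)  # torch.Size([4, 10])
```

The key design point conveyed by the abstract is that the two branches are complementary: max-pooled convolutions capture short n-gram patterns regardless of position, while the BiLSTM summarizes long-range order, and simple concatenation plus dropout combines them before the final linear classifier.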
License
Copyright terms are indicated in the Republic of Lithuania Law on Copyright and Related Rights, Articles 4-37.


