Prompting Large Language Models for Automatic Question Tagging
Graphical Abstract
Abstract
Automatic question tagging (AQT) is a crucial task on community question answering (CQA) websites: accurate tags substantially improve the user experience by making question answering more efficient. Existing question tagging models focus on the features of questions and tags while ignoring external, real-world knowledge. Large language models (LLMs) can serve as knowledge engines that incorporate real-world facts into many tasks, but it is difficult for an LLM to output tags that belong to the tag database of a CQA website. To address this challenge, we propose an LLM-enhanced question tagging method, LLMEQT. In LLMEQT, a traditional question tagging method first pre-retrieves candidate tags for each question; prompts are then formulated so that the LLM understands the task and selects the most suitable tags for the question from the candidates. Experiments on two real-world datasets demonstrate that LLMEQT significantly improves automatic question tagging performance for CQA, surpassing state-of-the-art methods.
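To make the two-stage pipeline described above concrete, the following minimal Python sketch shows one possible realization: a conventional tagger pre-retrieves candidate tags from the site's tag vocabulary, a prompt restricts the LLM to those candidates, and the LLM's answer is filtered back against the candidate set. All function names, the toy retrieval score, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a retrieve-then-prompt tagging pipeline.
# pre_retrieve_tags, build_prompt, and query_llm are hypothetical stand-ins
# for the paper's retriever, prompt template, and LLM interface.

from typing import Callable, List


def pre_retrieve_tags(question: str, tag_vocabulary: List[str], k: int = 10) -> List[str]:
    """Stage 1: score tags from the CQA tag database and keep the top-k candidates.
    A trivial keyword-overlap score stands in for a trained tagging model."""
    words = set(question.lower().split())
    scored = sorted(tag_vocabulary, key=lambda t: -len(words & set(t.lower().split("-"))))
    return scored[:k]


def build_prompt(question: str, candidates: List[str], n_tags: int = 3) -> str:
    """Stage 2: formulate a prompt that explains the task and constrains the LLM
    to choose tags only from the pre-retrieved candidate list."""
    return (
        "You are tagging questions for a community question answering website.\n"
        f"Question: {question}\n"
        f"Candidate tags: {', '.join(candidates)}\n"
        f"Select the {n_tags} most suitable tags from the candidate list only, "
        "and return them as a comma-separated list."
    )


def tag_question(question: str, tag_vocabulary: List[str],
                 query_llm: Callable[[str], str], n_tags: int = 3) -> List[str]:
    """Run both stages and keep only tags that appear in the candidate set,
    so the final output stays inside the site's tag database."""
    candidates = pre_retrieve_tags(question, tag_vocabulary)
    response = query_llm(build_prompt(question, candidates, n_tags))
    chosen = [t.strip() for t in response.split(",")]
    return [t for t in chosen if t in candidates][:n_tags]
```

Here `query_llm` is any callable that sends a prompt string to a chat-style LLM and returns its text response; the final filtering step reflects the abstract's point that free-form LLM output must be mapped back onto the site's existing tags.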