Language model optimization for mental health question answering application
DOI: 10.11591/ijece.v15i5.pp4829-4836
Fardan Zamakhsyari, Agung Fatwanto
Question answering (QA) is a natural language processing (NLP) task in which the bidirectional encoder representations from transformers (BERT) language model has shown remarkable results. This research focuses on optimizing the IndoBERT and multilingual BERT (mBERT) models for QA in the mental health domain, using a translated version of the Amod/mental_health_counseling_conversations dataset from Hugging Face. The optimization process involves fine-tuning IndoBERT and mBERT, with performance evaluated using the BERTScore components: F1, recall, and precision. The results indicate that fine-tuning substantially boosts IndoBERT's performance, yielding an F1-BERTScore of 91.8%, a recall of 89.9%, and a precision of 93.9%, a 28% improvement. Fine-tuning mBERT yields an F1-BERTScore of 79.2%, a recall of 73.4%, and a precision of 86.2%, an improvement of only 5%. These findings underscore the importance of fine-tuning and of using language-specific models such as IndoBERT for specialized NLP tasks, demonstrating the potential to build more accurate and contextually relevant question-answering systems in the mental health domain.
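As a rough illustration of the evaluation step described above (not the authors' code), the sketch below loads the public English counseling dataset and scores model-generated answers against the reference answers with BERTScore precision, recall, and F1. It assumes the datasets and bert-score packages; the field names "Context" and "Response" follow the public dataset card, the generate_answer stub stands in for the fine-tuned IndoBERT or mBERT model, and the Indonesian translation step used in the paper is omitted.

    # Minimal sketch: BERTScore evaluation of generated answers (assumptions noted above).
    from datasets import load_dataset
    from bert_score import score

    # Load the English counseling dataset; the paper uses a translated Indonesian version.
    ds = load_dataset("Amod/mental_health_counseling_conversations", split="train")

    questions = ds["Context"][:8]    # patient questions (field name per the dataset card)
    references = ds["Response"][:8]  # counselor answers used as references

    # Hypothetical stand-in for the fine-tuned QA model: any callable mapping question -> answer.
    def generate_answer(question: str) -> str:
        return question  # stub so the script runs end to end

    candidates = [generate_answer(q) for q in questions]

    # BERTScore precision/recall/F1 between generated and reference answers
    # (lang="id" selects a multilingual backbone for Indonesian text).
    P, R, F1 = score(candidates, references, lang="id", verbose=True)
    print(f"precision={P.mean().item():.3f}  recall={R.mean().item():.3f}  F1={F1.mean().item():.3f}")

Averaging the per-example scores in this way yields the corpus-level precision, recall, and F1 figures of the kind reported in the abstract.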