BiLSTM Models with and Without Pretrained Embeddings and BERT on German Patient Reviews

dc.contributor.authorRoy, P.
dc.contributor.authorRiad, M.J.A.
dc.contributor.authorAkter, L.
dc.contributor.authorHasan, N.
dc.contributor.authorShuvo, M.R.
dc.contributor.authorQuader, M.A.
dc.contributor.authorAnwar, A.S.
dc.date.accessioned2025-05-02T08:37:17Z
dc.date.issued2024-05-16
dc.description.abstractThis research delves into the use of deep learning for sentiment analysis, focusing on German patient reviews with two neural network architectures: a Bidirectional Long Short-Term Memory (BiLSTM) model with pretrained embeddings and a BERT (Bidirectional Encoder Representations from Transformers) model. The primary goal is to determine how effective these models are at categorising patient sentiments into 'poor' and 'excellent' categories. We used a dataset of approximately 500,000 reviews, each with a numerical rating and a textual comment; the text data was preprocessed to facilitate model input. Our findings show a significant difference in performance between the two models, with the BERT model performing better in terms of accuracy and balanced classification across sentiment categories. The BERT model achieved a remarkable accuracy of 98.4%, with high precision, recall, and F1-scores, particularly in classifying 'poor' sentiments (0.98, 0.99, and 0.99, respectively). The BiLSTM model with pretrained embeddings showed a pronounced dip in performance for the 'excellent' category, with zero scores across precision, recall, and F1-score, while retaining high performance for the 'poor' category. Overall, the BiLSTM model without pretrained embeddings achieved an accuracy of 91%, while the BiLSTM model with pretrained embeddings reached 97%. The study underscores the critical influence of model selection and the incorporation of pretrained embeddings in achieving high accuracy and balanced classification across sentiment categories. This research contributes to the ongoing conversation about the application of advanced machine learning techniques to the analysis of patient feedback and sentiment classification.
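The pipeline described in the abstract, mapping each review's numerical rating to a binary 'poor'/'excellent' sentiment label and preprocessing the German text for model input, can be sketched as follows. This is a minimal illustration only: the rating threshold, the direction of the rating scale, and the tokenisation rules are assumptions, since the paper's metadata does not specify them.

```python
import re

def rating_to_label(rating, threshold=3.0):
    """Map a numerical review rating to a binary sentiment label.

    Assumes a German school-grade style scale where lower values are
    better; both the scale direction and the threshold of 3.0 are
    hypothetical choices, not details taken from the paper.
    """
    return "excellent" if rating <= threshold else "poor"

def preprocess(text):
    """Lowercase the review and keep only letters (including German
    umlauts and sharp s), then split into tokens -- a minimal stand-in
    for the preprocessing step the abstract mentions."""
    text = text.lower()
    text = re.sub(r"[^a-zäöüß\s]", " ", text)
    return text.split()
```

With these two helpers, each (rating, comment) pair in the dataset becomes a (label, token list) pair ready to feed an embedding layer or a BERT tokenizer.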
dc.identifier.citationRoy, P., Riad, M. J. A., Akter, L., Hasan, N., Shuvo, M. R., Quader, M. A., ... & Anwar, A. S. (2024, May). BiLSTM Models with and Without Pretrained Embeddings and BERT on German Patient Reviews. In 2024 International Conference on Advances in Modern Age Technologies for Health and Engineering Science (AMATHE) (pp. 1-5). IEEE.
dc.identifier.isbn979-835037156-7
dc.identifier.urihttp://dspace.uttarauniversity.edu.bd:4000/handle/123456789/620
dc.language.isoen
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.subjectBERT
dc.subjectBiLSTM
dc.subjectsentiment
dc.subjectclassification
dc.subjectembeddings
dc.titleBiLSTM Models with and Without Pretrained Embeddings and BERT on German Patient Reviews
dc.typeOther

Files

Original bundle

Name:
BiLSTM Models with and Without Pretrained Embeddings and BERT on German Patient Reviews(Conference Paper).pdf
Size:
164.77 KB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.71 KB
Format:
Description:
Item-specific license agreed to upon submission
