Improving Fine-tuned Question Answering Models for Electronic Health Records

Tittaya Mairittha, Nattaya Mairittha, Sozo Inoue
Ubicomp Workshop on Computing for Well-Being (WellComp)
688–691
2020-09-10
Mexico (virtual)
https://doi.org/10.1145/3410530.3414436
The prevalence of voice assistants has strengthened interest in
question answering for the medical domain, allowing both patients
and healthcare providers to enter a question naturally and pinpoint
useful information quickly. However, the large number of medical
terms makes building such a system a demanding task. To address
this challenge, we explore transfer learning techniques for
constructing a personalized question answering system over
electronic health records (EHR-QA). The goal is to answer questions
about a discharge summary in an electronic health record. We present
experiments with a pre-trained BERT (Bidirectional Encoder
Representations from Transformers) model fine-tuned on different
tasks, and report the results to provide insights into learning
effects and training effectiveness.
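To illustrate the extractive QA setting the abstract describes, the sketch below shows how a span-prediction head (such as the one placed on top of a fine-tuned BERT) turns per-token start/end scores into an answer span. The tokens and scores here are hypothetical stand-ins; a real model would compute them from a question paired with discharge-summary text.

```python
# Sketch: answer-span selection in extractive QA.
# The scores below are invented for illustration; a fine-tuned BERT
# QA model would produce start/end logits per token.

def best_span(start_scores, end_scores, max_len=15):
    """Return (i, j) maximizing start_scores[i] + end_scores[j],
    subject to i <= j and a maximum span length."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score = s + end_scores[j]
                best = (i, j)
    return best

# Hypothetical discharge-summary snippet and model scores for the
# question "What medication was the patient discharged on?"
tokens = ["The", "patient", "was", "discharged", "on", "metformin", "."]
start = [0.1, 0.2, 0.0, 0.3, 0.1, 2.5, 0.0]
end = [0.0, 0.1, 0.0, 0.2, 0.1, 2.8, 0.3]

i, j = best_span(start, end)
print(" ".join(tokens[i:j + 1]))  # -> metformin
```

The length cap mirrors the common practice of bounding answer spans so the argmax over start/end pairs stays well behaved on long passages.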
