Pharmacovigilance

Improving the Identification of Medication Mentions in Social Media (SM):

Certain forums, such as DailyStrength and WebMD, identify the medication referenced in a post; however, other medications are often mentioned within these posts, so NLP methods are required to identify and extract them. Other SM outlets, such as Twitter, require that medication mentions be searched for directly. To collect tweets mentioning a medication on Twitter, we used a keyword search of medication names through the Twitter Application Programming Interface (API)1. Users posting to social media may not spell medication names correctly, which limits the number of tweets collected when relying solely on keywords that use the proper spelling of generic and trade names. To address this limitation, we developed an unsupervised spelling variant generator. This generator improved upon our previously developed phonetic-based system2: it uses dense vector models to find terms semantically close to the original term, then filters out lexically dissimilar candidates3. The system achieved state-of-the-art performance with an F1-score of 0.69.
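
To illustrate the idea, below is a minimal sketch of the variant-generation step, assuming a word2vec model trained on tweet text; the model file, thresholds, and function names are illustrative placeholders, not those of the published system.

# Minimal sketch: semantic neighbors from embeddings, then a lexical-similarity filter.
from difflib import SequenceMatcher
from gensim.models import KeyedVectors

def generate_spelling_variants(drug_name, embeddings, top_n=200, min_lexical_sim=0.75):
    """Return terms embedded near the drug name that are also spelled similarly."""
    variants = []
    for candidate, _cosine in embeddings.most_similar(drug_name, topn=top_n):
        lexical_sim = SequenceMatcher(None, drug_name, candidate.lower()).ratio()
        if lexical_sim >= min_lexical_sim:   # drop lexically dissimilar terms
            variants.append(candidate)
    return variants

# embeddings = KeyedVectors.load("tweet_word2vec.kv")   # hypothetical model file
# print(generate_spelling_variants("quetiapine", embeddings))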

Keyword and variant matches may not reference a medication, as medication names can be polysemous. Furthermore, detecting medication mentions in collections of tweets or other SM posts that were not gathered with medication-name keywords may be important for pharmacovigilance studies based on those collections. To identify true positive mentions of any medication in an SM post, we developed Kusuri, a binary classifier based on ensemble learning4. Although the classifier achieved high performance (F1-score = 0.937), we found it too slow to process large numbers of tweets, and it showed limited performance on imbalanced data sets; tweets mentioning drugs are very rare in a random sample of tweets from Twitter. We updated our approach with a transformer-based classifier, BERT, trained with under-sampling to disambiguate tweets matched by a lexicon derived from RxNorm5. We obtained performance similar to the best existing system while processing the data 28 times faster. We also expanded the classifier into a sequence labeler to extract medication spans in tweets. We have collected over 27 million tweets using medication keywords and our generated variants, and Kusuri has classified more than 13 million of these tweets as true positive medication mentions.
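
The sketch below shows one way the under-sampling step could precede fine-tuning a BERT classifier on lexicon-matched tweets; the sampling ratio, model checkpoint, and function names are assumptions for illustration, not the published configuration.

# Minimal sketch: keep all positive tweets, cap the negatives, then fine-tune BERT.
import random
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def undersample(texts, labels, neg_per_pos=5, seed=42):
    """Keep every positive example and a capped random sample of negatives."""
    random.seed(seed)
    pos = [(x, y) for x, y in zip(texts, labels) if y == 1]
    neg = [(x, y) for x, y in zip(texts, labels) if y == 0]
    neg = random.sample(neg, min(len(neg), neg_per_pos * len(pos)))
    data = pos + neg
    random.shuffle(data)
    return [x for x, _ in data], [y for _, y in data]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# texts, labels = undersample(all_lexicon_matched_tweets, all_labels)
# ...tokenize and fine-tune with the standard Trainer loop...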

Systems for Detecting Adverse Drug Reaction, Symptom/Disease, and Medication Change Mentions in SM Posts:

Improving upon our previous effort on detecting adverse event mentions in SM1, we developed a deep learning pipeline, DeepADEMiner6. The tool is currently available as a demo, a standalone tool, and an API, with publicly available source code. DeepADEMiner classifies tweets that contain one or more ADRs, extracts the text spans containing the ADRs, and normalizes them to the MedDRA ontology. It uses a semi-supervised learning approach and can extract ADRs that were not present in the training set used to develop the tool.
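
The schematic sketch below outlines the three-stage structure described above (tweet classification, span extraction, MedDRA normalization); the class and method names are illustrative assumptions, not DeepADEMiner's actual API.

# Schematic sketch of a classify -> extract -> normalize ADR pipeline.
from dataclasses import dataclass
from typing import List

@dataclass
class AdeMention:
    text: str        # extracted ADR span
    start: int       # character offsets in the tweet
    end: int
    meddra_id: str   # normalized MedDRA concept

class AdePipeline:
    def __init__(self, classifier, extractor, normalizer):
        self.classifier = classifier    # tweet-level binary classifier
        self.extractor = extractor      # sequence labeler for ADR spans
        self.normalizer = normalizer    # maps spans to MedDRA concepts

    def process(self, tweet: str) -> List[AdeMention]:
        if not self.classifier.predict(tweet):      # stage 1: does the tweet report an ADR?
            return []
        spans = self.extractor.extract(tweet)       # stage 2: locate ADR spans
        return [AdeMention(s.text, s.start, s.end,
                           self.normalizer.lookup(s.text))   # stage 3: normalize
                for s in spans]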

 

In addition to these efforts, we combined annotations from our 2015 study on extracting ADRs and indications from Twitter and DailyStrength to develop a general-purpose tool, SEED, which extracts symptoms and diseases self-reported by users7. Similar to DeepADEMiner, SEED can extract symptoms that were not part of the training set, which is critical for discovering emerging symptoms, diseases, indications, and ADRs that were previously unknown. SEED uses multi-corpus training and deep learning to identify and extract symptom/disease mentions and normalizes them to UMLS terms. The system achieved an F1-score of 0.86 on DailyStrength and 0.72 on a Twitter corpus.

 

To detect mentions of medication change in SM posts, we created a binary classifier that detects mentions of changes in medication treatment in users' posts, regardless of whether the changes were recommended by a physician8. Detecting changes in medication regimens is less ambiguous than identifying medication non-adherence and enables us to deploy automatic methods on a larger scale. We annotated 9,830 tweets mentioning medications and used the corpus to train a convolutional neural network (CNN) to find mentions of medication treatment changes. We used active learning and transfer learning from 12,972 reviews we annotated from WebMD.com to address the class imbalance of our Twitter corpus; tweets mentioning a medication change are scarce among tweets mentioning a medication. To validate the CNN, we applied it to 1.9 million tweets, then annotated 1,956 positive tweets as to whether they indicated non-adherence and categorized the reasons given.
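
For readers unfamiliar with the architecture, the sketch below shows a CNN text classifier of the general kind described above (embedding, 1-D convolutions, max pooling, binary output); vocabulary size, filter widths, and dimensions are illustrative assumptions, not the published model's configuration.

# Minimal sketch of a CNN tweet classifier for medication-change detection.
import torch
import torch.nn as nn

class MedChangeCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=200, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))  # logits: change vs. no change

# model = MedChangeCNN()
# logits = model(torch.randint(1, 30000, (8, 40)))   # toy batch: 8 tweets, 40 tokens each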

 

Cohort Detection and Studies:

In addition to our pregnancy-related studies, we have used NLP methods to identify other patient cohorts. We developed a pipeline to identify possible cases of Covid-19 in the US and UK10,11. The pipeline was trained on an annotated dataset of 8,976 tweets that were collected with a set of Covid-related keywords and filtered using regular expressions. The annotations were used to train a deep neural network to detect self-reports of Covid-19, which achieved an F1-score of 0.76. As this system was developed early in the pandemic, we are updating the classifier by training on a new dataset annotated with more stringent guidelines for classifying a user as having a Covid-19 diagnosis; we have annotated a dataset of 10,000 tweets for the development of the new classifier. We applied our SEED tool to the timelines of 13,200 users (21 million tweets) whose self-reports of Covid-19 were detected by our classifier and extracted more than 1.3 million symptom mentions, most of which corresponded to the common symptoms reported by the CDC for hospitalized and non-hospitalized patients.

 

We have used the approach of searching Twitter for keywords and then filtering the results with precise regular expressions to identify caregivers of patients with dementia and men who have sex with men (MSM). For caregivers of dementia patients, we searched Twitter for keywords specific to dementia together with a term indicating a diagnosis of the disease12. The collected tweets were then filtered with regular expressions related to familial relationships. A set of 8,846 tweets was manually annotated and used to train a classifier, which achieved an F1-score of 0.962.
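
The sketch below illustrates the keyword-plus-regular-expression filtering step in simplified form; the patterns are illustrative assumptions and far simpler than the precise expressions used in the published pipeline.

# Minimal sketch: regex filter applied to tweets already matched by dementia keywords.
import re

FAMILY_PATTERN = re.compile(
    r"\bmy\s+(mom|mother|dad|father|grandma|grandmother|grandpa|grandfather|husband|wife)\b",
    re.IGNORECASE)
DIAGNOSIS_PATTERN = re.compile(
    r"\b(diagnosed with|has|suffering from)\s+(dementia|alzheimer'?s)\b",
    re.IGNORECASE)

def is_candidate_caregiver_tweet(text: str) -> bool:
    """Keep tweets that mention both a family member and a dementia diagnosis."""
    return bool(FAMILY_PATTERN.search(text) and DIAGNOSIS_PATTERN.search(text))

print(is_candidate_caregiver_tweet("My mom was diagnosed with Alzheimer's last year"))  # True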

 

We collected over 3 million tweets containing keywords that men may include in posts to self-identify as gay, bisexual, or MSM13. In addition to the tweets, the Twitter profiles of the users were downloaded. High-precision regular expressions were applied at the tweet level and to the users' profiles to filter for true self-reports. A manual validation of the identified users found that the pipeline has a precision of 0.85. For the 10,043 positively identified users who were geolocated in the US, the full timeline of tweets was downloaded, and we used regular expressions to identify tweets that mentioned PrEP or PrEP-related medications. These tweets will be used to analyze users' discussions around the medication to elucidate the patient experience.

 

As a proof-of-concept study, we searched the tweets of users in our SMPC for mentions of beta blockers and their variants3 to identify a cohort of users who took, or may have taken, the medication during pregnancy14. We utilized several of our NLP tools to identify the timeframe of the pregnancy15 and the outcome of the pregnancy16–18 for users determined to have taken, or possibly to have taken, the medication. We identified 257 pregnancies during which a beta blocker may have been taken, along with the indication for taking the medication in 76.7% of the pregnancies and the maternal age19 in 86.4%. This study indicates the utility of Twitter as a potential source of cohorts for drug safety studies to complement traditional study methods.

 

Medication Studies:

To identify the similarities and differences between adverse events reported in SM and those in regulatory data or published trials, we extracted the adverse events reported on Twitter for one drug, adalimumab, and one drug class, statins. We collected 10,188 tweets mentioning adalimumab, which were then processed by ADRMiner1 to automatically detect, extract, and normalize mentions of adverse events20. After manual validation of the identified adverse events, 801 events remained, mapping to 232 UMLS concepts. We found agreement between the adverse events reported on Twitter and those in FAERS, drug information databases, and controlled studies. We did identify concepts (sleep/nervousness) on Twitter that were not identified in drug information databases, indicating that SM may be a complementary resource for identifying adverse events experienced by patients. In a similar study, we compared adverse event mentions on Twitter that patients ascribed to their statin medication with adverse events reported to regulatory agencies, in drug information databases, and in systematic reviews21. As in the prior study, we found that the types and frequencies of adverse events reported in SM were similar to those in more traditional reporting sources.

 

Patient Perspectives:

SM can provide unfettered access to patient discussions on a myriad of health-related topics. To discover topics of discussion in a mothers-to-be forum, What to Expect, we downloaded seven forums, which users join based on their expected due date22. Using the first post from each thread within the forums, we applied the unsupervised method of Latent Dirichlet Allocation (LDA) topic modeling. We manually labeled the 50 generated topics and analyzed their distribution by trimester. We found that women turn to these forums to discuss health-related topics such as miscarriage and labor and delivery. Identifying the topics of discussion on the forums may help reveal information gaps that exist between provider and patient in everyday practice.
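
A minimal sketch of LDA topic modeling over the first post of each thread is given below; the preprocessing, toy input, and hyperparameters are illustrative assumptions rather than the study's exact configuration.

# Minimal sketch: LDA over first posts using gensim.
from gensim import corpora, models
from gensim.utils import simple_preprocess

first_posts = [
    "any tips for morning sickness in the first trimester",
    "so nervous about my anatomy scan next week",
]  # in practice, one first post per forum thread

tokenized = [simple_preprocess(post) for post in first_posts]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(bow_corpus, num_topics=50, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_topics=5, num_words=8):
    print(topic_id, words)   # topics are then manually labeled and tallied by trimester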

 

In a manual review of the collected tweets mentioning a statin medication, we coded the main themes of each tweet as well as the user's relationship to the statin (e.g., patient, healthcare professional) when it could be identified23. Of the 11,852 tweets coded, 5,201 were health related. Of the health-related tweets, 21.1% included personal beliefs about statins, some of them polarizing, such as statements of the benefits of statins as well as statements of their harmful effects. The most frequently expressed belief, however, was the notion of risk compensation, that is, that the protection gained from the preventive medication afforded users the leeway to engage in riskier behavior, such as making poor dietary choices or being more sedentary. Such beliefs have not been extensively reported in other studies. The identification of such patient beliefs may help guide public health messages or provider-patient conversations.

 

We classified our collection of 324,000 WebMD medication reviews using our medication change detection system, which allows us to study patient-reported reasons for medication change for a given class of medications. We chose statins for our pilot study9. In our WebMD collection, there are 5,156 statin reviews; our pipeline identified 2,458 as containing a mention of a medication change, 2,121 of which were deemed true positives, or changes of interest, in our manual review. The overwhelming majority (90%) cited adverse events as the primary reason for discontinuing or switching their statin medication, with over half reporting more than one adverse event type. A comparison found the adverse events reported to be similar to those reported in the spontaneous reporting systems FAERS and MHRA. Additionally, we found reports of patients' dissatisfaction with their provider interactions, information that is often not captured in interview studies and has been shown to affect adherence.

 

Evidence has consistently shown that the Covid-19 vaccine is safe and effective, including in the pregnant population. Despite these studies, vaccination rates among pregnant people remain less than optimal. To identify patients' reasons for not accepting the vaccine, we mined our SMPC and the health forum What to Expect24. We searched the forum for vaccine-related keywords and then applied regular expressions to identify users who expressed not having received the Covid-19 vaccine. The same expressions were applied to the users in our SMPC whose due dates were calculated to be after December 8, 2020. All posts were manually reviewed and classified as relevant or not. Relevant posts were further coded into one of three key barriers to vaccine uptake, as well as 12 tailored codes capturing the specific reason for vaccine hesitancy. The most frequently stated reason was concern over safety, related to the perceived speed at which the vaccine was developed and the lack of data in pregnancy. Other themes, such as waiting until the second trimester or until after birth, or taking other precautions, have not been reported in other studies. The ability to rapidly identify these reasons so that they can be disseminated and addressed, particularly during an emergent health crisis, highlights the value of SM.

 

 

  1. Nikfarjam, A., Sarker, A., O’Connor, K., Ginn, R. & Gonzalez, G. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. J. Am. Med. Inform. Assoc. JAMIA 22, 671–681 (2015).
  2. Pimpalkhute, P., Patki, A., Nikfarjam, A. & Gonzalez, G. Phonetic spelling filter for keyword selection in drug mention mining from social media. AMIA Jt. Summits Transl. Sci. Proc. AMIA Jt. Summits Transl. Sci. 2014, 90–5 (2014).
  3. Sarker, A. & Gonzalez-Hernandez, G. An unsupervised and customizable misspelling generator for mining noisy health-related text sources. J. Biomed. Inform. 88, 98–107 (2018).
  4. Weissenbacher, D. et al. Deep neural networks ensemble for detecting medication mentions in tweets. J. Am. Med. Inform. Assoc. 26, 1618–1626 (2019).
  5. Weissenbacher, D., Rawal, S., Magge, A. & Gonzalez-Hernandez, G. Addressing Extreme Imbalance for Detecting Medications Mentioned in Twitter User Timelines. in 19th Annual Conference on Artificial Intelligence in Medicine (AIME, 2021). doi:10.1101/2021.02.09.21251453.
  6. Magge, A. et al. DeepADEMiner: a deep learning pharmacovigilance pipeline for extraction and normalization of adverse drug event mentions on Twitter. J. Am. Med. Inform. Assoc. 28, 2184–2192 (2021).
  7. Magge, A., O’ Connor, K., Scotch, M. & Gonzalez-Hernandez, G. SEED: Symptom Extraction from English Social Media Posts using Deep Learning and Transfer Learning. MedRxiv Prepr. Serv. Health Sci. (2021) doi:10.1101/2021.02.09.21251454.
  8. Weissenbacher, D. et al. Active neural networks to detect mentions of changes to medication treatment in social media. J. Am. Med. Inform. Assoc. 28, 2551–2561 (2021).
  9. Golder, S. et al. Patient-Reported Reasons for Switching or Discontinuing Statin Therapy: A Mixed Methods Study Using Social Media. Drug Saf. 45, 971–981 (2022).
  10. Klein, A. Z. et al. Toward Using Twitter for Tracking COVID-19: A Natural Language Processing Pipeline and Exploratory Data Set. J. Med. Internet Res. 23, e25314 (2021).
  11. Golder, S. et al. A chronological and geographical analysis of personal reports of COVID-19 on Twitter from the UK. Digit. Health 8, 20552076221097508 (2022).
  12. Klein, A. Z., Magge, A., O’Connor, K. & Gonzalez-Hernandez, G. Automatically Identifying Twitter Users for Interventions to Support Dementia Family Caregivers: Annotated Data Set and Benchmark Classification Models. JMIR Aging 5, e39547 (2022).
  13. Klein, A., Meanley, S., O’Connor, K., Bauermeister, J. & Gonzalez Hernandez, G. Toward Using Twitter for PrEP-Related Interventions: An Automated Natural Language Processing Pipeline for Identifying Gay or Bisexual Men in the United States. JMIR Public Health Surveill. In Review, (2021).
  14. Klein, A. Z., O’Connor, K., Levine, L. D. & Gonzalez-Hernandez, G. Using Twitter Data for Cohort Studies of Drug Safety in Pregnancy: Proof-of-concept With β-Blockers. JMIR Form. Res. 6, e36771 (2022).
  15. Rouhizadeh, M., Magge, A., Klein, A., Sarker, A. & Gonzalez, G. A Rule-based Approach to Determining Pregnancy Timeframe from Contextual Social Media Postings. in Proceedings of the 2018 International Conference on Digital Health – DH ’18 16–20 (ACM Press, 2018). doi:10.1145/3194658.3194679.
  16. Klein, A. Z., Sarker, A., Weissenbacher, D. & Gonzalez-Hernandez, G. Towards scaling Twitter for digital epidemiology of birth defects. Npj Digit. Med. 2, 1–9 (2019).
  17. Klein, A. Z., Cai, H., Weissenbacher, D., Levine, L. D. & Gonzalez-Hernandez, G. A natural language processing pipeline to advance the use of Twitter data for digital epidemiology of adverse pregnancy outcomes. J. Biomed. Inform. X 100076 (2020) doi:10.1016/j.yjbinx.2020.100076.
  18. Klein, A. Z., Gebreyesus, A. & Gonzalez-Hernandez, G. Automatically Identifying Comparator Groups on Twitter for Digital Epidemiology of Pregnancy Outcomes. in AMIA Joint Summits on Translational Science 317–325 (2020).
  19. Klein, A. Z., Magge, A. & Gonzalez-Hernandez, G. ReportAGE: Automatically extracting the exact age of Twitter users based on self-reports in tweets. PLOS ONE 17, e0262087 (2022).
  20. Smith, K. et al. Methods to Compare Adverse Events in Twitter to FAERS, Drug Information Databases, and Systematic Reviews: Proof of Concept with Adalimumab. Drug Saf. 41, 1397–1410 (2018).
  21. Golder, S. et al. A Comparative View of Reported Adverse Effects of Statins in Social Media, Regulatory Data, Drug Information Databases and Systematic Reviews. Drug Saf. (2020) doi:10.1007/s40264-020-00998-1.
  22. Wexler, A. et al. Pregnancy and health in the age of the Internet: A content analysis of online “birth club” forums. PLOS ONE 15, e0230947 (2020).
  23. Golder, S., O’Connor, K., Hennessy, S., Gross, R. & Gonzalez-Hernandez, G. Assessment of Beliefs and Attitudes About Statins Posted on Twitter: A Qualitative Study. JAMA Netw. Open 3, e208953 (2020).
  24. Golder, S., McRobbie-Johnson, A., Klein, A., Polite, F. & Hernandez, G. G. COVID-19 Vaccination Hesitancy during pregnancy: A Mixed Methods Social Media Analysis. https://www.authorea.com/users/504442/articles/583790-covid-19-vaccination-hesitancy-during-pregnancy-a-mixed-methods-social-media-analysis?commit=cc53e9aa83705beb3782f3f7b7ca21081dfab278 (2022) doi:10.22541/au.166178522.27783111/v1.