
A mixed methods evaluation of patient perspectives on the implementation of an electronic health record-integrated patient-reported symptom and needs monitoring program in cancer care

Abstract

Background

As cancer centers have increased focus on patient-centered, evidence-based care, implementing efficient programs that facilitate effective patient-clinician communication remains critical. We implemented an electronic health record-integrated patient-reported symptom and needs monitoring program (‘cPRO’ for cancer patient-reported outcomes). To aid evaluation of cPRO implementation, we asked patients receiving care in one of three geographical regions of an academic healthcare system about their experiences.

Methods

Using a sequential mixed-methods approach, we collected feedback in two waves. Wave 1 included virtual focus groups and interviews with patients who had completed cPRO. In Wave 2, we administered a structured survey to systematically examine Wave 1 themes. All participants had a diagnosed malignancy and received at least 2 invitations to complete cPRO. We used rapid and traditional qualitative methods to analyze Wave 1 data and focused on identifying facilitators and barriers to cPRO implementation. Wave 2 data were analyzed descriptively.

Results

Participants (n = 180) were on average 62.9 years old; were majority female, White, non-Hispanic, and married; and represented various cancer types and phases of treatment. Wave 1 participants (n = 37) identified facilitators, including cPRO’s perceived value and favorable usability, and barriers, including confusion about cPRO’s purpose and various considerations for responding. High levels of clinician engagement with, and patient education on, cPRO were described as facilitators while low levels were described as barriers. Wave 2 (n = 143) data demonstrated high endorsement rates of cPRO’s usability on domains such as navigability (91.6%), comprehensibility (98.7%), and relevance (82.4%). Wave 2 data also indicated low rates of understanding cPRO’s purpose (56.7%), education from care teams about cPRO (22.5%), and discussing results of cPRO with care teams (16.3%).

Conclusions

While patients reported high value and ease of use when completing cPRO, they also reported areas of confusion, emphasizing the importance of patient education on the purpose and use of cPRO and clinician engagement to sustain participation. These results guided successful implementation changes and will inform future improvements.

Background

Context

Cancer care has shifted toward patient-centeredness, which prioritizes delivering evidence-based, quality care to improve patient outcomes [1, 2]. Accordingly, healthcare organizations are learning how to effectively implement new standards of cancer care, including those that address patient-level outcomes (i.e., patient-reported quality of life, symptoms, treatment satisfaction, and experiences with healthcare systems). Research has demonstrated that patient-centered cancer care can improve multilevel patient outcomes, including survival [3, 4].

While advances in cancer screening and therapeutics have made notable impacts on survival, this benefit can be offset by compromises in health-related quality of life (QOL) [5]. Patients commonly receive multimodal treatments with varying toxicities that can complicate symptom management and negatively impact QOL. Literature highlights the prevalence, persistence, and burden of disease- and treatment-related symptoms and the psychosocial sequelae of living with a chronic or life-threatening condition, as well as needs related to practical concerns (e.g., nutritional or financial), which may go untreated without proactive clinical management systems [6,7,8,9,10]. Additional evidence suggests that effective communication between patients and clinicians remains a challenge [11, 12] and that poor communication can adversely impact health and other relevant outcomes [11, 13]. Implementing programs within routine practice to better identify, communicate, and manage patients’ health needs can promote patient-centered care that improves individual outcomes. Prior evaluations of routine symptom monitoring in cancer care have demonstrated multilevel clinical utility and value, and potential to influence meaningful outcomes [3, 4, 14].

However, implementing an intervention as a standard of care requires strategic planning and iterative evaluation [15, 16]. Use of implementation science (IS) methods to bridge the research-to-practice translation gap has been shown to augment success [17, 18]. IS offers methodological and evaluative models and frameworks to inform implementation processes, drive adoption and system integration, and assess the effects of implementation efforts, including identifying facilitators and barriers and the strategies required to support uptake and sustained delivery [19]. The goal of IS is to facilitate the uptake of evidence-based practice and research evidence into regular use by practitioners, health systems, and health policymakers [20]. Implementation scientists commonly focus on strategies—methods or techniques used to enhance the uptake, implementation, and sustainment of research evidence [21]—that align with context-specific barriers and facilitators to support uptake [19]. IS goes beyond effectiveness of interventions and health innovations to understand the system processes, resources, and capacities needed to support sustained use of best available research evidence [22].

Preliminary work

To address the need for comprehensive symptom monitoring in cancer care, we previously developed and piloted an electronic health record-integrated patient-reported symptom and needs monitoring program (‘cPRO’ for cancer patient-reported outcomes) within Northwestern Medicine’s (NM) electronic health record (EHR) [23, 24]. cPRO is custom-designed to administer validated patient-reported outcome (PRO) measures that assess key symptoms in oncology (depression, anxiety, fatigue, pain interference, and physical function) from the Patient-Reported Outcomes Measurement Information System (PROMIS®) [25, 26] and a checklist to identify supportive care needs. The automated system releases cPRO assessments 72 h before oncology appointments (limited to once every 30 days), and patients complete them via the EHR patient portal prior to their visits. Scores are calculated in real time and immediately available in the EHR to enhance communication and decision-making. Scores that meet or exceed severity thresholds, or indicate an endorsed need, trigger an ‘alert’ via EHR in-box messaging for clinician intervention. Results from our initial feasibility studies demonstrated successful EHR integration and feasible implementation in a single ambulatory cancer care setting [23, 24].
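
As a concrete illustration of the alerting flow just described, the sketch below flags scores that cross severity thresholds. The cutoffs (a T-score of 65 or above for symptom domains, 40 or below for physical function), the function name, and the domain keys are assumptions for illustration; the paper does not specify the deployed thresholds.

```python
# Illustrative sketch of threshold-based alerting. The cutoffs and all
# names are hypothetical placeholders, not the study's actual configuration.

SYMPTOM_CUTOFF = 65           # PROMIS T-scores: mean 50, SD 10; higher = worse symptoms
PHYSICAL_FUNCTION_FLOOR = 40  # for physical function, lower = worse

SYMPTOM_DOMAINS = ("depression", "anxiety", "fatigue", "pain_interference")

def build_alerts(t_scores: dict[str, float], endorsed_needs: list[str]) -> list[str]:
    """Return alert messages for scores meeting or exceeding severity
    thresholds, or for any endorsed supportive care need."""
    alerts = []
    for domain in SYMPTOM_DOMAINS:
        score = t_scores.get(domain)
        if score is not None and score >= SYMPTOM_CUTOFF:
            alerts.append(f"{domain}: T={score} meets severity threshold")
    pf = t_scores.get("physical_function")
    if pf is not None and pf <= PHYSICAL_FUNCTION_FLOOR:
        alerts.append(f"physical_function: T={pf} at or below threshold")
    alerts.extend(f"endorsed need: {need}" for need in endorsed_needs)
    return alerts

# One elevated symptom score plus one endorsed need -> two alert messages
print(build_alerts(
    {"depression": 58, "anxiety": 67, "fatigue": 60,
     "pain_interference": 55, "physical_function": 45},
    ["nutrition"],
))
```

In the deployed program, equivalent logic runs within the EHR, and resulting alerts are routed to care teams as in-box messages.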

Current work

We conducted a modified stepped wedge trial with a type 2 hybrid effectiveness-implementation design and formally evaluated effectiveness and implementation outcomes from key constituents, including patients [27]. Our implementation efforts were guided by IS models, primarily the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) planning and evaluation framework [28, 29] and the Consolidated Framework for Implementation Research (CFIR) for determinants [30], and we report elsewhere on the use of 34 discrete implementation strategies from the nine categories in the Expert Recommendations for Implementing Change (ERIC) taxonomy [31, 32]. Key implementation strategies included the creation and distribution of education materials (e.g., pamphlets and posters), hosting clinician orientation sessions, and developing EHR smartphrases for easier clinician access to cPRO patient responses. We intentionally provided such ‘light-touch’ guidelines for clinicians to give them agency in how they use cPRO results and approach them with patients.

Here, we examine our efforts to implement cPRO from the patient perspective, focusing on facilitators and barriers patients experienced in regularly completing cPRO. Given clinic operational variability, differences across sites, and the size and diversity of the patient population, we applied a mixed methods research (MMR) approach, which enhanced the depth and quality of our data with context to better inform results [33, 34].

Methods

We followed the Standards for Reporting Qualitative Research [35] guidelines for presenting our findings (checklist in Supplementary Material).

Aim

Our overall aim was to elicit direct feedback from patients regarding their experiences of cPRO implementation during the expansion of cPRO within the NM healthcare system.

Design

Using an MMR approach, we collected feedback in two waves over one calendar year. We used qualitative methods (Wave 1; 1-hour focus groups and individual interviews) to identify themes pertaining to patient acceptability [32], and quantitative methods (Wave 2; structured survey of up to 86 items) to conduct a more systematic exploration of identified themes. To include clinics and hospitals in larger urban areas as well as smaller suburban and rural areas, patients from three geographical regions of the Chicago-area NM healthcare system were invited to participate. This project was approved by the Social and Behavioral Research Panel of Northwestern University’s Institutional Review Board (IRB; #STU00207807).

Setting

We conducted this study at outpatient adult oncology clinics across multiple hospitals within a single healthcare system (NM). Existing regions (Central, North, and West) served as clusters for the larger cluster-randomized stepped wedge trial [33]. The Central region includes a single large, urban-based medical center; the North and West regions each comprise smaller hospitals in suburban communities.

Population

Eligible participants for both waves were 18 years of age or older and met the criteria listed in Table 1.

Table 1 Eligibility Criteria

In Wave 1, we defined a priori four user groups to ensure representation regarding number of cPRO completions and held separate feedback sessions for each group. User groups were defined as (1) regular users (completed cPRO at least twice), (2) one-time users (completed cPRO once), (3) never users (never completed cPRO), and (4) users who generated clinical alerts (completed cPRO at least twice and responses prompted alert messages to care teams).
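
For clarity, a minimal sketch of these a priori group definitions follows. The priority given to alert generators over regular users, and the function itself, are illustrative assumptions; the study recruited these groups separately rather than via code.

```python
# Hypothetical sketch of the four a priori cPRO user groups described above.

def classify_user(completions: int, generated_alert: bool) -> str:
    """Assign a patient to one of the four cPRO user groups."""
    if completions >= 2 and generated_alert:
        return "alert-generating user"  # completed >= 2x; responses triggered alerts
    if completions >= 2:
        return "regular user"           # completed cPRO at least twice
    if completions == 1:
        return "one-time user"
    return "never user"

assert classify_user(0, False) == "never user"
assert classify_user(3, True) == "alert-generating user"
```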

Materials

Wave 1 feedback sessions were conducted using semi-structured interview guides customized by user group and session type (interview versus focus group). Interview guide content was informed by CFIR [30] to elicit contextual determinants (i.e., barriers and facilitators) from patient perspectives. Questions covered 4 of the 5 CFIR domains [30] (intervention characteristics, outer setting, inner setting, and characteristics of individuals), excluding questions on cPRO’s implementation process as patients would not be privy to process-related determinants. Topics covered participants’ experiences completing cPRO, why they completed it, comprehension of its purpose, and associated care team communications. Wave 1 sessions were conducted using Zoom Meetings video conference software (Zoom Video Communications Inc., 2016).

Wave 2 participants completed an online survey based on themes that emerged from Wave 1 feedback. Survey items addressed patient understanding of cPRO purpose and functionality, care team cPRO use and related clinical communications, exposure to cPRO educational materials, cPRO impact on health management self-efficacy and care, usability, and compliance. Additional items explored the frequency with which participants’ clinicians asked about cPRO symptom domains during clinical encounters. Survey items utilized multiple choice and both 4- and 5-point Likert-type response scales (see Table 4 in Supplementary Material).

Process

Data collection was sequential with parallel sampling [36] in both waves. Recruitment was purposeful and stratified [36] by region (and by user group for Wave 1). We planned to enroll up to 50 patients across regions to provide feedback via focus groups of 6–10 participants for ideal group conditions [37] and, for privacy reasons, individual interviews for those who generated clinical alerts. For Wave 2, we aimed to enroll 150 patients (50 from each region) to promote results that yield stable estimates and are representative of our overall population [38].

Recruitment

We recruited Wave 1 participants based on eligibility criteria (Table 1), first screening and approaching those from the larger study [39] who consented to re-contact for similar research opportunities, and then those in the NM Enterprise Data Warehouse (an integrated repository of NM clinical data) who had received four or more cPRO invitations as part of their cancer care.

In Wave 2, an automated EHR report was used to screen patients cross-regionally. Eligible patients had completed one or more cPRO assessments during the previous 8 calendar months and were recruited via NM EHR patient-portal messaging.

Data collection

All study participants completed an electronic consent form via the Research Electronic Data Capture (REDCap) platform [40, 41] followed by questions on clinical and sociodemographic characteristics, patient portal usage, and technology use literacy (for descriptive purposes). Wave 1 participants provided live feedback via video conference. Study team members (including ML, KW, EP) with experience and/or training in qualitative research conducted the sessions, which lasted no longer than 45 min for interviews and 60 min for focus groups. Feedback sessions were audiotaped and transcribed for analysis. Transcripts were de-identified once analysis was complete. For Wave 2, we gathered feedback via an electronic survey in REDCap, designed to quantitatively verify endorsement of themes related to barriers and facilitators identified in our Wave 1 qualitative work. Participants completed 46 to 86 items depending on their responses to items with branching logic.
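
To make the branching mechanics concrete, here is a hypothetical sketch of how gated follow-up items produce a variable item count. The specific gate and item counts are invented so the totals match the reported 46 to 86 range; they are not taken from the actual instrument, which implements this logic declaratively in REDCap.

```python
# Hypothetical illustration of survey branching: "gate" items determine
# whether follow-up items are shown, so completed item counts vary.

def administered_items(responses: dict[str, str]) -> list[str]:
    """Return the items a respondent sees, given their gate answers."""
    items = [f"base_{i}" for i in range(1, 47)]  # 46 items shown to everyone
    # ten gate questions (among the base items) each unlock 4 follow-ups
    for gate in range(1, 11):
        if responses.get(f"gate_{gate}") == "yes":
            items += [f"gate_{gate}_followup_{j}" for j in range(1, 5)]
    return items

print(len(administered_items({})))  # 46: no gates endorsed
print(len(administered_items({f"gate_{g}": "yes" for g in range(1, 11)})))  # 86
```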

Participant compensation

Participants who completed a Wave 1 session were compensated $50. Participants who completed a Wave 2 survey were compensated $25.

Data analysis

Data from Waves 1 and 2 were cleaned and analyzed with descriptive statistics using IBM SPSS Statistics Version 28.0. Sample characteristics were described with counts and frequencies on variables spanning demographics, health history, and general technology use; analyses were separated by Wave 1 versus Wave 2 subgroups.

Wave 1 focus group data were analyzed directly from session transcripts generated by Zoom software. Three team members (ML, EP, JC) entered interview responses into an Excel database to enable team coding. We used both rapid and traditional qualitative methods to analyze Wave 1 data. Team members with experience in qualitative analysis (ML, KW, EP) conducted a rapid analysis [42] of the patient feedback to evaluate initial impressions and inform ongoing cPRO implementation [39, 43, 44]. Given the quality improvement nature of the larger project, analyses focused on identifying facilitators and barriers to successful implementation of PROs in cancer care. We applied a directed content analysis approach using implementation research frameworks [45], identifying, categorizing, and condensing all barriers and facilitators discussed (implicitly or explicitly) by participants until themes emerged. Data were double coded, and the coding team met regularly with a principal investigator (SG) to refine emergent themes and resolve disagreements in coding.

Wave 2 data were described using counts and frequencies for each level of categorical variables. Continuous variables were described with means and standard deviations. Missing data were minimal, varying across questions from 0 (0.0%) to 6 (4.2%) responses. Therefore, we present descriptive statistics using complete cases for each variable, which is acceptable for this degree of missingness [46].
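
For readers who want to reproduce this style of summary outside SPSS, a minimal complete-case sketch using Python and pandas follows; column names and data are hypothetical.

```python
# Complete-case descriptive statistics, illustrative equivalent of the
# SPSS analysis described above. Column names and values are toy data.

import pandas as pd

df = pd.DataFrame({
    "easy_to_find": ["Very much", "Somewhat", None, "Not at all", "Very much"],
    "completion_time_min": [3.0, 5.5, 2.0, None, 4.0],
})

# Categorical item: counts and percentages among complete cases only
counts = df["easy_to_find"].value_counts()                       # drops NaN by default
percents = df["easy_to_find"].value_counts(normalize=True) * 100

# Continuous item: mean and standard deviation, ignoring missing values
mean = df["completion_time_min"].mean()  # skipna=True by default
sd = df["completion_time_min"].std()

# Per-item missingness, to verify it stays low enough for complete-case use
missing = df.isna().sum()

print(counts, percents.round(1), f"mean={mean:.1f} sd={sd:.2f}", missing, sep="\n\n")
```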

Results

The final analytic sample size was 180 (n = 37 from Wave 1 and n = 143 from Wave 2). Participants were equally represented across the three regional cancer center sites. Participants’ mean age was 62.9 years (range 33–90) and mean age of diagnosis was 57.6 years (range 26–85). The majority were female, White, non-Hispanic, and married; represented various solid tumor types and hematologic malignancies (with breast cancer diagnoses being predominant); and were relatively equally distributed by treatment status (Table 2). Our sample reported a high level of education, computer literacy, and patient portal usage. Over three-fourths of participants indicated they were “Very Comfortable” using computers or touchscreen devices and used the patient portal frequently.

Table 2 Participating patient characteristics

Wave 1 results

Although we recruited and collected data separately from distinct user groups, formative review of preliminary findings indicated responses across regions and user groups were highly uniform. Therefore, we report cPRO user group data in a consolidated manner. No new themes emerged after analyzing data from 37 participants, indicating that we had reached saturation [47].

The feedback from Wave 1 sessions fell into four themes: (1) practical facilitators; (2) conceptual facilitators and motivators for regularly completing cPRO; (3) practical barriers; and (4) conceptual barriers to completing cPRO. The study team defined practical barriers as more objective and having a tangible or simple solution, and practical facilitators and motivators as those that directly prompt or enable a specific action. Conceptual barriers, facilitators, and motivators were defined as being more subjective and rooted in participants’ perceptions and understandings of various aspects of cPRO. Exemplary quotes from themes and sub-themes are presented in Table 3.

Practical facilitators and motivators

Some participants expressed appreciation for how completing cPRO enabled them to track their own progress and aided communication with their care team outside of appointments. Completing cPRO led some to think about their symptoms ahead of time and feel more prepared for their doctor visits. Others expressed that completing cPRO helped them feel less pressure to remember everything about their symptoms at appointments, which is especially helpful when cognition is affected by cancer treatment. These factors led to what some described as more efficient appointments, which is advantageous when time is limited. Other practical factors included the relative ease of completing cPRO. Participants reported that the EHR-integrated questionnaires were easy to find and complete and that the time required was reasonable. The facilitating practical factor reported most often was acknowledgement of cPRO by their care team (e.g., asking patients to look for cPRO email invitations or referencing participants’ responses during appointments).

Conceptual facilitators and motivators

Participants described various examples of cPRO’s value. For example, they appreciated healthcare teams asking how patients are doing, which increased the perception that the teams care. Some were comforted, feeling that cPRO made their care team more informed. Regularly completing cPRO was also viewed as a way to increase participation in their own care. Participants also viewed cPRO as helpful for reflection; some described how cPRO led to expanding or organizing their thoughts about their symptoms relative to those listed in the questionnaire. They also expressed appreciation for the inclusion of psychological symptoms as an in-depth way to reflect and report on mental health. Most interview participants reported that cPRO questions were relevant to their cancer experiences (due to time constraints, this question was left out of focus groups). When asked what additional symptoms they would add to cPRO, participants offered some ideas, but we found no consistently mentioned symptoms, further suggesting cPRO’s relevance.

Conceptual barriers

While participants found cPRO questions germane to their experiences overall, they also mentioned instances where cPRO did not feel relevant. Some described how certain items did not apply to them, at the current time or before cancer (e.g., ability to do yardwork—a PROMIS item) [48]. Others felt the items did not match with what they wanted their care team to know or the reason for their visit. For example, some said evaluating symptoms would be more useful after a treatment visit rather than before seeing their oncologist. Some also questioned the value of completing symptom monitoring questionnaires after entering post-treatment survivorship.

The other primary conceptual barrier was confusion and uncertainty regarding various aspects of cPRO. One source of confusion was whether to complete cPRO items in reference to cancer-specific experiences only or to consider all factors (like aging). Participants were also unsure of the source and purpose of the questionnaire, who views responses, and how responses are used. Common misconceptions included the questionnaires being used for research or tied to evaluating patient satisfaction, rather than directly informing their care.

Practical barriers

Practical barriers reported by participants primarily pertained to (1) specific cPRO items and how they were asked and (2) lack of communication and education from the healthcare system. Participants reported growing tired of answering questions they felt were redundant, particularly post-treatment, when they felt they had nothing new to report symptom-wise. Some participants thought the ‘past 7-day’ time frame was too short and that a larger range, like one month, would be more effective, especially in capturing symptoms driven by treatment. Flexibility in how to respond was also desired, for example being able to skip items, write in additional symptoms, or indicate a desire to speak directly with their doctor about certain symptoms. Many participants wanted cPRO to include open text options to provide additional details about their Likert responses.

The other primary practical barrier was lack of acknowledgement from participants’ care teams, including lack of education about cPRO from the healthcare system. All Wave 1 participants reported seeing no educational materials regarding cPRO (or could not remember seeing any). Some expressed a need for an orientation to cPRO’s purpose and how it is used. While some participants reported various forms of acknowledgement by their care team, most stated that no one had referenced cPRO. Some wondered if their care team used or even looked at the responses. Given the lack of acknowledgment, especially during appointments, some participants came to expect they would have to repeat the same information in appointments that they reported in the cPRO questionnaires. One participant explained how she had diligently completed cPRO but stopped after having to repeat herself during appointments (Table 3).

Table 3 Participant Quotes

In summary, implementation determinants identified by participants broadly relate to the principal domains of perceived value, usability and relevance, education and communication, and care team engagement, each of which appears to be a key facilitator when present and a barrier when absent (Fig. 1). Patients saw cPRO’s unique value in its ability to monitor symptoms, facilitate reflection, boost self-efficacy, improve appointment efficiency, and strengthen their sense of care quality. In terms of usability and relevance, patients found cPRO easy to access, navigate, and complete, and felt items were relevant while desiring additional flexibility when responding. Patients had not seen educational materials (brochures and posters) and wanted more communication from their care team about cPRO’s purpose and functionality; they emphasized the importance of their care team acknowledging their results and referring to completed cPRO rather than asking the same questions again during the visit.

Wave 2 results

We first asked survey respondents about their general recall of the cPRO screener, and most (85.2%) indicated (“Somewhat” to “Very much”) that they remembered completing it. However, when asked about the purpose of cPRO, just over half (56.7%) accurately understood that cPRO results were used to inform their care team. Others were unsure (13.5%) or thought cPRO was used for research or patient satisfaction assessment (29.0%). Similarly, very few (7.0%) participants reported noticing educational materials about cPRO.

Responses about cPRO usability were uniformly positive. When asked about navigability, a majority (91.6%) found the questionnaire easy to find (“Somewhat” to “Very much”). In terms of comprehension, there was substantial endorsement (“Somewhat” to “Very much”) that the cPRO questions were easy to understand (98.7%) and easy to answer (98.6%). Participants (82.4%) also indicated (“Somewhat” to “Very much”) that the cPRO questions covered symptoms and needs relevant to them.

We asked respondents why they did or did not complete cPRO. Top reasons for completion included (1) thinking it was important for the care team to know how they were doing (46.9%), (2) being asked to complete it by a care team member (30.8%), and (3) feeling it would improve communication about symptoms and needs with their care team (30.0%). Further, 22.4% of patients thought it would improve the quality of their care. When asked to choose a top reason for completing cPRO, being asked to complete it by a member of their care team was most frequently endorsed (27.1%). While more than half (55.9%) said they complete cPRO whenever they are asked, the top reason for non-completion was lack of time (16.1%).

Cancer care team communication about cPRO was lower than anticipated; only some patients (22.5%) indicated with confidence that a member of their care team discussed cPRO. Many more said they were unsure (39.4%) or that their care team never mentioned cPRO (38.0%). Low clinician-to-patient communication was further evidenced by only 16.3% of participants reporting that cPRO results were discussed with a member of their care team “Sometimes” or “Often.” Interestingly, despite reporting low care-team engagement, almost a third (29.5%) of participants felt (“Somewhat” to “Very much”) that completing cPRO had improved communication about their symptoms and needs with their care team. Similarly, 41.5% reported (“Somewhat” to “Very much”) that completing cPRO helped them feel more in control of their care.

Finally, we asked patients to rate, according to their general experience, how often their doctor asks them about the five symptom domains included in cPRO. Although not a direct assessment of a patient-facing implementation determinant, we aimed to better understand the perceived frequency with which these symptoms are addressed during routine care, independent of the cPRO assessment (to provide contextual information). A relatively high percentage of participants reported that clinicians “Sometimes” or “Often” ask about fatigue/tiredness (72.3%), pain interference (69.5%), and physical functioning (67.1%) during appointments. Consistent with other research findings, fewer participants reported routine inquiry (“Sometimes” or “Often”) about mental health concerns (50.3% for worry/anxiety and 43.2% for sadness/low mood) [8, 49,50,51].

Results from the Wave 2 survey helped us understand the degree to which identified facilitators and barriers were endorsed or experienced by patients (see Fig. 1). Broadly, results suggest high (82–99%) endorsement of usability and relevance (items are relevant and easy to comprehend; the system is navigable), moderate (30–47%) endorsement of perceived value (cPRO improves communication at appointments and sense of self-efficacy; useful as a monitoring tool), low to moderate (7–57%) endorsement of education and communication (saw educational materials; care team communicated about cPRO; understood purpose of cPRO) and low (16%) endorsement of care team engagement (care team acknowledged/discussed cPRO results).

Fig. 1 Patient perspectives on cPRO implementation: qualitative themes (facilitators and barriers) and survey results (level of endorsement)

Initial impact

Rapid analysis of this mixed methods data set has already led to effective, measurable improvements in cPRO implementation. When we presented preliminary findings to the implementation team, they responded to patient frustration over cPRO item redundancy by designing a shorter version of the tool (moving from PROMIS computer adaptive tests to two-item short form measures that still assess the same domains), which also reduced average completion time from 6–7 min to 2–3 min. This change significantly improved completion rates (increases of 9–35% over 18 months) and also addressed the Wave 2 finding that lack of time was the top reason patients do not complete cPRO. Additionally, the study team clarified within the assessment instructions that patients should answer questions based on symptoms due to any cause, addressing patients’ uncertainty about responding to symptoms or needs driven by factors other than cancer. Finally, in response to participants’ desire for more feedback from their clinical care team about their cPRO responses, we added a mechanism into the EHR informing patients whether their clinicians saw their results (via a smartphrase incorporated into progress notes and visible in patients’ after-visit summaries).

Discussion

This mixed methods analysis explored patient perspectives of healthcare system-wide implementation of cPRO, an electronic health record-integrated patient-reported symptom and needs monitoring program, as part of routine cancer care. After conducting semi-structured discussions with patients in Wave 1, we sought to confirm and expand on emergent themes via a survey completed by a larger sample in Wave 2. Collectively, these data provided insight on patient attitudes and experiences that can inform actionable changes to cPRO implementation. Results centered on four principal domains that appear to enhance or detract from patient uptake and adherence and point to implementation strategy enhancements needed to improve reach, adoption, sustainability, and effectiveness.

Findings aligned with what we had learned anecdotally from clinicians, administrators, and patients during cPRO implementation, but there were some unexpected results that contribute to the literature on facilitators and barriers to implementing electronic patient-reported symptom monitoring programs.

First, patients found value in cPRO, including that it improved communication with their care team, despite low care team engagement. Likewise, a substantial proportion of patients (42%) indicated (“Somewhat” to “Very much”) that cPRO enhanced their sense of self-efficacy, a desirable patient-centered benefit, pointing to how symptom monitoring programs like cPRO can activate patients [52]. Specifically, patients described how cPRO facilitated thoughtful reflection on their symptoms and needs and better prepared them to communicate concerns in medical visits. This finding maps onto one of the basic principles of patient-clinician communication: “the right information” (i.e., patients sharing relevant symptoms and experiences) [53]. Finally, it is noteworthy that a quarter to over half (27.7–56.7%) of survey respondents said they were “Never” or “Rarely” asked, within routine care, about some of the most common physical and, especially, psychological symptoms reported in oncology settings. This finding highlights the general need for symptom monitoring, and the specific need for mental health surveillance, in cancer care [54].

Analysis of these data has prompted effective changes to cPRO implementation, including designing a shorter version of the tool, which led to increased completion rates. The investigators will continue to use these findings to guide additional implementation strategies focused on enhancing communication, education, and clinician engagement. Because participants reported not remembering educational materials, we plan to increase and expand such materials to help patients understand the value of regular cPRO completion. Doing so is more feasible now that handouts and other materials are allowed in clinics again, following the COVID-19 restrictions that limited widespread distribution during our data collection period. We are also working to create a patient-facing video, provide care teams with cPRO talking points, and track distribution of patient education materials.

Further, our results suggest that enhancing clinician communication and engagement via more intensive clinician-facing implementation strategies may improve compliance and patients’ cPRO-related experiences. Topics to explore include: what motivates clinicians to use cPRO results and discuss them with patients; how often clinicians discuss different symptoms during visits; how they choose what to discuss; how these discussions affect care; how the degree of clinician acknowledgement motivates patients; and whether cPRO should be addressed at all visits or more selectively.

Limitations and strengths

The time between most recent cPRO completion and feedback session participation varied across participants, which potentially impacted recall. We are also unable to generalize our findings to the entire local cancer population because most participants regularly used cPRO and had high technology literacy. Our study participants’ demographics and our implementation of cPRO in a well-resourced academic health center limit generalizability to more diverse populations and other settings [55]. In particular, this study’s sample was less diverse in terms of race and ethnicity, including compared to the larger efficacy analysis data set for the parent study, which recruited from the same clinics, limiting the generalizability of findings. Additionally, most data collection occurred during various phases of the COVID-19 pandemic, when in-clinic appointments were minimal and physicians were overburdened. As a result, in-clinic cPRO administration was largely not feasible, and we were reluctant to use higher-touch implementation strategies that demanded greater clinician effort. Finally, the finding that patients reported low frequency of clinicians discussing cPRO results needs to be examined further. Future work should examine what prompts clinicians to discuss results (e.g., severe symptoms or worsening).

This work also has various strengths. We explored patient perspectives on an EHR-integrated symptom and needs monitoring program as it was being implemented into standard care across a large academic healthcare system. By purposefully sampling patients who had engaged with cPRO to different extents, our results capture different perspectives regarding EHR-integrated symptom monitoring. Using a mixed methods approach guided by implementation frameworks, we amplified patient perspectives, which are not often highlighted within implementation processes. Further, we demonstrate how this kind of assessment (including rapid analysis), conducted while implementation was underway, can inform program improvements. That approach facilitated iterative changes to cPRO and will inform future versions that reflect patient preferences and experiences.

Conclusion

Adult oncology outpatients found completing cPRO easy to do and valuable, but they also were confused about key aspects of the tool and emphasized the importance of education and clinician engagement to motivate sustained regular completion. Their feedback offers important insight to inform actionable changes. Informed by these data, future cPRO implementation strategies and modifications should target increasing clinician engagement and patient education to further enhance perceived value, compliance, and, ultimately, higher quality, patient-centered cancer care.

Data availability

The datasets generated and/or analyzed during the current study are not yet publicly available but are available from the corresponding author on reasonable request. Once the trial has closed and its results have been disseminated, anonymized data supporting the conclusions of this and other published articles will be shared in a publicly accessible way with the research community at large.

Abbreviations

CFIR:

Consolidated Framework for Implementation Research

cPRO:

Cancer Patient-Reported Outcomes

EHR:

Electronic health record

IS:

Implementation Science

IRB:

Institutional Review Board

MMR:

Mixed Methods Research

NM:

Northwestern Medicine

PRO:

Patient-reported outcomes

QOL:

Quality of life

REDCap:

Research Electronic Data Capture

References

  1. Epstein RM, Street RL (2011) The values and value of patient-centered care. Ann Fam Med 9(2):100–103

  2. Scholl I, Zill JM, Härter M, Dirmaier J (2014) An integrative model of patient-centeredness–a systematic review and concept analysis. PLoS ONE 9(9):e107828


  3. Basch E, Schrag D, Henson S, Jansen J, Ginos B, Stover AM et al (2022) Effect of electronic symptom monitoring on patient-reported outcomes among patients with metastatic cancer: a randomized clinical trial. JAMA 327(24):2413–2422

  4. Basch E, Deal AM, Dueck AC, Scher HI, Kris MG, Hudis C et al (2017) Overall survival results of a trial assessing patient-reported outcomes for symptom monitoring during routine cancer treatment. JAMA 318(2):197–198


  5. Caruso R, Nanni MG, Riba MB, Sabato S, Grassi L (2017) The burden of psychosocial morbidity related to cancer: patient and family issues. Int Rev Psychiatry 29(5):389–402


  6. Carlson LE, Zelinski EL, Toivonen KI, Sundstrom L, Jobin CT, Damaskos P et al (2019) Prevalence of psychosocial distress in cancer patients across 55 North American cancer centers. J Psychosoc Oncol 37(1):5–21. https://doi.org/10.1080/07347332.2018.1521490


  7. Penedo FJ, Cella D (2017) Responding to the quality imperative to embed mental health care into ambulatory oncology. Wiley Online Library

  8. Pearman T, Garcia S, Penedo F, Yanez B, Wagner L, Cella D (2015) Implementation of distress screening in an oncology setting. J Community Supportive Oncol 13(12):423–428


  9. Wang T, Molassiotis A, Chung BPM, Tan J-Y (2018) Unmet care needs of advanced cancer patients and their informal caregivers: a systematic review. BMC Palliat Care 17(1):1–29


  10. Burg MA, Adorno G, Lopez ED, Loerzel V, Stein K, Wallace C et al (2015) Current unmet needs of cancer survivors: analysis of open-ended responses to the American Cancer Society Study of Cancer Survivors II. Cancer 121(4):623–630


  11. Roth LM, Tirodkar M, Patel T, Friedberg M, Smith-McLallen A, Scholle SH (2020) Patient-centered oncology care: impact on utilization, patient experiences, and quality. Am J Manag Care 26(9):372–380


  12. McInnes DK, Cleary PD, Stein KD, Ding L, Mehta CC, Ayanian JZ (2008) Perceptions of cancer-related information among cancer survivors: a report from the American Cancer Society’s studies of cancer survivors. Cancer 113(6):1471–1479


  13. Street RL Jr, Makoul G, Arora NK, Epstein RM (2009) How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns 74(3):295–301


  14. Basch E, Stover AM, Schrag D, Chung A, Jansen J, Henson S et al (2020) Clinical utility and user perceptions of a digital system for electronic patient-reported symptom monitoring during routine cancer care: findings from the PRO-TECT trial. JCO Clin Cancer Inf 4:947–957


  15. Kirchner JE, Smith JL, Powell BJ, Waltz TJ, Proctor EK (2020) Getting a clinical innovation into practice: an introduction to implementation strategies. Psychiatry Res 283:112467


  16. Hyland CJ, Mou D, Virji AZ, Sokas CM, Bokhour B, Pusic AL et al (2023) How to make PROMs work: qualitative insights from leaders at United States hospitals with successful PROMs programs. Qual Life Res 1–11

  17. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC et al (2019) Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health 7:3


  18. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC et al (2012) A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev 69(2):123–157


  19. Smith JD, Li DH, Rafferty MR (2020) The implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci 15:1–12


  20. Bauer MS, Kirchner J (2020) Implementation science: what is it and why should I care? Psychiatry Res 283:112376


  21. Smith JD, Norton WE, Mitchell SA, Cronin C, Hassett MJ, Ridgeway JL et al (2023) The longitudinal implementation strategy Tracking System (LISTS): feasibility, usability, and pilot testing of a novel method. Implement Sci Commun 4(1):153


  22. Smith JD, Hasan M (2020) Quantitative approaches for the evaluation of implementation research studies. Psychiatry Res 283:112521


  23. Wagner LI, Schink J, Bass M, Patel S, Diaz MV, Rothrock N et al (2015) Bringing PROMIS to practice: brief and precise symptom screening in ambulatory cancer care. Cancer 121(6):927–934


  24. Garcia SF, Wortman K, Cella D, Wagner LI, Bass M, Kircher S et al (2019) Implementing electronic health record–integrated screening of patient-reported symptoms and supportive care needs in a comprehensive cancer center. Cancer 125(22):4059–4068


  25. Jensen RE, Moinpour CM, Potosky AL, Lobo T, Hahn EA, Hays RD et al (2017) Responsiveness of 8 patient-reported outcomes Measurement Information System (PROMIS) measures in a large, community‐based cancer study cohort. Cancer 123(2):327–335


  26. Cella D, Riley W, Stone A, Rothrock N, Reeve B, Yount S et al (2010) The patient-reported outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005–2008. J Clin Epidemiol 63(11):1179–1194


  27. Cella D, Garcia SF, Cahue S, Smith JD, Yanez B, Scholtens D et al (2023) Implementation and evaluation of an expanded electronic health record-integrated bilingual electronic symptom management program across a multi-site Comprehensive Cancer Center: the NU IMPACT protocol. Contemp Clin Trials 128:107171


  28. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC et al (2019) RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health 7:64


  29. Glasgow RE, Vogt TM, Boles SM (1999) Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 89(9):1322–1327


  30. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC (2009) Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 4:50. https://doi.org/10.1186/1748-5908-4-50


  31. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM et al (2015) A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci 10(1):1–14


  32. Smith JD, Merle JL, Webster KA, Cahue S, Penedo FJ, Garcia SF (2022) Tracking dynamic changes in implementation strategies over time within a hybrid type 2 trial of an electronic patient-reported oncology symptom and needs monitoring program. Front Health Serv 2. https://doi.org/10.3389/frhs.2022.983217

  33. Gupta M, Bosma H, Angeli F, Kaur M, Chakrapani V, Rana M et al (2017) A mixed methods study on evaluating the performance of a multi-strategy national health program to reduce maternal and child health disparities in Haryana, India. BMC Public Health 17(1):698. https://doi.org/10.1186/s12889-017-4706-9


  34. Regnault A, Willgoss T, Barbic S, on behalf of the International Society for Quality of Life Research Mixed Methods Special Interest Group (2018) Towards the use of mixed methods inquiry as best practice in health outcomes research. J Patient-Reported Outcomes 2(1):19. https://doi.org/10.1186/s41687-018-0043-8


  35. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA (2014) Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 89(9):1245–1251


  36. Onwuegbuzie AJ, Collins KM (2007) A typology of mixed methods sampling designs in social science research. Qualitative Rep 12(2):281–316


  37. Krueger RA (2014) Focus groups: a practical guide for applied research. Sage

  38. Piovesana A, Senior G (2018) How small is big: sample size and skewness. Assessment 25(6):793–800


  39. Garcia SF, Smith JD, Kallen M, Webster KA, Lyleroehr M, Kircher S et al (2022) Protocol for a type 2 hybrid effectiveness-implementation study expanding, implementing and evaluating electronic health record-integrated patient-reported symptom monitoring in a multisite cancer centre. BMJ open 12(5):e059563


  40. Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O’Neal L et al (2019) The REDCap consortium: building an international community of software platform partners. J Biomed Inform 95:103208


  41. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG (2009) Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 42(2):377–381


  42. Nevedal AL, Reardon CM, Opra Widerquist MA, Jackson GL, Cutrona SL, White BS et al (2021) Rapid versus traditional qualitative analysis using the Consolidated Framework for Implementation Research (CFIR). Implement Sci 16(1):1–12


  43. Lewinski AA, Crowley MJ, Miller C, Bosworth HB, Jackson GL, Steinhauser K et al (2021) Applied rapid qualitative analysis to develop a contextually appropriate intervention and increase the likelihood of uptake. Med Care 59(6 Suppl 3):S242


  44. Hamilton A (2013) Qualitative methods in rapid turn-around health services research. Health services research & development cyberseminar

  45. Hsieh H-F, Shannon SE (2005) Three approaches to qualitative content analysis. Qual Health Res 15(9):1277–1288


  46. Tabachnick BG, Fidell LS, Ullman JB (2013) Using multivariate statistics. Pearson, Boston, MA

  47. Bowen GA (2008) Naturalistic inquiry and the saturation concept: a research note. Qualitative Res 8(1):137–152


  48. Cella D, Riley W, Stone A, Rothrock N, Reeve B, Yount S et al (2010) The patient-reported outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005–2008. J Clin Epidemiol 63(11):1179–1194. https://doi.org/10.1016/j.jclinepi.2010.04.011


  49. Kanani R, Davies EA, Hanchett N, Jack RH (2016) The association of mood disorders with breast cancer survival: an investigation of linked cancer registration and hospital admission data for South East England. Psycho-oncology 25(1):19–27


  50. Mitchell AJ, Chan M, Bhatti H, Halton M, Grassi L, Johansen C et al (2011) Prevalence of depression, anxiety, and adjustment disorder in oncological, haematological, and palliative-care settings: a meta-analysis of 94 interview-based studies. Lancet Oncol 12(2):160–174


  51. Cella D, Peterman A, Passik S, Jacobsen P, Breitbart W (1998) Progress toward guidelines for the management of fatigue. Oncol (Williston Park NY) 12(11A):369–377


  52. Howell D, Rosberger Z, Mayer C, Faria R, Hamel M, Snider A et al (2020) Personalized symptom management: a quality improvement collaborative for implementation of patient reported outcomes (PROs) in ‘real-world’ oncology multisite practices. J Patient-Reported Outcomes 4(1):1–13


  53. Paget L, Han P, Nedza S, Kurtz P, Racine E, Russell S et al (2011) Patient-clinician communication: basic principles and expectations. NAM Perspect

  54. McFarland DC, Holland JC (2016) The management of psychological issues in oncology. Clin Adv Hematol Oncol 8:13–16


  55. Dzimitrowicz HE, Blakely LJ, Jones LW, LeBlanc TW (2022) Bridging new technology into clinical practice with mobile apps, electronic patient-reported outcomes, and wearables. Am Soc Clin Oncol Educ Book 42:94–9


Acknowledgements

REDCap is supported by the Northwestern University Clinical and Translational Science (NUCATS) Institute. Research reported in this publication was supported, in part, by the National Institutes of Health’s National Center for Advancing Translational Sciences, Grant Number UL1TR001422. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding

This work was supported by the Agency for Healthcare Research and Quality (AHRQ) grant number R18HS026170 and National Cancer Institute grant number UM1CA233035. LMP was supported by the National Institutes of Health/National Cancer Institute training grant T32CA193193.

Author information

Authors and Affiliations

Authors

Contributions

SFG and FJP conceived of the overall trial design, were awarded the grant supporting this work, and provided critical edits to the manuscript. SFG also provided critical reviews of the qualitative guides and survey design, and oversight of related analyses. MJL contributed extensively to the design, execution, analysis, and interpretation of the reported work, wrote sections of the manuscript, and provided critical edits to the manuscript. KAW contributed to the design, analysis, and interpretation of the reported work, wrote sections of the manuscript, and provided critical edits to the manuscript. EAP contributed to the execution and analysis of the reported work, wrote sections of the manuscript, and provided critical edits to the manuscript. LMP contributed to data analysis and interpretation, as well as writing and editing of the manuscript. JC contributed substantially to acquisition of the data analyzed in this work. JDS helped conceive and design the trial. Each author approved all components of the submitted manuscript and agrees to be personally accountable for its accuracy and integrity. DC made substantial contributions to the conception and design of the work, in addition to providing revisions to the manuscript.

Corresponding author

Correspondence to Sofia F. Garcia.

Ethics declarations

Ethics approval and consent to participate

This study has undergone rigorous scientific evaluation via the Agency for Healthcare Research and Quality (AHRQ) peer-review process, and the protocol has been approved by the Social and Behavioral Research Panel of Northwestern University’s Institutional Review Board (IRB; study number STU00207807). All study sites fall under a single IRB reliance agreement. All component human subjects research has been deemed ‘low risk,’ and participation required signed informed consent via an IRB-approved consent form.

Consent for publication

Not applicable.

Competing interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Lyleroehr, M.J., Webster, K.A., Perry, L.M. et al. A mixed methods evaluation of patient perspectives on the implementation of an electronic health record-integrated patient-reported symptom and needs monitoring program in cancer care. J Patient Rep Outcomes 8, 66 (2024). https://doi.org/10.1186/s41687-024-00742-8
