How do patient reported outcome measures (PROMs) support clinician-patient communication and patient care? A realist synthesis


Abstract

Background

In this paper, we report the findings of a realist synthesis that aimed to understand how and in what circumstances patient reported outcome measures (PROMs) support patient-clinician communication and subsequent care processes and outcomes in clinical care. We tested two overarching programme theories: (1) PROMs completion prompts a process of self-reflection and supports patients to raise issues with clinicians and (2) PROMs scores raise clinicians’ awareness of patients’ problems and prompt discussion and action. We examined how the structure of the PROM and the care context shaped the ways in which PROMs support clinician-patient communication and subsequent care processes.

Results

PROMs completion prompts patients to reflect on their health and gives them permission to raise issues with clinicians. However, clinicians found that standardised PROMs completion during patient assessments sometimes constrained rather than supported communication. In response, clinicians adapted their use of PROMs to render them compatible with the ongoing management of patient relationships. Individualised PROMs supported dialogue by enabling the patient to tell their story. In oncology, PROMs completion outside of the consultation enabled clinicians to identify problematic symptoms when the PROM acted as a substitute for, rather than an addition to, the clinical encounter and when the PROM focused on symptoms and side effects rather than health related quality of life (HRQoL). Patients did not always feel it was appropriate to discuss emotional, functional or HRQoL issues with doctors, and doctors did not perceive this was within their remit.

Conclusions

This paper makes two important contributions to the literature. First, our findings show that PROMs completion is not a neutral act of information retrieval but can change how patients think about their condition. Second, our findings reveal that the ways in which clinicians use PROMs are shaped by their relationships with patients and by professional roles and boundaries. Future research should examine how PROMs completion and feedback shapes, and is influenced by, the process of building relationships with patients, rather than just their impact on information exchange and decision making.

Introduction

The clinician-patient relationship has been an enduring focus of research across many disciplines and has received considerable attention from policy makers internationally. Efforts to increase patient involvement in decision making about their care [1, 2] are expected to improve patient well-being and health outcomes [3]. Changing the ways in which clinicians and patients communicate with each other, in particular, the practice of patient centred communication, is cast as one of the mechanisms through which these outcomes will be realised [4, 5]. The completion of patient reported outcome measures (PROMs) by patients and the feedback of these data to clinicians is one intervention that has been argued to support communication between clinicians and patients and, in turn, improve care processes and outcomes [6, 7].

There are many quantitative systematic reviews of PROMs feedback in the care of individual patients but they have struggled to draw definitive conclusions about its impact [8,9,10,11,12]. Most reviews limited their inclusion criteria to randomised controlled trials (RCTs) to examine whether (rather than how or why) PROMs feedback ‘works’. One pattern evident in quantitative reviews is that PROMs feedback has a greater impact on clinician-patient communication, the provision of advice or counselling and the detection of problems than on patient management and subsequent patient outcomes. However, why and how this pattern of impact occurs has rarely been explored [13]. More recently, reviews have included qualitative studies of clinician and patient experiences of using PROMs to synthesise evidence on their implementation and use [14,15,16]. While these reviews provide a useful summary of barriers and facilitators, they do not explain why barriers in one context may be facilitators in another [15] nor the mechanisms through which these barriers or facilitators work. An alternative review methodology is required that can address this complexity.

In this paper, we report the findings of a realist synthesis that aimed to understand how and in what circumstances PROMs support patient-clinician communication and subsequent care processes and outcomes in clinical care. First, we describe realist synthesis and the methodology of the review. Second, we outline the programme theories which constitute the anticipated mechanisms through which PROMs may (or may not) support clinician-patient communication and subsequent care processes. Third, we present the findings of our evidence synthesis. Finally, we discuss our findings in the context of broader debates about how respondents make sense of survey items and about clinician-patient communication, and consider the implications of our findings for future research and clinical practice.

Methodology

Realist synthesis is a review methodology based on the premise that interventions embody ideas and assumptions (programme theories) about how and why they are supposed to work [17]. Interventions offer (or remove) resources, and the ways in which participants respond to these resources (mechanisms) determine the outcomes of the intervention. These responses are shaped by the design of the intervention itself and the circumstances into which the programme is implemented (context). Our realist synthesis aimed to identify, test and refine programme theories to build explanations of how and why context shapes the mechanisms through which PROMs use may support clinician-patient communication and subsequent care processes. Our protocol was published [18] and was registered with PROSPERO (registration number 42013005938). We followed the RAMESES I guidelines to report the synthesis [19]. A full report of the synthesis was published by the funder in the NIHR journals library [20]. This paper builds on that report in three important ways. First, it offers an extended analysis of the existing literature through a closer interrogation of context and outcome patterns in an existing systematic review [8]. Second, it provides an updated review of recent literature: further searches were conducted to capture studies published since 2016, and nine additional papers were included in the synthesis [21,22,23,24,25,26,27,28,29]. Third, it advances the interpretation of our findings by drawing on theory from the philosophy of measurement [30] and current debates in clinician-patient communication [31].

Realist synthesis is iterative, but for simplicity we describe it here as having two main phases: (1) theory identification and (2) theory testing and refining. The first phase of our synthesis sought to identify the programme theories underlying the use of PROMs in clinical practice. We identified these theories through an analysis of policy documents, commentaries, reviews, comments, letters and editorials. Papers were identified by drawing on references used to write the research funding proposal (our ‘personal library’), through searches of electronic databases in October 2014 (search strategy in Additional file 1) and through citation tracking of these papers. In total, 39 papers contributed to the development of the initial programme theories (Fig. 1). Other screened papers also contributed to our thinking; however, the 39 ‘included’ papers were deemed to provide the clearest exemplars of programme theory in the final synthesis. We also held a two-hour workshop with clinicians, managers, policy-makers and patients to verify and expand the theories.

Fig. 1

Selection of papers that contributed to the development of the initial programme theories

The second phase of our synthesis involved testing and refining these programme theories through an examination of empirical studies to understand how, why and in what circumstances the purported benefits of PROMs are realised in practice. We identified this empirical evidence as follows. We used forward citation searches of six key papers [10, 11, 13, 15, 32, 33]. These included four systematic reviews [10, 11, 13, 15] that were chosen because they represented the use of PROMs across different care settings and incorporated quantitative, qualitative and theory driven reviews. These key papers also included a qualitative study chosen because it explicitly explored the use of PROMs within clinician-patient interactions [32] and a study of the use of needs assessments in health visiting [33], chosen because it was an intervention which shared similar programme theories but drew on sociological theories to interpret the findings. We also searched the reference lists of five systematic reviews [8, 11, 13, 15, 34] which represented a range of clinical contexts and review methodologies and one of the above key papers [32]. These searches were undertaken using Web of Science Core Collection Citation Indexes (Thomson Reuters) in May 2015. As the synthesis progressed, we conducted supplementary searches, including key author searches and additional citation tracking of key papers and systematic reviews, to identify related studies [35, 36]. In February 2018, we updated the forward citation tracking of the six key papers [10, 11, 13, 15, 32, 33] using both Scopus and Google Scholar citation tools to capture studies published since the original searches were carried out. We also conducted a forward citation search for a systematic review that played a key role in testing theory 2 [8].

Study selection, data extraction, quality assessment, synthesis and additional literature searching occurred simultaneously. Study selection was undertaken by JG and discussed and agreed with KG and EG. Studies were selected on the basis of their contribution to theory testing, using a set of inclusion and exclusion criteria as a guide (Additional file 2). In some instances, the whole study contributed to theory testing and in others, only a fragment or fragments of the study were relevant to the theory. Each fragment of evidence was appraised, as it was extracted, for its relevance to theory testing and the rigour with which it had been produced [37]. The rigour of randomised controlled trials was assessed using the Cochrane risk of bias tool [38]. Formal checklists were not used to appraise the qualitative studies; as others have noted, qualitative research methods vary widely and rigour in qualitative studies is often not reducible to technical fixes and checklists [39,40,41]. Therefore, quality appraisal represented a judgement based on the particular methods used and related specifically to the validity of the causal claims made in this subset of findings, rather than the study as a whole. Trust in these causal claims was also enhanced by the accumulation of evidence from a number of different studies, which provided further lateral support for the theory being tested. Papers were summarised using a data extraction template to facilitate within- and cross-paper analysis. The ongoing synthesis was discussed at regular meetings with the wider project group (JMV, NB and SD).

In total, 46 papers were included in the evidence synthesis of which 42 are drawn on for this paper (Table 1); the remaining four tested theories about patients’ views on the length of PROMs, which is not discussed in this paper. Figure 2 shows how the papers were selected for our original report (n = 37) and Fig. 3 shows how the papers were selected from our February 2018 forward citations searches (n = 9). Details of how we synthesised the evidence for each theory are provided in the ‘Findings: Theory Testing and Refining’ section.

Fig. 2

Process of paper review and selection for synthesis reported in this paper

Fig. 3

Selection of papers from citation searches conducted in February 2018

Findings: Theory identification

‘Positive’ programme theories

The review of programme theories indicated several hypothesised roles for PROMs in the care of individual patients that have evolved and changed over time [6, 42, 43]. These include identifying patients with anxiety or depression, assessing patient needs, monitoring the outcomes and side effects of treatment, informing goal setting, supporting shared decision making and enabling patients to self-manage long term conditions [44]. Drawing on previous work [13, 45], we developed an initial diagram of the PROMs feedback ‘implementation chain’ to illustrate the pathways through which proximal, intermediate and distal outcomes are thought to be achieved (Fig. 4). We decided that self-tracking using PROMs to support patient self-management [46] and use of PROMs by clinicians without discussion with patients were beyond the scope of the review and concentrated on how PROMs supported interactions between patients and clinicians. Previous systematic reviews of PROMs feedback suggested there is a ‘blockage’ between communication and action along these pathways [8, 47]. Therefore we focused on understanding how PROMs feedback supports clinician-patient communication and subsequent care processes; we wanted to understand what happens ‘inside the arrows’ shown in Fig. 4. We identified two dominant, overarching programme theories about the mechanisms through which PROMs might support clinician-patient communication, though each theory also encapsulated a number of ‘sub’ theories:

Fig. 4

PROMs feedback in the care of individual patients: Implementation chain at start of synthesis

Theory 1: PROMs completion supports patients to raise issues with clinicians

We identified several mechanisms through which this may occur. PROMs completion may prompt patients to engage in a process of self-reflection and help them to identify what is important to them [45, 48, 49]. It may also empower patients or give them ‘permission’ to raise issues with clinicians [45, 48].

Theory 2: PROMs scores raise clinicians’ awareness of patients’ problems

PROMs may offer a systematic and comprehensive assessment of patients’ perceptions of their symptoms, functioning or HRQoL. PROM scores alert clinicians to issues that, it is assumed, they were previously unaware of [50,51,52]. This is expected to prompt clinicians to explore these issues with patients; implicit here is the assumption that clinicians value and discuss patients’ experiences as well as biomedical information [53, 54].

These theories are not necessarily mutually exclusive but represent different stages of the implementation chain (Fig. 4) and constitute different understandings of the function of PROMs. Theory 1 emphasises the process of PROMs completion, whereas in theory 2, the PROM score is assumed to act like a test result, similar to biomedical indicators. In turn, these are expected to support care processes.

‘Counter’ programme theories

We identified a number of ‘counter’ or ‘opposing’ programme theories that challenged the assumptions underlying the theories discussed above. First, some queried whether the content and structure of standardised PROMs adequately capture and reflect patients’ views [49, 55, 56]. Second, the assumption that clinicians do not effectively elicit information from patients or insufficiently engage with their emotional cues has been questioned [57,58,59]. Third, it has been argued that PROMs may not provide clinicians with any new information about patients over and above talking to them [59, 60]. For example, Salander [57] observes there is a fundamental difference between screening for breast cancer and screening for psychological distress. The presence of a tumour is a ‘biomedical fact’ that was hitherto unknown to the patient, whereas the insight revealed by a distress questionnaire depends on what a patient chooses to disclose; disclosure that, it is argued, could also occur through open dialogue. Finally, PROMs may offer an inferior substitute for meaningful, nuanced communication, and their structured nature may divert attention away from patients’ concerns and reinforce the clinicians’ agenda [59, 61].

Context

The programme theories above largely focus on mechanisms and some draw on models that elucidate the pathways through which communication is anticipated to influence patient care and outcomes [62, 63]. However, such models often reduce complex social processes to variables, assume a linear, universal relationship between them and do not consider how context may shape these pathways [31, 61, 64]. We wanted to understand what contextual conditions are hypothesised to shape the ways in which PROMs may support clinician-patient communication to inform the process of theory testing within our evidence synthesis.

First, the content and structure of the PROM and the ways it is administered and fed back is likely to shape its impact on clinician-patient communication and subsequent care processes. Although many PROMs blur the two, clinicians may respond differently to information on patients’ symptoms compared to data on HRQoL [59]. It has been argued that individualised PROMs may be more appropriate than standardised PROMs for use in routine clinical practice, as they allow patients to nominate domains of most relevance and indicate the relative importance of each domain [119, 120]. The process of completing individualised instruments such as the Schedule for the Evaluation of Individual Quality of Life (SEIQoL) is envisaged to provide the ‘therapeutic foundation’ for goal setting and developing the clinician-patient relationship [121]. However, others note the requirement to distil complex and dynamic experiences into a score renders such measures as reductionist as standardised measures [49].

Second is the care setting or context; this encompasses a number of interrelated sets of practices, norms and relationships that may shape how PROMs are used and how participants respond [65]. Salmon and Young [31] highlight differences between care settings in the extent to which clinicians are expected to emotionally engage with patients. For example, in a mental health context, there is a presupposition that clinicians engage with patients emotionally as part of the care and treatment process, whereas in an oncology context, patients expect their oncologist to cure or manage their tumour in order to preserve life. However, others consider that recognising and explicitly responding to patients’ emotional concerns is central to the care of cancer patients [62]. Related to this, Lafata et al. [64] observe that the purpose of care, and thus the nature of clinician-patient communication, changes over time throughout a patient’s journey; in cancer care, for example, from screening to treatment to advanced cancer care. Thus, PROMs are likely to play a different role in supporting clinician-patient communication during initial assessments compared to during active treatment or end of life care. Finally, this is also framed by differences in care delivery across settings; for example, in specialist mental health care, patients may see the same therapist over time, whereas in cancer care, they may see different oncologists and different healthcare professionals. This also influences the nature of the relationships, and the relationship building process, between patients and clinicians across settings.

Findings: Theory testing and refining

Synthesis: Testing the theories

In the second phase of our synthesis, we sought to explore whether, how and why the programme theories and counter theories are realised in practice by reviewing empirical evidence and refining our theories in light of that evidence. To test and refine theory 1, we compared the findings of studies examining clinicians’ and patients’ experiences of using standardised and/or individualised PROMs across three different settings (primary care, specialist mental health care and cancer care), which we theorised represented a range of care contexts and, in particular, different configurations of care delivery and clinician-patient relationships. Much of the evidence we reviewed to test this theory involved the use of PROMs within the care encounter or where clinicians were responsible for approaching patients to request they complete a PROM. To test theory 2, we focused on oncology, as there is a high volume of literature [8, 16, 47, 66] and it has been the focus of debate regarding the role of explicit emotional communication in the care of patients [31]. We examined how PROMs influence communication and subsequent care, drawing on systematic reviews and RCTs, qualitative studies and studies of interactions in oncology consultations. Most of the RCTs involved PROMs completion outside the consultation. Table 1 sets out how each included study contributed to the process of theory testing and refinement.

Table 1 Studies used to test theories 1 and 2

Theory 1: PROMs support patients to raise issues with clinicians

Across all contexts, we found evidence to support the theory that PROMs completion prompts patients to engage in self-reflection [21, 25, 26, 67,68,69,70,71], enables them to identify what is important to them and helps them develop a deeper understanding of how their condition has affected their life [69,70,71,72]. However, this depended on the care context. In palliative care, patients found PROMs completion an emotional experience, and the degree to which they engage may depend on their preferred coping strategy; patients who cope by denying their current situation may avoid completing PROMs or not report the true extent of their feelings [68]. Furthermore, frequent PROMs completion by terminally ill patients without formal channels of feedback to clinicians can reduce patients’ HRQoL [73]. In contrast, being asked to complete a PROM when responses are fed back to clinicians can signal to patients that someone is interested in their feelings [67, 68, 74, 75] and gives them ‘permission’ to share or raise issues with clinicians [25, 69,70,71].

We then tested whether the structure of the PROM shaped patients’ experiences of completing it and how well patients and clinicians perceived PROMs captured patients’ problems. In primary care and specialist mental health care, some patients felt that standardised PROMs simply did not fully capture the complexity or dynamic nature of their symptoms, particularly patients with mental health problems [67, 75, 76]. These observations were shared by clinicians [67, 77,78,79], who expressed concern that the wording of some PROMs upset or alienated patients [29, 80, 81]. Studies that directly compared individualised and standardised PROMs found that patients felt the former had greater validity and were less distressing [82]; clinicians also preferred individualised measures [70]. However, qualitative studies have noted that cues are co-produced by patients and clinicians during individualised PROMs completion [83] and that the process of reducing these cues to a score can result in a loss of meaning [83, 84].

Next, we examined how PROMs structure and care context shaped patients’ and clinicians’ experiences of PROMs as a means for patients to raise issues with clinicians. In primary care, some patients felt that the ‘impersonal’ nature of standardised PROMs was helpful in enabling them to share issues [67]. Similarly, in specialist mental health care, clinicians [29] and service users [21] perceived that patients liked the structured nature of PROMs as it gave a framework for discussion and made talking about problems easier. While patients were generally supportive of the use of standardised PROMs as a means of enabling them to share their experiences, clinicians expressed some reservations. In primary care, GPs perceived the use of standardised PROMs for identifying patients with depression as detrimental to clinician-patient communication because they ‘trivialised’ patients’ emotions and resulted in ‘bombarding’ patients with questions in a ‘mechanistic’ way [77, 78, 85]. GPs also found it difficult to incorporate PROMs completion and review into the natural flow of consultations [77, 85]. In specialist mental health care, clinicians expressed concern that asking patients to complete PROMs to comply with the reporting requirements of the Improving Access to Psychological Therapies programme when clients did not wish to complete them was detrimental to the therapeutic alliance [29]. In palliative care, the picture was mixed. In some studies, nurses perceived that standardised PROMs constrained relationship building when they were used during first assessments [86] or routine visits [80, 81]. The difficulties clinicians reported echoed the ‘interactional strangeness’ of the standardised survey interview, where standardisation is required to support the psychometric validity of the PROM but at the same time restricts opportunities for sense making [87]. However, other studies found that when standardised PROMs were completed jointly by the clinician and patient on a tablet, this opened up opportunities for a conversation about the patient’s answers [26].

We found some evidence to suggest that in palliative care and mental health care, individualised PROMs were perceived as supporting communication by enabling the patient to tell their story in their own words [21, 69, 70]. However, clinicians struggled to use the scores produced to track change over time, as the issues patients nominated changed [70]. These findings mirror studies of individualised PROMs completion outside of clinical settings discussed previously [83, 84]. They provide further lateral support for Theory 1, suggesting that the process of PROMs completion, rather than the score itself, is the stimulus for discussion.

Finally, we explored how clinician-patient relationships shaped the ways in which clinicians used PROMs in their interactions with patients. We found that across all care contexts, clinicians and patients felt that a trusting relationship was necessary to support the sharing of concerns and problems [67, 70, 75,76,77,78,79, 85, 86, 88]. Clinicians placed great emphasis on developing rapport and a trusting relationship with patients through verbal interactions and preferred to let patients ‘tell their story’ in their own words [27, 77, 78, 85, 86]. In secondary mental health care, patients were reluctant to share their feelings through PROMs completion until this relationship had been developed [75, 76]. In palliative care, clinicians used a number of strategies to manage the process of completing a PROM in a way that preserved their relationship with patients. These included completing the PROM alongside patients [26], delaying the use of standardised PROMs to assess patients’ needs until they perceived a relationship had been sufficiently built [86], avoiding using them at all [81] or omitting or changing items to avoid upsetting patients [80]. Thus, clinicians adapted their use of PROMs to render them compatible with the ongoing management of patient relationships.

Theory 2: PROMs raise clinicians’ awareness of patients’ problems

Theory 2 hypothesised that PROM scores alert clinicians to patients’ problems and in turn prompt discussion and subsequent care processes. To test and refine this theory, we began by identifying patterns in the impact of PROMs on communication and patient outcomes in oncology within an existing systematic review [8]. First, we explored whether there were any similarities in context between the RCTs that demonstrated a positive impact on patient outcomes and those that revealed no impact. We identified a notable shift over time in the type of PROM used to provide feedback within RCTs; earlier trials evaluated the use of PROMs which measured patients’ functioning, HRQoL and symptoms [89, 90], whereas more recent trials have largely fed back symptom measures [23, 91]. In addition, in earlier trials, PROMs data were fed back to the clinician just before or during the consultation and provided additional information to inform discussion, whereas in later trials, feedback occurred in between clinic visits, enabling more frequent monitoring of patients. Chen et al. [8] found that of the 15 studies that reported any impact on patient outcomes, 13 reported some positive effect, of which nine involved the feedback of symptom measures rather than HRQoL. Improvements in physical symptoms and chemotherapy side effects were most common. Feedback often occurred between clinic visits and thus acted as a substitute for more frequent follow up. An RCT published subsequent to the systematic review showed a similar pattern, revealing that feedback of patient reported symptoms and side effects resulted in reduced symptom severity, improved health related quality of life (as measured by the EQ-5D), receipt of active chemotherapy for longer, fewer visits to the emergency department and increased survival [22, 23]. Two studies from the systematic review found no impact on outcomes; both involved the feedback of HRQoL measures [92, 93]. This tentatively suggests that clinicians may be more likely to respond when feedback provides them with information on symptom severity rather than on HRQoL, and when it acts as a substitute for, rather than an addition to, clinical encounters.

To further test this emerging explanation, we conducted a more detailed analysis of the impact of PROMs feedback on communication within the studies included in Chen et al.’s [8] review. Chen et al. [8] found that 21 out of 23 studies reported a ‘positive effect’ on communication. However, who was asked and the ways in which this positive impact was measured varied. The majority of studies relied on retrospective single item questions about satisfaction with communication or on questionnaire surveys and interviews. A small number of trials, which examined the feedback of PROMs during systemic cancer therapy, audio-recorded consultations and subjected them to content analysis [89, 90, 94, 95]. What participants think or recall being discussed, or how they experience an interaction, can depend on who is asked (the patient or the clinician) and can differ from what the analysis of tape recordings reveals was actually discussed [96,97,98]. Few of the trials adopted more than one method. In our synthesis, we focused on those studies that had conducted a detailed analysis of interactions within oncology consultations. This enabled us to explore subtle but important variations in what was discussed, by whom and when.

Takeuchi et al. [95] conducted a detailed analysis of consultations recorded within Velikova et al.’s [89] trial, which involved feedback of a cancer specific HRQoL measure (the EORTC QLQ-C30) and the Hospital Anxiety and Depression Scale. They found that the difference in the number of symptoms discussed between the control (no PROM) and intervention groups was largest during the first consultation, suggesting PROMs are most likely to provide ‘new’ information to the clinician at this point. They found no differences in the number of functional impairments discussed. While the severity of symptoms was predictive of whether they were discussed, there was no relationship between the severity of functional impairments and the likelihood of their being discussed. They also observed that discussion of symptoms and functions was predominantly initiated by patients (with the exception of dyspnoea and bowel habits) and that PROM feedback did not prompt oncologists to increase their enquiries about patients’ problems. Similarly, Berry et al. [94] found that whether symptoms were discussed depended on their severity. However, clinicians continued to focus on common side effects of chemotherapy, irrespective of whether or not these were reported as a severe problem by patients.

These findings provide some support for Theory 1: PROMs may support clinician-patient communication by giving patients 'permission' to raise issues with clinicians. They also suggest that although PROMs may not lead clinicians to initiate a discussion about either patients' symptoms or functional status, they enable clinicians to identify symptoms that are particularly severe for patients. They further indicate that patients' functional status is less likely than symptoms or biomedical issues to be explicitly discussed as a result of PROMs feedback, and that the overall focus of the consultation remains on managing chemotherapy side effects and reviewing treatment effectiveness. The next phase of our synthesis examined possible explanations for these findings by reviewing qualitative studies and surveys that explored patients' and clinicians' experiences and views of using PROMs in cancer care.

One explanation is that clinicians or patients (or both) do not feel that consultations during chemotherapy are an appropriate context for discussion of emotional, social or non-biomedical issues [95]. Detmar et al. [99] found that patients and doctors saw physical symptoms as clearly within the doctor's remit and were willing to discuss them in the consultation, while none of the doctors were willing to initiate discussion about emotional issues on their own. Taylor et al. [96] found that although many patients and doctors felt it appropriate to raise discussion about emotional issues, in reality, emotional issues were only mentioned in 27% of consultations and led to a discussion in fewer than half of these instances. While fewer patients and clinicians felt it was appropriate to raise concerns about social functioning in the consultation, these issues were actually raised more frequently (46% of the time) than emotional problems [96]. The authors hypothesise that this is because problems with social functioning are more likely to be caused by the physical impact of cancer, which oncologists see as within their remit and the purpose of the consultation. Surveys have also found that clinicians were concerned patients may raise issues not related to cancer [24].

Another explanation of these findings is that doctors do not see the explicit discussion of functional or emotional issues as falling within their remit [62]. Surveys of cancer care professionals have shown that a higher percentage of nurses than physicians expressed positive attitudes to the value of PROMs in supporting patient care [28]. Qualitative studies have found that patients see the doctor's remit as focusing on biomedical issues and are unsure whether it is the doctor's role to address emotional or functional issues [100, 101]. Doctors also acknowledged that emotional issues are not routinely discussed and said they would not enquire about them unless patients volunteered information. Doctors, especially surgeons, felt their remit was treating the patient's cancer, and although they felt able to deal with emotional issues related to clinical problems, they felt that it was the nurses' role to address wider emotional issues, a view shared by nurses [101]. Similarly, Greenhalgh et al. [32] observed that PROM scores do not distinguish between 'problems related to cancer' and 'other problems'. As a result, oncologists had to reconcile PROMs scores with patients' verbal reports by inviting patients to account for high PROMs scores. Thus, to make sense of PROMs scores, clinicians needed to explore how and why patients had arrived at their answers. While these strategies opened up a discussion between patient and doctor, these discussions tended to be closed down when patients' accounts revealed that the issue was not problematic for them or was not related to cancer.

Discussion

In this section we discuss and explain the main findings of our review in the context of broader debates about meaning in survey completion and clinician-patient communication. We note the limitations of our review and finally consider the implications of our findings for the use of PROMs in clinical practice.

Main findings

With regard to the theory that PROMs completion supports patients to raise issues with clinicians (Theory 1), we found that, for both standardised and individualised PROMs, the process of PROMs completion prompts patients to reflect on their health and, in doing so, patients develop a deeper understanding of how their condition affects them. We also found that PROMs completion can enable patients to raise issues with clinicians by providing a framework for discussion and giving them 'permission' to raise issues, as it can signal that the clinician is interested in their views. This suggests that the process of PROMs completion is not simply a task of information retrieval, nor is it a neutral, inert activity of obtaining structured, standardised information from patients. Rather, the ways in which patients interpret questions and construct their answers are shaped by social and cultural factors and can affect the ways in which patients understand, frame or think about their condition [49, 87]. Drawing on the work of Gadamer [122], McClimans [123] offers a theoretical account of the PROMs completion process that can explain our findings. She argues that PROMs ask 'genuine questions', that is, questions which open up inquiry into the subject matter at hand but also the meaning of that subject matter. In order to answer a PROM item, respondents must infer both the subject matter of the question and the meaning of the subject matter implied by the question. McClimans [123] and others [102] observe that respondents bring their own understandings of that construct to bear on the question and attempt to understand PROMs items by relating these items to their own lives. In doing so, they may find that their understanding of that subject matter, that is, how their condition is affecting their symptoms, functioning and health related quality of life, is transformed.
Thus, it is through these processes that PROMs provide an opportunity for respondents to reflect on their health and come to a deeper understanding of how their own condition affects them or what is most important to them. Furthermore, when patients are asked to complete a PROM, they often assume that this is because the clinician is interested in the findings [103]; this may signal to the patient that items contained in the PROM are appropriate topics for discussion, thus giving patients ‘permission’ to raise them.

In contrast, clinicians across a range of clinical settings found that using a standardised PROM during initial assessments could constrain, rather than support, communication and interfere with the process of managing relationships with patients, while individualised PROMs supported this dialogue. The ways in which PROMs data are socially produced can also explain how and why the structure of the PROM shapes clinicians' experiences of using PROMs in clinical practice. As Mallinson [87] noted, when a PROM is completed within an interview, an additional layer of social interaction is brought into play. Standardised PROM completion is unlike the usual flow of conversation and is different to the interaction which occurs within consultations [104]. The direction of questioning is one way, and the wording of the questions should not be altered, otherwise the validity of the PROM, as underpinned by psychometric testing, is threatened. Thus, as Mallinson [87] observes, the standardised survey interview creates an 'interactional strangeness' where 'most of the mechanisms to check meaning are suppressed'. Other studies have also found that standardised checklists and frameworks can narrow discussion and disrupt the process of managing and building relationships with patients [33, 105, 106]. In contrast, Krawczyk et al. [26] showed that when standardised PROMs were used in a 'relational' way, that is, nurses sat with patients as they completed the PROM and probed patients' answers as they were produced, a dialogue with the patient was opened up. Similarly, individualised PROMs appeared to mimic the more open structure clinicians used in their interactions with patients, allowed patients to 'tell their story' in their own words and provided opportunities to check meanings. Thus, it is the interaction between the material properties of the PROM and the existing social relations that shapes how PROMs support or detract from clinician-patient relationships [26].

We tested Theory 2 by focusing on oncology, where, in the majority of studies, patients completed a PROM prior to the consultation. PROMs acted like a test result, prompting clinicians to discuss problems with patients and offer support for symptom management, when the PROM functioned as a substitute for, rather than an addition to, the clinical encounter and when the PROM focused on symptoms and side effects rather than HRQoL. Following PROMs feedback, consultations with doctors largely focused on symptoms and side effects, rather than on patient functioning. Patients did not always feel it was appropriate to discuss functional and HRQoL aspects with doctors, and doctors did not perceive this was within their remit. In contrast, nurses felt discussion of such issues fell to them. These findings reflect the wider literature on communication in oncology [107,108,109,110] and have often been interpreted as doctors not being 'patient centred' or lacking concern for patients' emotional well-being [31]. However, recent studies have shown that patients can feel emotionally supported without explicit emotional talk within oncology consultations because they view doctors as experts who have the knowledge and authority to treat them [98, 111, 112]. Salmon and Young [31] argue that rather than seeing biomedical talk as an attempt to avoid engaging emotionally with patients, it represents doctors meeting their responsibility to provide emotional support to patients with cancer by treating the disease.

Limitations

Our findings for Theory 2 may only be generalisable to oncology settings, as we tested this theory using only empirical evidence from oncology. To test this theory further, it would be valuable to contrast oncology with the ways in which clinicians respond to PROMs scores in (for example) psychotherapy and specialist mental health care, as explicit emotional discussion is a central feature of therapy. We were not able to do this in our current review due to time and resource constraints. Similarly, we did not review the emerging literature on the use of PROMs to support patient care in orthopaedics [113, 114]. We also recognise that we did not test and refine all of the programme theories underlying how PROMs are thought to support the care of individual patients. For example, we did not explore whether and how PROMs enable shared decision-making or support patient activation in the self-management of long term conditions [46]. Although we identified some key contextual conditions that shape how PROMs are used, we did not consider how the use of PROMs might be shaped by race, gender and age. We cannot rule out the possibility that an important study was missed from our review. However, the aim of this review was not to conduct an exhaustive search for all studies, but rather to sample those studies that were most relevant to testing and refining our two selected theories. Nonetheless, we specifically drew on a wide range of recent and updated systematic reviews of both RCTs and qualitative studies [8, 15, 115]. Accepting these limitations, the review still has important implications for the use of PROMs in clinical practice and future research.

Conclusions and implications for research and practice

Studies evaluating the impact of PROMs feedback to clinicians on patient-physician communication have rarely considered how the process of PROMs completion impacts upon patients. Our review highlights that this is an important consideration. Exploring how and why patients answer PROMs in the ways that they do, in addition to understanding how clinicians and patients interpret the score itself, can expand our knowledge of how patients understand their condition and its impact [30].

Studies exploring the impact of PROMs on clinician-patient communication and care processes have also largely focused on the ways in which PROMs impact on the information sharing and decision-making functions of the consultation, rather than on their impact on the relationship building function. Our review indicates that the process of relationship building with patients is affected by, and shapes, the ways in which clinicians use PROMs in clinical practice. Krawczyk et al. [26] argue that we need to consider the interconnection between social relations and the materiality of PROMs to understand how PROMs support collaboration between patients and clinicians. As Brewster et al. [106] note, clinicians experience measurement tools as socially situated; their use is entangled with the ongoing work of managing patient relationships. This has an important, but often overlooked, impact on how such measurement tools are used in clinical practice. Those implementing PROMs in clinical practice to support patient management need to consider how PROMs can be introduced in a way that supports, rather than detracts from, the clinician-patient relationship. This will entail giving clinicians considerable freedom in how and when they use PROMs and allowing adaptation of PROM items to local and disease-trajectory-specific circumstances.

Our review also suggests that PROMs can enable better identification and greater discussion of problematic symptoms and side effects during chemotherapy, but do not necessarily shift the focus of the consultation onto emotional and functional aspects. To achieve this shift would require a change in clinicians', specifically doctors', perceptions of their remit. Professional groups' perceptions of their remit are socialised through many years of education and training and are mutually reinforced through the division of labour in the everyday practices of different clinical groups [116]. PROMs feedback alone is not sufficient to change these practices; this would require concomitant changes to organisation-wide structures that both produce and reinforce these professional boundaries. While training clinicians in the use and interpretation of PROMs is helpful [117], it does not address the structural constraints which may limit these discussions.

Furthermore, our review suggests that patients may not always want to discuss these issues with doctors; recent studies also indicate that discussion and management of biomedical issues can provide emotional support to patients [111]. Many current measures of patient centred communication assume that the discussion of functional impairments and emotional concerns is an important component of patient centred communication [118]. However, recent reviews [58] indicate that we need to rethink what constitutes patient centred communication, arguing that emotional and instrumental care are inseparable. In an oncology context, for example, patients value clinicians who can treat their cancer [112]. Our review suggests that PROMs feedback can support instrumental and emotional care by enabling clinicians to identify problematic symptoms and by supporting patients to raise concerns about these symptoms. Thus, future research on how PROMs feedback supports clinician-patient communication and care processes should explore not only what is talked about but how it is talked about, and how patients experience this care, using multiple methods [112].

Abbreviations

EORTC QLQ-C30:

European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core 30

HRQoL:

Health Related Quality of Life

PROM:

patient reported outcome measure

SEIQoL:

Schedule for the Evaluation of Individual Quality of Life

References

  1. 1.

    Department of Health. (2010). Equity and excellence: liberating the NHS. London: Department of Health.

  2. 2.

    Institute of Medicine, U. S. (2001). Crossing the quality chasm: a new health system for the 21st century. Washington D.C.

  3. 3.

    Epstein, R. M., & Street, R. L. (2007). Patient-centered communication in Cancer care: promoting healing and reducing suffering. Bethesda.

  4. 4.

    McDonald, A., & Sherlock, J. (2016). A long and winding road. Improving communication with patients in the NHS. London: Marie Curie.

  5. 5.

    Street, R. L. (2013). How clinician–patient communication contributes to health improvement: modeling pathways from talk to outcome. Patient Education and Counseling, 92(3), 286–291.

  6. 6.

    Nelson, E. C., Eftimovska, E., Lind, C., Hager, A., Wasson, J. H., & Lindblad, S. (2015). Patient reported outcome measures in practice. BMJ (Online), 350(g7818).

  7. 7.

    Sutherland, H. J., & Till, J. E. (1993). Quality of life assessments and levels of decision making: differentiating objectives. Quality of Life Research, 2(4), 297–303.

  8. 8.

    Chen, J., Ou, L., & Hollis, S. J. (2013). A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Services Research, 13, 211.

  9. 9.

    Boyce, M., & Browne, J. (2013). Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Quality of Life Research, 22(9), 2265–2278.

  10. 10.

    Knaup, C., Koesters, M., Schoefer, D., Becker, T., & Puschner, B. (2009). Effect of feedback of treatment outcome in specialist mental healthcare: a meta-analysis. The British Journal of Psychiatry, 195, 15–22.

  11. 11.

    Valderas, J. M., Kotzeva, A., Espallargues, M., Guyatt, G., Ferrans, C. E., Halyard, M. Y., Revicki, D. A., Symonds, T., Parada, A., & Alonso, J. (2008). The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Quality of Life Research, 17(2), 179–193.

  12. 12.

    Gondek, D., Edbrooke-Childs, J., Fink, E., Deighton, J., & Wolpert, M. (2016). Feedback from outcome measures and treatment effectiveness, treatment efficiency, and collaborative practice: a systematic review. Administration and Policy in Mental Health and Mental Health Services Research, 43(3), 325–343.

  13. 13.

    Greenhalgh, J., Long, A. F., & Flynn, R. (2005). The use of patient reported outcome measures in clinical practice: lacking an impact or lacking a theory? Social Science and Medicine, 60, 833–843.

  14. 14.

    Antunes, B., Harding, R., & Higginson, I. J. (2014). Implementing patient-reported outcome measures in palliative care clinical practice: a systematic review of facilitators and barriers. Palliative Medicine, 28(2), 158–175.

  15. 15.

    Boyce, M., Browne, J., & Greenhalgh, J. (2014). The experiences of professionals with using information from patient-reported outcome measures to improve the quality of healthcare: a systematic review of qualitative research. BMJ Quality & Safety, 23(6), 508–518.

  16. 16.

    Etkind, S. N., Daveson, B. A., Kwok, W., Witt, J., Bausewein, C., Higginson, I. J., & Murtagh, F. E. (2015). Capture, transfer, and feedback of patient-centered outcomes data in palliative care populations: does it make a difference? A systematic review. Journal of Pain and Symptom Management, 49(3), 611–324.

  17. 17.

    Pawson, R. (2006). Evidence based policy: a realist perspective. London: Sage.

  18. 18.

    Greenhalgh, J., Pawson, R., Wright, J., Black, N., Valderas, J. M., Meads, D., Gibbons, E., Wood, L., Wood, C., Mills, C., & Dalkin, S. (2014). Functionality and feedback: a protocol for a realist synthesis of the collation, interpretation and utilisation of PROMs data to improve patient care. BMJ Open, 4(7).

  19. 19.

    Wong, C. K., Greenhalgh, T., Westhorp, G., Buckingham, J., & Pawson, R. (2013). RAMESES publication standards: realist synthesis. BMC Medicine, 11, 21.

  20. 20.

    Greenhalgh, J., Dalkin, S., Gooding, K., Gibbons, E., Wright, J., Meads, D., Black, N., Valderas, J. M., & Pawson, R. (2017). Functionality and feedback: a realist synthesis of the collation, interpretation and utilisation of patient-reported outcome measures data to improve patient care. Health Services and Delivery Research, 5(2).

  21. 21.

    Alves, P. C. G., Sales, C. M. D., & Ashworth, M. (2016). "it is not just about the alcohol": service users' views about individualised and standardised clinical assessment in a therapeutic community for alcohol dependence. Substance Abuse Treatment, Prevention, and Policy, 11(1).

  22. 22.

    Basch, E., Deal, A. M., Dueck, A. C., et al. (2017). Overall survival results of a trial assessing patient-reported outcomes for symptom monitoring during routine cancer treatment. JAMA, 318(2), 197–198.

  23. 23.

    Basch, E., Deal, A. M., Kris, M. G., Scher, H. I., Hudis, C. A., Sabbatini, P., Rogak, L., Bennett, A. V., Dueck, A. C., Atkinson, T. M., Chou, J. F., Dulko, D., Sit, L., Barz, A., Novotny, P., Fruscione, M., Sloan, J. A., & Schrag, D. (2016). Symptom monitoring with patient-reported outcomes during routine cancer treatment: a randomized controlled trial. Journal of Clinical Oncology, 34(6), 557–565.

  24. 24.

    Green, E., Yuen, D., Chasen, M., Amernic, H., Shabestari, O., Brundage, M., Krzyzanowska, M. K., Klinger, C., Ismail, Z., & Pereira, J. (2017). Oncology nurses' attitudes toward the Edmonton symptom assessment system: results from a large cancer care Ontario study. Oncology Nursing Forum, 44(1), 116–125.

  25. 25.

    Kane, P. M., Ellis-Smith, C. I., Daveson, B. A., Ryan, K., Mahon, N. G., McAdam, B., McQuillan, R., Tracey, C., Howley, C., & O’gara, G. (2018). Understanding how a palliative-specific patient-reported outcome intervention works to facilitate patient-centred care in advanced heart failure: a qualitative study. Palliative medicine, 32(1), 143–155.

  26. 26.

    Krawczyk, M., & Sawatzky, R. (2018). Relational use of an electronic quality of life and practice support system in hospital palliative consult care: a pilot study. Palliative & Supportive Care, 1–6.

  27. 27.

    Krawczyk, M., Sawatzky, R., Schick-Makaroff, K., Stajduhar, K., Öhlen, J., Reimer-Kirkham, S., Mercedes Laforest, E., & Cohen, R. (2018). Micro-Meso-macro practice tensions in using patient-reported outcome and experience measures in hospital palliative care. Qualitative Health Research 1049732318761366.

  28. 28.

    Pereira, J. L., Chasen, M. R., Molloy, S., Amernic, H., Brundage, M. D., Green, E., Kurkjian, S., Krzyzanowska, M. K., Mahase, W., Shabestari, O., Tabing, R., & Klinger, C. A. (2016). Cancer care Professionals' attitudes toward systematic standardized symptom assessment and the Edmonton symptom assessment system after large-scale population-based implementation in Ontario, Canada. Journal of Pain and Symptom Management, 51(4), 662–672 e668.

  29. 29.

    Sharples, E., Qin, C., Goveas, V., Gondek, D., Deighton, J., Wolpert, M., & Edbrooke-Childs, J. (2017). A qualitative exploration of attitudes towards the use of outcome measures in child and adolescent mental health services. Clinical Child Psychology and Psychiatry, 22(2), 219–228.

  30. 30.

    McClimans, L. (2010). A theoretical framework for patient-reported outcome measures. Theoretical Medicine and Bioethics, 31(3), 225–240.

  31. 31.

    Salmon, P., & Young, B. (2017). The inseparability of emotional and instrumental care in cancer: towards a more powerful science of clinical communication. Patient Education and Counseling.

  32. 32.

    Greenhalgh, J., Abhyankar, P., McCluskey, S., Takeuchi, E. E., & Velikova, G. (2013). How do doctors refer to patient-reported outcome measures (PROMS) in oncology consultations? Quality of Life Research, 22(5), 939–950.

  33. 33.

    Cowley, S., Mitcheson, J., & Houston, A. M. (2004). Structuring health needs assessments: the medicalisation of health visiting. Sociology of Health & Illness, 26(5), 503–526.

  34. 34.

    Krageloh, C. U., Czuba, K. J., Billington, D. R., Kersten, P., & Siegert, R. J. (2015). Using feedback from patient reported outcome measures in mental health services: a scoping study and typology. Psychiatric Services, 66(3), 224–241.

  35. 35.

    Booth, A., Harris, J., Croot, E., Springett, J., Campbell, F., & Wilkins, E. (2013). Towards a methodology for cluster searching to provide conceptual and contextual “richness” for systematic reviews of complex interventions: case study (CLUSTER). BMC Medical Research Methodology, 13(1), 118.

  36. 36.

    Booth, A., Wright, J., & Briscoe, S. (2018). Scoping and searching to support realist approaches. In N. Emmel, J. Greenhalgh, A. Manzano, M. Monaghan, & S. M. Dalkin (Eds.), Doing realist research. London: Sage.

  37. 37.

    Pawson, R. (2006). Digging for nuggets: how bad research can yield good evidence. International Journal of Social Research Methodology, 9, 127–142.

  38. 38.

    Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., Savović, J., Schulz, K. F., Weeks, L., & Sterne, J. A. C. (2011). The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ, 343.

  39. 39.

    Barbour, R. S. (2001). Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ, 322(7294), 1115–1117.

  40. 40.

    Dixon-Woods, M., Shaw, R. L., Agarwal, S., & Smith, J. A. (2004). The problem of appraising qualitative research. Quality and Safety in Health Care, 13(3), 223–225.

  41. 41.

    Eakin, J. M., & Mykhalovskiy, E. (2003). Reframing the evaluation of qualitative health research: reflections on a review of appraisal guidelines in the health sciences. Journal of Evaluation in Clinical Practice, 9(2), 187–194.

  42. 42.

    Higginson, I. J., & Carr, A. J. (2001). Measuring quality of life: using quality of life measures in the clinical setting. BMJ, 322(7297), 1297–1300.

  43. 43.

    Greenhalgh, J. (2009). The applications of PROs in clinical practice: what are they, do they work, and why? Qual Life Res, 18(1), 115–123.

  44. 44.

    Porter, I., Gonçalves-Bradley, D., Ricci-Cabello, I., Gibbons, C., Gangannagaripalli, J., Fitzpatrick, R., Black, N., Greenhalgh, J., & Valderas, J. M. (2016). Framework and guidance for implementing patient-reported outcomes in clinical practice: evidence, challenges and opportunities. Journal of Comparative Effectiveness Research, 5(5), 507–519.

  45. 45.

    Santana, M. J., & Feeny, D. (2014). Framework to assess the effects of using patient-reported outcome measures in chronic care management. Quality of Life Research, 23(5), 1505–1513.

  46. 46.

    Morton, K., Dennison, L., May, C., Murray, E., Little, P., McManus, R. J., & Yardley, L. (2017). Using digital interventions for self-management of chronic physical health conditions: a meta-ethnography review of published studies. Patient Education and Counseling, 100(4), 616–635.

  47. 47.

    Kotronoulas, G., Kearney, N., Maguire, R., Harrow, A., Di Domenico, D., Croy, S., & MacGillivray, S. (2014). What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. Journal of Clinical Oncology, 32(14), 1480–1501.

  48. 48.

    Feldman-Stewart, D., & Brundage, M. (2009). A conceptual framework for patient-provider communication: a tool in the PRO research tool box. Quality of Life Research, 18, 109–114.

  49. 49.

    Neale, J., & Strang, J. (2015). Philosophical ruminations on measurement: methodological orientations of patient reported outcome measures (PROMS). Journal of Mental Health, 24(3), 123–125.

  50. 50.

    Wagner, A. K., & Vickrey, B. G. (1995). The routine use of health-related quality of life measures in the care of patients with epilepsy: rationale and research agenda. Quality of Life Research, 4(2), 169–177.

  51. 51.

    Frost, M. H., Bonomi, A. E., Cappelleri, J. C., Schunemann, H. J., Moynihan, T. J., & Aaronson, N. (2007). Applying quality of life data formally and systematically into clinical practice. Mayo Clinic Proceedings, 82(10), 1214–1228.

  52. 52.

    Jacobsen, P. B. (2007). Screening for psychological distress in Cancer patients: challenges and opportunities. Journal of Clinical Oncology, 25(29), 4526–4527.

  53. 53.

    Coulter, A., Roberts, S., & Dixon, A. (2013). Delivering better services for people with long term conditions: building the house of care. London: The King's Fund.

  54. 54.

    Delbanco, T. (1992). Enriching the doctor-patient relationship by inviting the Patient's perspective. Annals of Internal Medicine, 116(5), 414–418.

  55. 55.

    Trujols, J., Portella, M. J., Iraurgi, I., Campins, M. J., Siñol, N., & de Los Cobos, J. P. (2013). Patient-reported outcome measures: are they patient-generated, patient-centred or patient-valued? Journal of Mental Health, 22(6), 555–562.

  56. 56.

    Greenhalgh, J., Long, A. F., Brettle, A. J., & Grant, M. J. (1998). Reviewing and selecting outcome measures for use in routine practice. Journal of Evaluation in Clinical Practice, 4(4), 339–350.

  57. 57.

    Salander, P. (2017). Does advocating screening for distress in cancer rest more on ideology than on science? Patient Education & Counseling, 100, 858–860.

  58. 58.

    Salmon, P., & Young, B. (2017). A new paradigm for clinical communication: critical review of literature in cancer care. Medical Education, 51(3), 258–268.

  59. 59.

    Lohr, K. N., & Zebrack, B. J. (2009). Using patient-reported outcomes in clinical practice: challenges and opportunities. Quality of Life Research, 18(1), 99–107.

  60. 60.

    Wright, J. G. (2000). Evaluating the outcome of treatment: shouldn't we be asking patients if they are better? Journal of Clinical Epidemiology, 53(6), 549–553.

  61. 61.

    Miller, D., Gray, C. S., & K., K., & Cott, C. (2015). Patient centred care and patient reported measures: let's look before we leap. Patient, 8, 293–299.

  62. 62.

    Dean, M., & Street, R. L. (2014). A 3-stage model of patient-centered communication for addressing cancer patients’ emotional distress. Patient Education & Counseling, 94(2), 143–148.

  63. 63.

    Street, R. L., Makoul, G., Arora, N. K., & Epstein, R. M. (2009). How does communication heal? Pathways linking clinician-patient communication to health outcomes. Patient Education & Counseling, 74(3), 295–301.

  64. 64.

    Lafata, J. E., Shay, L. A., & Winship, J. M. (2017). Understanding the influences and impact of patient-clinician communication in cancer care. Health expectations: an international journal of public participation in health care and health policy.

  65. 65.

    Pawson, R., & Tilley, N. (1997). Realistic Evaluation. London: Sage.

  66. 66.

    Howell, D., Molloy, S., Wilkinson, K., Green, E., Orchard, K., Wang, K., & Liberty, J. (2015). Patient-reported outcomes in routine cancer clinical practice: a scoping review of use, impact on health outcomes, and implementation factors. Annals of Oncology.

  67. 67.

    Dowrick, C., Leydon, G. M., McBride, A., Howe, A., Burgess, H., Clarke, P., Maisey, S., & Kendrick, T. (2009). Patients' and doctors' views on depression severity questionnaires incentivised in UK quality and outcomes framework: qualitative study. BMJ, 338, b663.

  68. 68.

    Slater, A., & Freeman, E. (2004). Patients' views of using an outcome measure in palliative day care: a focus group study. International Journal of Palliative Nursing, 10(7), 343–351.

  69. 69.

    Cheyne, A., & Kinn, S. (2001). Counsellors' perspectives on the use of the schedule for the evaluation of individual quality of life (SEIQoL) in an alcohol counselling setting. British Journal of Guidance & Counselling, 29(1), 35–46.

  70.

    Annells, M., & Koch, T. (2001). 'The real stuff': implications for nursing of assessing and measuring a terminally ill person's quality of life. Journal of Clinical Nursing, 10(6), 806–812.

  71.

    Kettis-Lindblad, A., Ring, L., Widmark, E., Bendtsen, P., & Glimelius, B. (2007). Patients' and doctors' views of using the schedule for individual quality of life in clinical practice. J Support Oncol, 5(6), 281–287.

  72.

    Hall, C. L., Taylor, J., Moldavsky, M., Marriott, M., Pass, S., Newell, K., Goodman, R., Sayal, K., & Hollis, C. (2014). A qualitative process evaluation of electronic session-by-session outcome measurement in child and adolescent mental health services. BMC Psychiatry, 14, 11.

  73.

    Mills, M. E., Murray, L. J., Johnston, B. T., Cardwell, C., & Donnelly, M. (2009). Does a patient-held quality-of-life diary benefit patients with inoperable lung cancer? Journal of Clinical Oncology, 27(1), 70–77.

  74.

    Nilsson, E., Wenemark, M., Bendtsen, P., & Kristenson, M. (2007). Respondent satisfaction regarding SF-36 and EQ-5D, and patients' perspectives concerning health outcome assessment within routine health care. Quality of Life Research, 16(10), 1647–1654.

  75.

    Stasiak, K., Parkin, A., Seymour, F., Lambie, I., Crengle, S., Pasene-Mizziebo, E., & Merry, S. (2013). Measuring outcome in child and adolescent mental health services: consumers’ views of measures. Clinical Child Psychology and Psychiatry, 18(4), 519–535.

  76.

    Wolpert, M., Curtis-Tyler, K., & Edbrooke-Childs, J. (2014). A qualitative exploration of patient and clinician views on patient reported outcome measures in child mental health and diabetes services. Administration & Policy in Mental Health, 1–7.

  77.

    Leydon, G. M., Dowrick, C. F., McBride, A. S., Burgess, H. J., Howe, A. C., Clarke, P. D., Maisey, S. P., Kendrick, T., & the QOF Depression Study Team. (2011). Questionnaire severity measures for depression: a threat to the doctor-patient relationship? British Journal of General Practice, 61(583), 117–123.

  78.

    Pettersson, A., Björkelund, C., & Petersson, E. L. (2014). To score or not to score: a qualitative study on GPs views on the use of instruments for depression. Family Practice, 31(2), 215–221.

  79.

    Slater, A., & Freeman, E. (2005). Is the palliative care outcome scale useful to staff in a day hospice unit? International Journal of Palliative Nursing, 11(7), 346–354.

  80.

    Hughes, R., Aspinal, F., Addington-Hall, J., Chidgey, J., Drescher, U., Dunckley, M., & Higginson, I. J. (2003). Professionals' views and experiences of using outcome measures in palliative care. International Journal of Palliative Nursing, 9(6), 234–238.

  81.

    Hughes, R., Aspinal, F., Addington-Hall, J. M., Dunckley, M., Faull, C., & Higginson, I. (2004). It just didn't work: the realities of quality assessment in the English health care context. International Journal of Nursing Studies, 41(7), 705–712.

  82.

    Neudert, C., Wasner, M., & Borasio, G. D. (2001). Patients' assessment of quality of life instruments: a randomised study of SIP, SF-36 and SEIQoL-DW in patients with amyotrophic lateral sclerosis. Journal of the Neurological Sciences, 191(1–2), 103–109.

  83.

    Westerman, M., Hak, T., The, A.-M., Groen, H., & van der Wal, G. (2006). Problems eliciting cues in SEIQoL-DW: quality of life areas in small-cell lung cancer patients. Quality of Life Research, 15(3), 441–449.

  84.

    Farquhar, M., Ewing, G., Higginson, I. J., & Booth, S. (2010). The experience of using the SEIQoL-DW with patients with advanced chronic obstructive pulmonary disease (COPD): issues of process and outcome. Quality of Life Research, 19(5), 619–629.

  85.

    Mitchell, C., Dwyer, R., Hagan, T., & Mathers, N. (2011). Impact of the QOF and the NICE guideline in the diagnosis and management of depression: a qualitative study. British Journal of General Practice, 61(586), e279–e289.

  86.

    Gamlen, E., & Arber, A. (2013). First assessments by specialist cancer nurses in the community: an ethnography. European Journal of Oncology Nursing, 17(6), 797–801.

  87.

    Mallinson, S. (2002). Listening to respondents: a qualitative assessment of the short-form 36 health status questionnaire. Social Science & Medicine, 54(1), 11–21.

  88.

    Eischens, M. J., Elliott, B. A., & Elliott, T. E. (1998). Two hospice quality of life surveys: a comparison. American Journal of Hospice & Palliative Medicine, 15(3), 143–148.

  89.

    Velikova, G., Booth, L., Smith, A. B., Brown, P. M., Lynch, P., Brown, J. M., & Selby, P. J. (2004). Measuring quality of life in routine oncology practice improves communication and patient well being: a randomized controlled trial. Journal of Clinical Oncology, 22(4), 714–724.

  90.

    Detmar, S. B., Muller, M. J., Schornagel, J. H., Wever, L. V., & Aaronson, N. K. (2002). Health-related quality-of-life assessments and patient-physician communication: a randomized controlled trial. JAMA, 288(23), 3027–3034.

  91.

    Cleeland, C. S., Wang, X. S., Shi, Q., Mendoza, T. R., Wright, S. L., Berry, D., Malveaux, D., Shah, S. K., Gning, I., Hofstetter, W. L., Putnam, J. B., & Vaporciyan, A. A. (2011). Automated symptom alerts reduce postoperative symptom severity after Cancer surgery: a randomized controlled clinical trial. Journal of Clinical Oncology, 29(8), 994–1000.

  92.

    Rosenbloom, S. K., Victorson, D. E., Hahn, E. A., Peterman, A. H., & Cella, D. (2007). Assessment is not enough: a randomized controlled trial of the effects of HRQL assessment on quality of life and satisfaction in oncology clinical practice. Psycho-Oncology, 16(12), 1069–1079.

  93.

    McLachlan, S. A., Allenby, A., Matthews, J., Wirth, A., Kissane, D., Bishop, M., Beresford, J., & Zalcberg, J. (2001). Randomized trial of coordinated psychosocial interventions based on patient self-assessments versus standard care to improve the psychosocial functioning of patients with Cancer. Journal of Clinical Oncology, 19(21), 4117–4125.

  94.

    Berry, D. L., Blumenstein, B. A., Halpenny, B., Wolpin, S., Fann, J. R., Austin-Seymour, M., Bush, N., Karras, B. T., Lober, W. B., & McCorkle, R. (2011). Enhancing patient-provider communication with the electronic self-report assessment for Cancer: a randomized trial. Journal of Clinical Oncology, 29(8), 1029–1035.

  95.

    Takeuchi, E. E., Keding, A., Awad, N., Hofmann, U., Campbell, L. J., Selby, P. J., Brown, J. M., & Velikova, G. (2011). Impact of patient-reported outcomes in oncology: a longitudinal analysis of patient-physician communication. Journal of Clinical Oncology, 29(21), 2910–2917.

  96.

    Taylor, S., Harley, C., Campbell, L. J., Bingham, L., Podmore, E. J., Newsham, A. C., Selby, P. J., Brown, J. M., & Velikova, G. (2011). Discussion of emotional and social impact of cancer during outpatient oncology consultations. Psycho-Oncology, 20(3), 242–251.

  97.

    Fagerlind, H., Kettis, A., Bergstrom, I., Glimelius, B., & Ring, L. (2012). Different perspectives on communication quality and emotional functioning during routine oncology consultations. Patient Education & Counseling, 88, 16–22.

  98.

    Young, B., Ward, J., Forsey, M., Gravenhorst, K., & Salmon, P. (2011). Examining the validity of the unitary theory of clinical relationships: comparison of observed and experienced parent–doctor interaction. Patient Education and Counseling, 85(1), 60–67.

  99.

    Detmar, S. B., Aaronson, N. K., Wever, L. D. V., Muller, M., & Schornagel, J. H. (2000). How are you feeling? Who wants to know? Patients’ and oncologists’ preferences for discussing health-related quality-of-life issues. Journal of Clinical Oncology, 18(18), 3295–3301.

  100.

    Velikova, G., Awad, N., Coles-Gale, R., Wright, E. P., Brown, J. M., & Selby, P. J. (2008). The clinical value of quality of life assessment in oncology practice: a qualitative study of patient and physician views. Psycho-Oncology, 17(7), 690–698.

  101.

    Absolom, K., Holch, P., Pini, S., Hill, K., Liu, A., Sharpe, M., Richardson, A., Velikova, G., & the NCRI Supportive and Palliative Care Research Collaborative. (2011). The detection and management of emotional distress in cancer patients: the views of health-care professionals. Psycho-Oncology, 20(6), 601–608.

  102.

    Ong, B. N., Hooper, H., Jinks, C., Dunn, K., & Croft, P. (2006). 'I suppose that depends on how I was feeling at the time': perspectives on questionnaires measuring quality of life and musculoskeletal pain. Journal of Health Services Research & Policy, 11(2), 81–88.

  103.

    Bergh, I., Kvalem, I. L., Aass, N., & Hjermstad, M. J. (2011). What does the answer mean? A qualitative study of how palliative cancer patients interpret and respond to the Edmonton symptom assessment system. Palliative Medicine, 25(7), 716–724.

  104.

    Robinson, J. D. (2003). An interactional structure of medical activities during acute visits and its implications for Patients' participation. Health Communication, 15(1), 27–59.

  105.

    Blakeman, T., Bower, P., Reeves, D., & Chew-Graham, C. (2010). Bringing self management into clinical view: a qualitative study of long-term condition management in primary care consultations. Chronic Illness, 6, 136–150.

  106.

    Brewster, L., Tarrant, C., Willars, J., & Armstrong, N. (2017). Measurement of harms in community care: a qualitative study of use of the NHS safety thermometer. BMJ Quality and Safety.

  107.

    Rodriguez, K. L., Bayliss, N., Alexander, S. C., Jeffreys, A. M., Olsen, M. K., Pollak, K. I., Kennifer, S. L., Tulsky, J. S., & Arnold, R. M. (2010). How do oncologists and their patients with advanced cancer communicate about health-related quality of life. Psychooncology, 19(5), 490–499.

  108.

    Fagerlind, H., Lindblad, A. K., Bergstrom, I., Nilsson, M., Naucler, G., Glimelius, B., & Ring, L. (2008). Patient-physician communication during oncology consultations. Psycho-Oncology, 17(10), 975–985.

  109.

    Hack, T. F., Pickles, T., Ruether, J. D., Weir, J., Bultz, B. D., & Degner, L. F. (2010). Behind closed doors: systematic analysis of breast cancer consultation communication and predictors of satisfaction with communication. Psycho-Oncology, 19, 626–636.

  110.

    Yoon, S. M. C., Hung, W. K., Ying, M. S., Or, A., & Lam, W. W. T. (2014). Communicative characteristics of interaction between surgeons and Chinese women with breast cancer in oncology consultation: a conversation analysis. Health Expectations, 18, 2825–2840.

  111.

    Young, B., Hill, J., Gravenhorst, K., Ward, J., Eden, T., & Salmon, P. (2013). Is communication guidance mistaken? Qualitative study of parent–oncologist communication in childhood cancer. British Journal of Cancer, 109(4), 836–843.

  112.

    Salmon, P., Mendick, N., & Young, B. (2011). Integrative qualitative communication analysis of consultation and patient practitioner perspectives: towards a theory of authentic caring in clinical relationships. Patient Education & Counseling, 82, 448–454.

  113.

    Stranger, M., Morrison, P. T., Yazgangolu, E., & Bhalla, M. (2016). A study to assess patient-reported outcome measures (PROMs) and to investigate the practicality of using PROMs in a surgical office. BC Medical Journal, 58(2), 82–89.

  114.

    Ayers, D. C., Zheng, H., & Franklin, P. D. (2013). Integrating patient-reported outcomes into orthopaedic clinical practice: proof of concept from FORCE-TJR. Clinical Orthopaedics & Related Research, 471(11), 3419–3425.

  115.

    Gonçalves Bradley, D. C., Gibbons, C., Ricci-Cabello, I., Bobrovitz, N. J. H., Gibbons, E. J., Kotzeva, A., Alonso, J., Fitzpatrick, R., Bower, P., van der Wees, P. J., Rajmil, L., Roberts, N. W., Taylor, R. S., Greenhalgh, J., Porter, I., & Valderas, J. M. (2015). Routine provision of information on patient-reported outcome measures to healthcare providers and patients in clinical practice. The Cochrane Library.

  116.

    Nancarrow, S. A., & Borthwick, A. M. (2005). Dynamic professional boundaries in the healthcare workforce. Sociology of Health & Illness, 27(7), 897–919.

  117.

    Santana, M. J., Haverman, L., Absolom, K., Takeuchi, E., Feeny, D., Grootenhuis, M., & Velikova, G. (2015). Training clinicians in how to use patient-reported outcome measures in routine clinical practice. Quality of Life Research, 24(7), 1707–1718.

  118.

    Street, R. L. (2017). The many "disguises" of patient-centered communication: problems of conceptualization and measurement. Patient Education and Counseling.

  119.

    Browne, J. P., McGee, H. M., & O'Boyle, C. A. (1997). Conceptual approaches to the assessment of quality of life. Psychology and Health, 12(6), 737–751.

  120.

    Howell, D., Mayo, S., Currie, S., Jones, G., Boyle, M., Hack, T., Green, E., Hoffman, L., Collacutt, V., McLeod, D., & Simpson, J. (2012). Psychosocial health care needs assessment of adult cancer patients: a consensus-based guideline. Supportive Care in Cancer, 20(12), 3343–3354.

  121.

    Macduff, C. (2000). Respondent-generated quality of life measures: useful tools for nursing or more fool's gold? Journal of Advanced Nursing, 32(2), 375–382.

  122.

    Gadamer, H-G. (2003). Truth and method, 2nd ed. Trans. Joel Weinsheimer and Donald G. Marshall. New York: Continuum Press.

  123.

    McClimans, L. (2010). A theoretical framework for patient-reported outcome measures. Theoretical Medicine and Bioethics, 31(3), 225–240.

Availability of data and materials

All data cited within this manuscript have previously been published in peer-reviewed journals or policy documents.

Funding acknowledgement

This project was funded by NIHR Health Services and Delivery Research 12/136/31. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. EG was also funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care Oxford at Oxford Health NHS Foundation Trust.

Additional acknowledgements

We would like to thank Professor Ray Pawson for his methodological advice throughout the review. We would also like to thank our patient advisory group for their advice and insight during the review: Laurence Wood, Rosie Hassman, Gill Riley and Eileen Exeter. Finally, we would also like to thank the policy makers, NHS managers and clinicians who attended our theory workshop.

Author information

JG designed the synthesis, carried out the synthesis and took the lead in drafting the manuscript. KG contributed to the design of the synthesis, carried out the synthesis and critically revised the manuscript. EG contributed to the design of the synthesis, carried out the synthesis and critically revised the manuscript. SD contributed to the design of the synthesis, carried out the synthesis and critically revised the manuscript. JMW developed and carried out the searches and critically revised the manuscript. JMW contributed to the design of the synthesis and critically revised the manuscript. NB contributed to the design of the synthesis and critically revised the manuscript. All authors read and approved the final manuscript.

Correspondence to Joanne Greenhalgh.

Ethics declarations

Ethics approval and consent to participate

Our review did not require the collection of data from human subjects. NHS managers, clinicians and patients helped shape the focus of the review through workshops; we applied for light-touch ethical review and were granted ethical approval by the University of Leeds Ethics Committee (reference: LTSSP-019). Our review was registered on the PROSPERO database: CRD42013005938.

Consent for publication

Not applicable.

Competing interests

We confirm that all authors have approved the final manuscript and have no conflicts of interest.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Search strategies. (DOCX 18 kb)

Additional file 2:

Inclusion and exclusion criteria. (DOCX 12 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Greenhalgh, J., Gooding, K., Gibbons, E. et al. How do patient reported outcome measures (PROMs) support clinician-patient communication and patient care? A realist synthesis. J Patient Rep Outcomes 2, 42 (2018) doi:10.1186/s41687-018-0061-6

Keywords

  • Patient reported outcome measures
  • Realist synthesis
  • Clinician-patient communication
  • Feedback