Listening to the elephant in the room: response-shift effects in clinical trials research

Abstract

Background

While a substantial body of work postulates that adaptation (response-shift effects) may serve to hide intervention benefits, much of the research was conducted in observational studies, not randomized-controlled trials. This scoping review identified all clinical trials that addressed response shift phenomena, and characterized how response-shift effects impacted trial findings.

Methods

A scoping review of the medical literature from 1968 to 2021 was conducted using the keywords “response shift” and “clinical trial.” Articles were included if they were a clinical trial that explicitly examined response-shift effects; they were excluded if they were not a clinical trial, were not a full report, or mentioned response shift only in the discussion. Clinical-trials papers were then reviewed and retained in the scoping review if they focused on randomized participants, showed clear examples of response shift, and used reliable and valid response-shift detection methods. A synthesis of review results further characterized the articles’ design characteristics, samples, interventions, statistical power, and the impact of response-shift adjustment on treatment effect.

Results

The search yielded 2148 unique references, 25 of which were randomized-controlled clinical trials that addressed response-shift effects; 17 of which were retained after applying exclusion criteria; 10 of which were adequately powered; and 7 of which revealed clinically-important response-shift effects that made the intervention look significantly better.

Conclusions

These findings supported the presumption that response shift phenomena obfuscate treatment benefits, and revealed a greater intervention effect after integrating response-shift related changes. The formal consideration of response-shift effects in clinical trials research will thus not only improve estimation of treatment effects, but will also integrate the inherent healing process of treatments.

Key points

  • This scoping review supported the presumption that response shift phenomena obfuscate treatment benefits and revealed a greater intervention effect after integrating response-shift related changes.

  • The formal consideration of response-shift effects in clinical trials research will not only improve estimation of treatment effects but will also integrate the inherent healing process of treatments.

Introduction

Clinicians have long acknowledged that patients adapt to their health condition [1]. They find ways to be happy despite restrictions in ambulation, respiration, and energy [2]. They find meaning and purpose even as their life narrows in scope or activity [3]. They re-think what is important to them [4], what “good quality of life” (QOL) means [5], what “moderate fatigue” means [6]. All of these underlying and often unspoken changes mean that, although the same person completes patient-reported outcomes repeatedly in a longitudinal study, they may be using different internal standards, referencing different values, or considering a different conceptualization of what the investigator is targeting [7,8,9]. These “response shifts” are critical to adaptation, and without response shift, patients’ ability to process the vicissitudes of life, health, and aging would be impaired [10, 11].

While a substantial body of work postulates that response-shift effects may serve to hide intervention benefits [7, 8, 12, 13], much of the research concerns observational studies that are not randomized controlled designs. Response-shift effects are, however, likely of relevance to clinical-trials research. The intervention and control/placebo groups likely adapt differently, particularly if the intervention is effective. If they adapt differently, what are the implications for the treatment’s observed benefit? Does response shift play a role in non-inferiority trials? What can be learned in such trials if, for example, treatment arm differences are negligible? Is response shift ignorable by clinical trialists? By governmental agencies responsible for vetting new drugs?

We postulate that clinical trialists have largely ignored response shift phenomena in pivotal trials, and that any such investigations would be secondary analyses. We believe that this context may relate to concerns that response-shift studies will undermine pivotal trial findings, no matter how well-designed the control group. By ignoring this widely acknowledged human-adaptation process, however, trialists are co-existing with an “elephant in the room.” In other words, everyone knows it’s there and no one is talking about it.

The present work implemented a scoping review of the literature to find all clinical trials that addressed response shift phenomena, and to characterize how response-shift effects impacted trial findings. Such a review is an approach for evidence synthesis that seeks to identify knowledge gaps, clarify concepts, or investigate research conduct [14]. A scoping review may be a precursor to a systematic review, the latter seeking to uncover and appraise international evidence using rigorous methods that minimize bias, in order to synthesize information related to a particular question and to inform practice [14]. Accordingly, and consistent with a scoping-review approach [15], the present work seeks to describe the clinical-trials literature with regard to response-shift investigations. It does not seek to quantify average effect sizes, which would be more appropriate to a meta-analysis (e.g., [16]).

Methods

Data sources and search strategy

We implemented a scoping review of the medical literature from 1968 to 2021. The goal of the scoping review was to focus on response-shift research in the context of clinical trials. Using the search terms (i.e., keywords) “response shift” and “clinical trial,” we searched the following databases: PubMed, CINAHL Plus, Embase, PsycINFO, and Google Scholar. We then combined the search results and removed duplicate references.
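For readers who wish to script this combine-and-de-duplicate step, the sketch below shows one way it could be done; the file names and column names are hypothetical illustrations and do not reflect how the screening was actually performed in this review.

  # Illustrative sketch only: combine database exports and remove duplicate references.
  # File and column names ("title") are hypothetical, not taken from the study.
  import pandas as pd

  exports = ["pubmed.csv", "cinahl.csv", "embase.csv", "psycinfo.csv", "google_scholar.csv"]
  hits = pd.concat([pd.read_csv(path) for path in exports], ignore_index=True)

  # Normalize titles so capitalization and stray whitespace do not hide duplicates.
  hits["title_key"] = hits["title"].str.lower().str.strip()
  unique = hits.drop_duplicates(subset="title_key", keep="first")

  print(f"{len(unique)} unique references retained from {len(hits)} raw hits")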

Article selection and characterization

The research team met repeatedly in advance of the initial screening and selection of articles to ensure that we all understood the screening methodology. This approach was similar to an earlier project by this research team, in which we demonstrated that this collaborative approach was efficient and achieved a high level of reliability [11]. The first author (CES) then examined the list of search results and identified articles that explicitly examined response-shift effects in clinical-trials data. Articles were included at this initial stage if they were a clinical trial that explicitly examined response-shift effects. Articles were excluded at this initial stage if they were not a clinical trial (e.g., observational study, literature review, protocol only); were an abstract only rather than a full report; or mentioned response shift only in the introduction and/or discussion section of the paper.

The resulting set of clinical-trials articles was then divided among four raters (CES, ICH, GR, RS) for further characterization of: (a) the main research question; (b) the drug or intervention being evaluated; (c) the clinical-trial design; (d) the patient population; (e) the patient-reported outcome tools used; (f) the response-shift methods used; (g) whether there were hypotheses regarding response-shift effects; and (h) the response-shift findings. At this stage, further exclusions were made if the article focused only on non-randomized study participants or was not a clear example of response shift. Additionally, we excluded articles that used the then-test method, given the plethora of studies documenting problems with the reliability and validity of this obsolete method [17,18,19]. All summary information about the final set of included articles was double-checked by all four raters to ensure the accuracy of the presented syntheses.

Synthesis of review results

We grouped the final set of retained articles by: (a) whether the study was a primary or secondary/post-hoc analysis; (b) whether the intervention was drug/medical device/surgery or psychosocial/behavioral/nursing intervention; and (c) whether the stated focus of the work was primarily methodological or on the clinical impact of response-shift effects.

We then examined whether response shift affected trial results as a function of the study’s statistical power for the specific response-shift detection method used, drawing on well-accepted guidelines for statistical-power considerations [20,21,22]. For example, using Cohen’s criteria for a comparison of means, a study would need at least 26 people per group or treatment arm to be powered to detect a large effect size [21]. Multivariate analytic methods require larger samples to yield robust estimates [23]. For example, a rule of thumb for structural equation modeling is 200 people per group/time point being evaluated, which would provide adequate power for robust estimates of loadings of 0.90 but not of 0.80 [24]. Although some recent studies indicate a range of sample-size goals for such modeling, the differences are driven primarily by the number of variables in the model. In studies investigating response-shift effects as measurement invariance within a structural-equation-modeling framework, we believe the general rule of thumb is a reasonable criterion. Ideally, studies would be powered to detect at least a medium effect size, since this corresponds to clinically significant change [25]. Being powered to detect small effect sizes would correspond to current estimates of response-shift effects in observational research [16], although a clinical trial of a potent intervention might be expected to yield at least medium response-shift effects. For the purposes of the current work, “adequate” was defined as 80% power (α = 0.05) to detect at least a large effect size given the specific statistical method used. The point estimate for small, medium, and large effect sizes differs depending on the statistical method used, and the interested reader is referred to Cohen’s seminal paper for examples using common behavioral-science methods [21].
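As an illustration of the power guideline just described (a minimal sketch using a standard power routine, not a reanalysis of any included trial), the following code reproduces Cohen's per-group sample sizes for a two-group comparison of means at 80% power and α = 0.05; effect sizes of 0.2, 0.5, and 0.8 correspond to small, medium, and large effects.

  # Minimal sketch: per-group sample sizes for a two-sample comparison of means,
  # 80% power, alpha = 0.05, for Cohen's small/medium/large effect sizes.
  # The results round to approximately Cohen's (1992) tabled values,
  # e.g., about 26 participants per arm for a large effect (d = 0.8).
  import math
  from statsmodels.stats.power import TTestIndPower

  analysis = TTestIndPower()
  for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
      n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                         ratio=1.0, alternative="two-sided")
      print(f"{label} effect (d = {d}): {math.ceil(n_per_group)} per arm")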

If adequately powered, we evaluated whether the response-shift adjustment made the treatment look more or less effective. In other words, did the treatment(s) being assessed in the clinical trial have a more or less beneficial impact on outcomes when response-shift effects were considered?

Results

Descriptive characteristics of included articles

The database search yielded 2148 unique references; 116 of these were found using PubMed, CINAHL Plus, EMBASE, and PsycINFO, all of which were duplicated in the Google Scholar search, which itself yielded 2148 articles. After excluding ineligible articles (2088 not clinical trials, 35 mentioning response shift only in the introduction or discussion), a set of 25 articles was included in the scoping review (Appendix Table 2) (Fig. 1). Further exclusions were made because the article focused on a non-randomized sample (n = 1) [26]; used the then-test exclusively (n = 5) [27,28,29,30,31]; or was not a clear example of response shift (n = 2) [32, 33] (Table 1, Fig. 1). The remaining 17 articles included ten papers reflecting four trials; 11 of the 17 addressed a distinct response-shift hypothesis (Fig. 1).

Fig. 1 Flow chart of the article selection process for final set of retained articles

The 17 articles were predominantly secondary or post-hoc analyses (n = 12) [34,35,36,37,38,39,40,41,42,43,44,45], rather than explicitly designing the study to address response-shift effects (n = 5) [19, 46,47,48,49] (Fig. 2). Eight of the retained articles addressed drug, medical-device, or surgical interventions [11, 36, 37, 39, 40, 43, 44, 48], and nine addressed psychosocial, behavioral, or nursing interventions [34, 35, 38, 41, 45,46,47, 49, 50]. Ten of the articles were focused on methodological development [19, 34,35,36,37,38, 40, 41, 50, 51], and seven on the clinical impact of response shift [11, 39, 43,44,45,46, 49].

Fig. 2 Characterization of clinical-trials articles included

Substantive findings

Of note, the articles documented that response shift affected trial results more often than not. This impact was also associated with the statistical power of the comparisons done (Fig. 3). Among the 10 retained articles that had adequate power, seven documented a clinically-important response-shift effect that affected trial results [11, 37, 38, 43, 44, 46, 49], two did not [47, 50], and one did not address the clinical impact of response shift [41]. Among the seven retained articles with inadequate power, two documented a clinically-important response-shift effect (one better [39], one worse [35]), and five documented no impact on the estimated intervention effect [36, 40, 45, 47, 48].

Fig. 3 Impact of power on response-shift effects on trial results

Considering only the ten adequately powered studies, seven revealed a clinically-important response-shift effect that made the intervention look significantly better [11, 37, 38, 43, 44, 46, 49], none made it look worse, two documented no impact [47, 50], and one did not address the impact on the intervention effect [41] (Fig. 4). These findings support the long-standing presumption that response shift phenomena may serve to obfuscate treatment benefits [7,8,9,10, 52, 53].

Fig. 4 Impact of response-shift adjustment on estimated treatment benefit

The studies that revealed a greater intervention effect after integrating response-shift-related changes did so in ways consistent with intervention goals: there were changes in priorities/preferences after Advance Care Planning interventions [46, 49]; changes in conceptualization after post-stroke rehabilitation [34]; changes in internal standards for physical functioning after anti-hypertensive treatment [37]; and changes in internal standards resulting in increased honesty about risky drinking behavior after a motivational-interviewing intervention [38]. Larger differences in mental-health functioning were found between treatment arms after considering response-shift effects in a trial comparing a highly effective drug to placebo for patients with a chronic progressive neurological disease [11, 43]; and two effective treatments in a non-inferiority trial for a chronic blood disorder were found to yield “better than normal” QOL compared to the general population [44].

Table 1 Clinical Trials Articles Found After Initial Inclusion/Exclusion Criteria Applied

Discussion

The present scoping review documents an emerging literature on response shift in randomized-controlled clinical-trials research. This literature draws on medical and behavioral interventions, and includes both methodological and clinical-impact studies. Among the ten of 17 articles with adequate statistical power for the response-shift analyses implemented, it was eminently clear that response shift phenomena affected trial results, predominantly in the direction of revealing more substantial treatment benefits. Most of the studies suggesting no impact of response shift phenomena were underpowered for the response-shift analyses implemented, thereby undermining their conclusions. Thus, when the “elephant” was empowered to “speak,” the response-shift effects detected in adequately powered studies suggested greater treatment benefits than previously found in the pivotal trials.

The implications of the present work are substantial for clinical trialists. First, response-shift effects are likely important for better understanding treatment effects for both medical and behavioral interventions. Further, this better understanding is unlikely to diminish the benefit of the treatment effects as estimated by analyses that do not explicitly consider response-shift effects. For example, clinical trials that do not document a positive impact on mental health in the context of a powerful treatment that modifies disease progression may well be “hiding” a response-shift effect that masks the greater benefit of the drug (e.g., [11, 43]). Uncovering such effects is an important and clinically relevant outcome.

A second implication is that clinical trialists interested in evaluating response-shift effects in their trial data should pay attention to statistical-power considerations when selecting the response-shift detection method. Our review suggests that underpowered studies were more likely to conclude that response shift phenomena did not affect trial results. Recent developments in response-shift methods provide efficient and effective ways to examine response-shift effects even in the context of small samples [11, 43] or non-inferiority trials [44]. These include adaptations of random-effects modeling [54], equating [55], and case–control designs [56].
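To make the general idea concrete, the sketch below fits a simple random-intercept model in which a time-by-arm interaction tests whether patient-reported-outcome trajectories differ between treatment arms. This is only an illustrative starting point for the kind of random-effects modeling cited above, not the specific method of [54] nor a response-shift detection procedure in itself; the data file and column names are hypothetical.

  # Hypothetical sketch: random-intercept model for longitudinal PRO scores.
  # The time:arm coefficient indicates differential change between arms;
  # response-shift methods extend this kind of longitudinal framework.
  import pandas as pd
  import statsmodels.formula.api as smf

  # Assumed long-format data with columns: patient_id, arm (0 = control,
  # 1 = intervention), time (visit number), and pro_score.
  df = pd.read_csv("trial_pro_long.csv")  # hypothetical file

  model = smf.mixedlm("pro_score ~ time * arm", data=df, groups=df["patient_id"])
  result = model.fit()
  print(result.summary())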

The present study also has implications for the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) in their processes for considering new drug applications. If considering response-shift effects leads to an increased estimate of the treatment’s effect, then it should be standard practice to use methods that integrate response-shift effects into clinical-trials analyses. Of note, such methods should have a strong evidence base for reliability and validity, thus specifically excluding use of the then-test method. Additionally, treatments that enable response-shift effects are likely more desirable than those that do not. While this idea is apparent in the context of rehabilitative nursing interventions (e.g., [34]), it is also desirable in other contexts. A drug with severe toxicities may make it difficult for the patient to reprioritize or reconceptualize QOL, particularly in the control arm if the drug is the current standard of care, because so much of the patient’s time is spent in suffering. Explicitly requiring analyses that consider response-shift effects for new drug applications would be an appropriate policy implication of the present work. Further, trialists should be encouraged to evaluate response-shift effects separately for the intervention and control arms. If a response-shift effect is detected, then the treatment findings should further incorporate or adjust for the response-shift effects. Accordingly, the FDA and EMA should integrate response-shift effects into their guidelines and operational standards for response-shift evaluation in future trials, particularly with regard to considerations of statistical power for the response-shift detection method(s) being used.

The present work had notable advantages, such as considering a large set of potential articles and reducing the set for further consideration based on clear and replicable criteria. It is possible, however, that this large set was limited by publication bias, that is, that null results were not deemed publishable and thus were not available for inclusion. This source of bias may have distorted the findings of the present work.

Conclusions

In summary, this scoping review identified 25 randomized-controlled clinical trials that addressed response-shift effects. A subset of 17 was retained after applying exclusion criteria, of which 10 were adequately powered for the statistical methodology used. These papers generally documented a larger treatment effect after considering response-shift effects. This work thus demonstrates that response shift affects clinical-trial outcomes, and supports the recommendation that current and future researchers incorporate methods to detect response shift when reporting results, especially null results. The formal consideration of response-shift effects in clinical-trials research will thus not only improve estimation of treatment effects, but will also integrate the inherent healing process of treatments. Adaptation is part of a positive outcome process, and thus should be central to clinical-trials analyses.

Availability of data and materials

The data used in these analyses are publicly available publications. Summary tables may be shared upon request.

Abbreviations

EMA:

European Medicines Agency

FDA:

Food and Drug Administration

QOL:

Quality-of-life

References

  1. Wilson IB (1999) Clinical understanding and clinical implications of response shift. Soc Sci Med 48(11):1577–1588

  2. Rothermund K, Brandtstadter J (2003) Coping with deficits and losses in later life: from compensatory action to accommodation. Psychol Aging 18(4):896–905

  3. Richards TA, Folkman S, Schwartz CE, Sprangers MAG (2000) Response shift: a coping perspective. In: Adaptation to changing health: response shift in quality-of-life research. American Psychological Association, Washington, pp 25–36

  4. Wrosch C, Scheier MF (2003) Personality and quality of life: the importance of optimism and goal adjustment. Qual Life Res 12(Suppl. 1):59–72

  5. Ubel PA, Loewenstein G, Jepson C (2003) Whose quality of life? A commentary exploring discrepancies between health state evaluations of patients and the general public 1. Qual Life Res 12(6):599–607

  6. Westerman MJ, Sprangers MA, Groen HJ, van der Wal G, Hak T (2007) Small-cell lung cancer patients are just “a little bit” tired: response shift and self-presentation in the measurement of fatigue. Qual Life Res 16(5):853–861

  7. Sprangers MAG, Schwartz CE (1999) Integrating response shift into health-related quality of life research: a theoretical model. Soc Sci Med 48(11):1507–1515

  8. Schwartz CE, Sprangers MAG (1999) Methodological approaches for assessing response shift in longitudinal health-related quality-of-life research. Soc Sci Med 48:1531–1548. https://doi.org/10.1016/s0277-9536(99)00047-7

  9. Rapkin BD, Schwartz CE (2004) Toward a theoretical model of quality-of-life appraisal: Implications of findings from studies of response shift. Health Qual Life Outcomes 2(1):14

  10. Rapkin BD, Schwartz CE (2019) Advancing quality-of-life research by deepening our understanding of response shift: a unifying theory of appraisal. Qual Life Res 28(10):2623–2630. https://doi.org/10.1007/s11136-019-02248-z

  11. Schwartz CE, Rohde G, Biletch E, Stuart RB, Huang I, Lipscomb J et al (2022) If it’s information, it’s not “bias”: a scoping review and proposed nomenclature for future response-shift research. Qual Life Res 31:2247–2257

  12. Barclay-Goddard R, Epstein JD (2009) Response shift: A brief overview and proposed research priorities. Qual Life Res 18:335–346

  13. Sajobi TT, Brahmbatt R, Lix LM, Zumbo BD, Sawatzky R (2018) Scoping review of response shift methods: current reporting practices and recommendations. Qual Life Res 27(5):1133–1146

  14. Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E (2018) Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 18(1):1–7

  15. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D et al (2018) PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 169(7):467–473

  16. Schwartz CE, Bode R, Repucci N, Becker J, Sprangers MAG, Fayers PM (2006) The clinical significance of adaptation to changing health: a meta-analysis of response shift. Qual Life Res 15:1533–1550

  17. Schwartz CE, Sprangers MAG, Carey A, Reed G (2004) Exploring response shift in longitudinal data. Psychol Health 19(1):51–69

  18. Schwartz CE, Rapkin BD (2012) Understanding appraisal processes underlying the thentest: a mixed methods investigation. Qual Life Res 21:381–388. https://doi.org/10.1007/s11136-011-0023-4

  19. Ahmed S, Mayo NE, Corbiere M, Wood-Dauphinee S, Hanley J, Cohen R (2005) Change in quality of life of people with stroke over time: true change or response shift? Qual Life Res 14:611–627. https://doi.org/10.1007/s11136-004-3708-0

  20. Cohen J (1988) Statistical power analysis for the behavioral sciences. Lawrence Erlbaum Associates, Hillsdale

  21. Cohen J (1992) A power primer. Psychol Bull 112:155–159

  22. Wolf EJ, Harrington KM, Clark SL, Miller MW (2013) Sample size requirements for structural equation models: an evaluation of power, bias, and solution propriety. Educ Psychol Meas 73(6):913–934

  23. Kaplan D (1996) Statistical power in SEM. In: Schumacker RE, Lomax RG (eds) A beginner’s guide to structural equation modeling. Lawrence Erlbaum Associates, Publishers, Mahwah, pp 100–117

  24. Saris WE, Satorra A (1993) Power evaluations in structural equation models. In: Bollen KA, Long JS (eds) Testing structural equation models, vol 154. Sage Publications, Newbury Park, pp 181–204

  25. Norman GR, Sloan JA, Wyrwich KW (2003) Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care 41(5):582–592

  26. Schwartz CE, Sendor M (1999) Helping others helps oneself: response shift effects in peer support. Soc Sci Med 48(11):1563–1575

  27. Bernhard J, Hurny C, Maibach R, Herrmann R, Laffer U (1999) Quality of life as subjective experience: reframing of perception in patients with colon cancer undergoing radical resection with or without adjuvant chemotherapy. Swiss Group for Clinical Cancer Research (SAKK). Ann Oncol 10(7):775–782

  28. Bernhard J, Lowy A, Maibach R, Hürny C (2001) Response shift in the perception of health for utility evaluation an explorative investigation. Eur J Cancer 37(14):1729–1735. https://doi.org/10.1016/s0959-8049(01)00196-4

  29. Ahmed S, Mayo NE, Wood-Dauphinee S, Hanley JA, Cohen SR (2004) Response shift influenced estimates of change in health-related quality of life poststroke. J Clin Epidemiol 57:561–570

  30. Ring L, Höfer S, Heuston F, Harris D, O’Boyle CA (2005) Response shift masks the treatment impact on patient reported outcomes (PROs): the example of individual quality of life in edentulous patients. Health Qual Life Outcomes 3:55. https://doi.org/10.1186/1477-7525-3-55

  31. Bernhard J, Lowy A, Mathys N, Herrmann R, Hürny C (2004) Health related quality of life: a changing construct? Qual Life Res 13(7):1187–1197

  32. Mayo NE, Scott S (2011) Evaluating a complex intervention with a single outcome may not be a good idea: an example from a randomised trial of stroke case management. Age Ageing 40(6):718–724

  33. Mollerup A, Johansen JD (2015) Response shift in severity assessment of hand eczema with visual analogue scales. Contact Dermatitis 72(3):178–183

  34. Mayo N, Scott C, Ahmed S (2009) Case management post-stroke did not induce response shift: the value of residuals. J Clin Epidemiol 62:1148–1156

  35. Ahmed S, Bourbeau J, Maltais F, Mansour A (2009) The Oort structural equation modeling approach detected a response shift after a COPD self-management program not detected by the Schmitt technique. J Clin Epidemiol 62:1165–1172. https://doi.org/10.1016/j.jclinepi.2009.03.015

  36. Robertson C, Langston AL, Stapley S, McColl E, Campbell MK, Fraser WD et al (2009) Meaning behind measurement: self-comparisons affect responses to health-related quality of life questionnaires. Qual Life Res 18(2):221–230

  37. Gandhi PK, Ried LD, Kimberlin CL, Kauf TL, Huang IC (2013) Influence of explanatory and confounding variables on HRQoL after controlling for measurement bias and response shift in measurement. Expert Rev Pharmacoecon Outcomes Res 13(6):841–851. https://doi.org/10.1586/14737167.2013.852959

  38. Nirenberg T, Longabaugh R, Baird J, Mello MJ (2013) Treatment may influence self-report and jeopardize our understanding of outcome. J Stud Alcohol Drugs 74(5):770–776

  39. Sajobi TT, Fiest KM, Wiebe S (2014) Changes in quality of life after epilepsy surgery: the role of reprioritization response shift. Epilepsia 55:1331–1338

  40. Murata T, Suzukamo Y, Shiroiwa T, Taira N, Shimozuma K, Ohashi T et al (2020) Response shift-adjusted treatment effect on health-related quality of life in a randomized controlled trial of taxane versus S-1 for metastatic breast cancer: structural equation modeling. Value Health 23:768–774

  41. Sanders JJ, Miller K, Desai M, Geerse OP, Paladino J, Kavanagh J et al (2020) Measuring goal-concordant care: results and reflections from secondary analysis of a trial to improve serious illness communication. J Pain Symptom Manag 60(5):889–897

  42. Schwartz CE, Stark RB, Stucky BD, Li Y, Rapkin BD (2021) Response-shift effects in neuromyelitis optica spectrum disorder: estimating response-shift-adjusted scores using equating. Qual Life Res 1–10

  43. Schwartz CE, Stark RB, Stucky BD (2021) Response-shift effects in neuromyelitis optica spectrum disorder: a secondary analysis of clinical trial data. Qual Life Res 30(5):1267–1282

  44. Schwartz CE, Stark RB, Borowiec K, Nolte S, Myren KJ (2021) Norm-based comparison of the quality-of-life impact of Ravulizumab and Eculizumab in Paroxysmal Nocturnal Hemoglobinuria. Orphanet J Rare Dis 16:389. https://doi.org/10.1186/s13023-021-02016-8

  45. Verdam M, Van Ballegooijen W, Holtmaat C, Knoop H, Lancee J, Oort F et al (2021) Re-evaluating randomized clinical trials of psychological interventions: impact of response shift on the interpretation of trial results. PLoS ONE 16(5):e0252035

  46. Schwartz CE, Wheeler HB, Hammes B, Basque N, Edmunds J, Reed G et al (2002) Early intervention in planning end-of-life care with ambulatory geriatric patients: results of a pilot trial. Arch Intern Med 162(14):1611–1618

  47. Ahmed S, Mayo NE, Wood-Dauphinee S, Hanley JA, Cohen SR (2005) The structural equation modeling technique did not show a response shift, contrary to the results of the then test and the individualized approaches. J Clin Epidemiol 58:1125–1133. https://doi.org/10.1016/j.jclinepi.2005.03.003

  48. Machuca C, Vettore MV, Krasuska M, Baker SR, Robinson PG (2017) Using classification and regression tree modelling to investigate response shift patterns in dentine hypersensitivity. BMC Med Res Methodol 17(1):120. https://doi.org/10.1186/s12874-017-0396-3

  49. Hoerger M, Perry LM, Gramling R, Epstein RM, Duberstein PR (2017) Does educating patients about the Early Palliative Care Study increase preferences for outpatient palliative cancer care? Findings from Project EMPOWER. Health Psychol 36(6):538

  50. Ahmed S, Mayo N, Wood-Dauphinee S, Hanley J, Cohen R (2005) Using the patient generated index to evaluate response shift post-stroke. Qual Life Res 14:2247–2257

  51. Machuca C, Vettore MV, Robinson PG (2020) How peoples’ ratings of dental implant treatment change over time? Qual Life Res 29(5):1323–1334. https://doi.org/10.1007/s11136-019-02408-1

  52. Schwartz CE, Feinberg RG, Jilinskaia E, Applegate JC (1999) An evaluation of a psychosocial intervention for survivors of childhood cancer: paradoxical effects of response shift over time. Psychooncology 8:344–354. https://doi.org/10.1002/(sici)1099-1611(199907/08)8:4%3c344::aid-pon399%3e3.0.co;2-t

  53. Barclay-Goddard R, Epstein JD, Mayo NE (2009) Response shift: a brief overview and proposed research priorities. Qual Life Res 18(3):335–346. https://doi.org/10.1007/s11136-009-9450-x

  54. Laird NM, Ware JH (1982) Random-effects models for longitudinal data. Biometrics 38(4):963–974

  55. Kolen MJ, Brennan RL (1995) Test equating: methods and practices. Springer, New York

  56. Rothman KJ, Greenland S (1998) Case–control studies. In: Rothman KJ, Greenland S (eds) Modern epidemiology, vol 3. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia, pp 93–114

Acknowledgements

We gratefully acknowledge Bruce Rapkin, Ph.D., for helpful discussions of earlier drafts of the manuscript, and the International Society for Quality of Life Research for enabling the collaboration and growth of the Response Shift Special Interest Group. This paper was selected as one of four Cutting Edge Plenary talks for ISOQOL 2022 (October 20, 2022) in Prague, CZ.

Funding

This work was not funded by any external agency.

Author information

Contributions

CES, ICH, GR, and RLS designed the research study. CES, ICH, GR, and RLS analyzed the data. CES wrote the paper and CES, ICH, GR, and RLS edited the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Carolyn E. Schwartz.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Appendix

See Table 2.

Table 2 Clinical trials articles found after initial inclusion/exclusion criteria applied
