Comparability of a provisioned device versus bring your own device for completion of patient-reported outcome measures by participants with chronic obstructive pulmonary disease: quantitative study findings

Abstract

Objective

To quantitatively compare equivalence and compliance of patient-reported outcome (PRO) data collected via provisioned device (PD) versus bring your own device (BYOD).

Methods

Participants with stable chronic obstructive pulmonary disease (COPD) completed the EXAcerbations of Chronic Pulmonary Disease Tool (EXACT®) daily, and the COPD Assessment Test™ (CAT) and Patient Global Impression of Severity (PGIS) of COPD weekly, on either PD or BYOD for 15 days, then switched device types for a further 15 days. The EXACT was scored using the Evaluating Respiratory Symptoms in COPD (E-RS®: COPD) algorithm, and equivalence was assessed using intraclass correlation coefficients (ICCs) adjusting for cross-over sequence, period, and time. Two one-sided tests (TOSTs) used the adjusted means from the ICC model, with 10%, 20%, and 40% of the total score tested as equivalence margins. Compliance and comfort with technology were assessed. Equivalence across 3 device screen sizes was assessed following the second completion period.

Results

Participants (N = 64) reported high comfort with technology, with 79.7% reporting being “quite a bit” or “very” comfortable. Weekly compliance was high (BYOD = 89.7–100%; PD = 76.9–100%). CAT and E-RS: COPD scores correlated well with PGIS (r > 0.50) and demonstrated equivalence between PD and BYOD completion (ICC = 0.863–0.908). TOST equivalence was achieved within 10% of the total score (p > 0.05). PRO measure scores were equivalent across 3 different screen sizes (ICC = 0.972–0.989).

Conclusions

Measure completion was high and scores equivalent between PD and BYOD, supporting use of BYOD in addition to PD for collecting PRO data in COPD studies and in demographically diverse patient populations.

Highlights

  • Historically, provisioned handheld devices [PD] have been provided to participants to enter patient-reported outcome (PRO) data during a clinical trial. Allowing participants to report data using their own smartphone or other internet-connected device (known as ‘bring your own device’ [BYOD]) is of growing interest.

  • Measure completion was high for both device types when assessing daily and weekly compliance. Scores were found to be equivalent for both the Evaluating Respiratory Symptoms in COPD and COPD Assessment Test™ between PD and BYOD using 2 different methods. This study supports use of BYOD in addition to PD for collecting PRO data in COPD studies and contributes evidence that BYOD may be successfully employed in demographically diverse patient populations.

Background

Patient-reported outcome (PRO) measures assess disease-related symptoms and functional impacts, which, in a clinical trial setting, are needed to understand the clinical benefit of drugs and other medical products from the patient’s perspective. Moreover, when studying a disease or condition with a high degree of symptom variability, daily PRO measure collection is often required to capture this variability and obtain a representative picture of the experience of people living with the disease. In the past, daily PRO data collection involved the participant completing a paper version of the PRO measure each day. Paper-based diary formats suffer from many issues around completion and missing data, such as unanswered items and retrospective completion. To address such concerns, electronic formats of PRO measures (ePRO) have been developed that provide better control over measure completion through the use of alerts, reminders, and completion windows that restrict PRO data entry to a specified time of day and thus prevent retrospective completion.

Modes of ePRO data collection have also evolved over time. Historically, handheld electronic devices were provided to participants to report PRO data in clinical trials and other research studies. These are known as provisioned devices (PD). However, the desire to reduce drug development costs and participant burden, combined with improved technology and greater access, has led to increasing interest in having participants use their own devices (‘bring your own device’ [BYOD]) to collect PRO data. As advances in technology have led to increased data security for storage and transfer of information for both PD and BYOD, the BYOD approach is becoming an increasingly viable option for large-scale, interventional trials. In addition, it is increasingly possible to ensure that the presentation of items is consistent regardless of the device type used. A growing body of evidence supports measurement comparability between paper and a variety of electronic formats, underlining the hypothesis that small presentational or format changes between devices have little to no impact on the integrity of PRO data [1].

Previous studies have explored the feasibility of using a BYOD approach for PRO data collection and have demonstrated high compliance with daily completion, high participant acceptance, and score equivalence between PD and BYOD for common response scale types [2,3,4,5]. However, they did not assess score equivalence and compliance between PD and BYOD in a longitudinal study design. Thus, the primary objective of this study was to quantitatively compare PRO data collected longitudinally via PD versus BYOD in terms of compliance, score agreement and equivalence. In addition, the secondary objective was to cross-sectionally evaluate the comparability of PRO scores collected on screens of varying sizes.

Methods

Participants

Participants with a clinical diagnosis of chronic obstructive pulmonary disease (COPD) were recruited from 4 clinical sites in the United States (US). Participants were screened on the basis of inclusion and exclusion criteria less restrictive than, but similar to, those likely to be used in a COPD clinical trial. Less restrictive criteria were used because this was a non-interventional study for which stable participants were being sought and no treatment effect was being evaluated. The criteria used in this study were similar to the criteria used in the development and testing of the EXACT®:

Inclusion criteria

  1. Age ≥ 40 years

  2. Established clinical diagnosis of COPD in accordance with the joint American Thoracic Society/European Respiratory Society definition

  3. Forced expiratory volume in one second (FEV1)/forced vital capacity ratio of < 0.70 post-bronchodilator

  4. FEV1 < 80% of predicted

  5. Current or former smoker with a history of at least 10 pack years

  6. Clinical status and treatment unlikely to change in the next 30 days in the opinion of the investigator or referring clinician

  7. Owns a compatible smartphone for the BYOD component of the study

  8. Able to read, comprehend, and complete questionnaires and interviews in US English

  9. Able to provide written informed consent prior to undertaking any study-related procedures

Exclusion criteria

  1. Had a COPD exacerbation, including hospitalization or hospitalization for pneumonia, within the previous 90 days

  2. Professionally involved in this study, or an immediate family member of staff working on this study

  3. Participated in any BYOD study within the previous 90 days

  4. Had learning, emotional, or cognitive difficulties, or mental illness, that might limit the ability to meaningfully complete the questionnaires

  5. Showed evidence of alcohol or drug abuse

Study design

In this observational, cross-over study (Fig. 1), participants were assigned to use one device type (PD or BYOD) for 15 days before switching to the other device type for an additional 15 days. Participants were split into 2 groups, Group A and Group B, and attended 3 study visits. At Visit 1, participants in Group A received the PD (Samsung Galaxy 4 running Android v4.4.4 with a 5″ touchscreen) with the software already installed, whereas participants in Group B downloaded and activated the application on their own smartphone (either Android or iOS) (Additional file 1: Fig. S1). Both groups completed a short training module on their respective device (Fig. 1). A 6-h completion window was set for each participant during this visit, ranging from 3 h before to 3 h after the participant’s “usual” bedtime; it was not possible to complete the PRO measures outside this window.

Fig. 1 Study design. Note: The full study design included interviews with all study participants; the interview results are reported elsewhere [6]

Following Visit 1, and for the next 15 days, participants completed 3 PRO measures on the device type allocated at Visit 1. The EXAcerbations of Chronic Pulmonary Disease Tool (EXACT®) was completed every day during the participant’s completion window. The COPD Assessment Test™ (CAT) was completed immediately after the EXACT® every 7 days, at the start, middle, and end of each 15-day data collection period (i.e., Days 1, 8, and 15 of each cross-over period). The Patient Global Impression of Severity (PGIS) of COPD was completed after the EXACT® and CAT on the same 7-day schedule. If the participant had not yet completed the EXACT®, reminder notifications from the ePRO system were sent to the participant’s device 1 h before their “usual” bedtime, at bedtime, and 1 h after bedtime. Participants were able to turn off reminders on their own device (although they were not shown how to do so), but not on the PD. Participants did not receive reminders in any other modality (e.g., telephone calls from site staff) to complete the EXACT® or any other PRO measures included in the study.
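The scheduling rules described above can be illustrated with a short, hypothetical sketch (the actual ePRO system implementation is not described here; function names and the example bedtime are illustrative only):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the scheduling rules described above, not the
# ePRO vendor's implementation.
def completion_window(usual_bedtime: datetime) -> tuple[datetime, datetime]:
    """6-h window: from 3 h before to 3 h after the participant's usual bedtime."""
    return usual_bedtime - timedelta(hours=3), usual_bedtime + timedelta(hours=3)

def reminder_times(usual_bedtime: datetime) -> list[datetime]:
    """Reminders sent 1 h before bedtime, at bedtime, and 1 h after bedtime."""
    return [usual_bedtime + timedelta(hours=offset) for offset in (-1, 0, 1)]

bedtime = datetime(2022, 1, 10, 22, 0)  # example "usual" bedtime of 10:00 pm
print(completion_window(bedtime))       # 19:00 to 01:00 the next day
print(reminder_times(bedtime))          # 21:00, 22:00, 23:00
```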

At Visit 2, participants switched to the other device type and completed the same PRO measures for an additional 15 days, following the same schedule of assessments as in data collection Period 1. At this visit, participants were interviewed about their experience during the first data collection period and were given training on how to use the second device.

At Visit 3, participants returned to the site to complete all study procedures including returning the PD for those in Group B and a final interview. A subset of 20 participants was invited to complete a screen size equivalence test during which the EXACT® and CAT were completed on a provisioned smartphone, a provisioned tablet, and a provisioned laptop computer at the site with a distraction task (completion of a word search puzzle for approximately 30 min) between each device completion.

Study measures

EXAcerbations of Chronic Pulmonary Disease Tool (EXACT®)

The EXACT® is a 14-item patient-reported daily diary used to quantify and measure exacerbations of COPD [7]. It was scored using the Evaluating Respiratory Symptoms in COPD (E-RS®: COPD) algorithm, which derives a score from 11 of the EXACT items and measures the effect of treatment on the severity of respiratory symptoms in stable COPD. In this study, the entire 14-item EXACT was administered per license requirements; however, only the 11-item E-RS: COPD was evaluated.

The E-RS: COPD produces a daily Total Score (RS-Total Score; scored from 0 to 40) and three subscale scores: RS-Breathlessness (5 items; scored from 0 to 17), RS-Cough and Sputum (3 items; scored from 0 to 11), and RS-Chest Symptoms (3 items; scored from 0 to 12). A higher score on each domain indicates more severe respiratory symptoms. Daily total scores were computed using the scoring rules specified in the user manual. Weekly total scores were calculated as the sum of the daily scores divided by the number of diary days completed, with a minimum of 4 out of 7 days of data required to compute a weekly mean.
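The weekly averaging rule can be summarized in a minimal sketch (illustrative only; the licensed E-RS: COPD scoring manual governs the actual algorithm, and item-level scoring is not reproduced here):

```python
from typing import Optional, Sequence

# Minimal sketch of the weekly averaging rule described above; daily RS-Total
# scores (0-40) are assumed to have been derived per the scoring manual.
def weekly_rs_total(daily_totals: Sequence[Optional[float]]) -> Optional[float]:
    """Mean of available daily RS-Total scores, requiring >= 4 of 7 diary days."""
    observed = [score for score in daily_totals if score is not None]
    if len(observed) < 4:
        return None  # insufficient data; the weekly score is treated as missing
    return sum(observed) / len(observed)

week = [12.0, None, 14.0, 13.0, None, 15.0, 12.0]  # 5 of 7 diary days completed
print(weekly_rs_total(week))  # 13.2
```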

COPD assessment test (CAT)

The CAT comprises 8 questions that assess domains related to the impacts of COPD: cough, phlegm, chest tightness, breathlessness, activities at home, confidence to leave the home, sleep, and energy [8]. Each question is scored from 0 to 5, with higher scores indicating greater problems with the domain. A total score (0 to 40) is produced, with scores of 0 to 10, 11 to 20, 21 to 30, and 31 to 40 representing mild, moderate, severe, and very severe clinical impact, respectively.
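As an illustration of this scoring and banding (a sketch only; item wording and administration follow the licensed CAT):

```python
from typing import Sequence

# Illustrative sketch of CAT total scoring and the clinical-impact bands
# described above.
def cat_total(item_scores: Sequence[int]) -> int:
    """Sum of the 8 item scores, each 0-5, giving a total of 0-40."""
    assert len(item_scores) == 8 and all(0 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def cat_impact_band(total: int) -> str:
    """Map a CAT total score to its clinical impact category."""
    if total <= 10:
        return "mild"
    if total <= 20:
        return "moderate"
    if total <= 30:
        return "severe"
    return "very severe"

print(cat_impact_band(cat_total([3, 2, 3, 4, 2, 3, 2, 3])))  # total 22 -> "severe"
```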

Patient Global Impression of Severity (PGIS)

This single-item measure asks participants to report the perceived severity of their COPD symptoms over the previous 7 days on a 5-point scale from 0 (“none”) to 4 (“very severe”).

Analyses

Compliance analysis

Daily and weekly compliance with the EXACT was evaluated and calculated by device type. Compliance for the CAT was evaluated at each key assessment time point (Days 1, 8, and 15 of each cross-over period) and calculated by device type. For the EXACT, compliance was defined based on the number of missing and completed diary days in each weekly period, according to device type. Because EXACT® items could not be skipped, only form-level compliance was evaluated for the measure as a whole. As CAT items could be skipped, item-level compliance for the CAT was calculated at each key assessment time point.
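The weekly compliance summary can be organized as in the following sketch (a hypothetical illustration; the data structure and the ≥ 5-of-7-days threshold reported in the Results are assumptions about how the calculation was arranged):

```python
from typing import Mapping, Sequence

# Hypothetical sketch of the weekly EXACT compliance summary described above,
# computed separately for each device type.
def weekly_compliance(days_completed: Mapping[str, Sequence[bool]],
                      threshold: int = 5) -> float:
    """Percent of participants completing the diary on at least `threshold`
    of the 7 days in a given week."""
    compliant = sum(1 for flags in days_completed.values()
                    if sum(flags) >= threshold)
    return 100.0 * compliant / len(days_completed)

week1_byod = {                                            # participant ID -> 7 daily flags
    "P01": [True] * 7,                                    # 7 of 7 days
    "P02": [True, True, False, True, True, True, False],  # 5 of 7 days
    "P03": [False] * 7,                                    # missed the entire week
}
print(round(weekly_compliance(week1_byod), 1))  # 66.7
```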

Equivalence analysis

  (a) Descriptive statistics: PRO measure scores were summarized descriptively by cross-over period for top-level comparisons of equivalence.

  (b) Score agreement of E-RS: COPD and CAT scores between device types: A linear mixed effects model adjusting for cross-over sequence, period, and time (day/week) was used to derive the variance components for the calculation of the intraclass correlation coefficients (ICC). An ICC(2,1) model was used to assess the absolute agreement of scores arising from the 2 device types in this cross-over study [9]. The PGIS was included as an anchor measure to define the stable population (those whose score did not change between assessment time points) for these equivalence analyses.

  (c) Equivalence between device types: Two one-sided test (TOST) analyses were conducted on both the unadjusted means and the adjusted means derived from the mixed effects model. The 90% confidence intervals (CIs) around the difference in score means between the devices were tested against equivalence margins of 10%, 20%, and 40% of the total score, equating to margins of 4, 8, and 16 points for both the E-RS: COPD and the CAT (a computational sketch of the ICC and TOST steps follows this list).

  (d) Equivalence between screen sizes (substudy analysis): For the CAT and E-RS: COPD scores, agreement across the 3 different screen sizes (PD [smartphone], tablet, and laptop) was assessed using the ICC(2,1) model. Additionally, the mean scores, the differences between the means for the screen sizes, and the associated standard errors of the mean and 95% CIs were reported.
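The ICC and TOST steps referenced above can be illustrated with a brief computational sketch (a simplified illustration under common formulations, not the study’s analysis code: it assumes an absolute-agreement ICC(2,1) computed from participant, device, and residual variance components of the mixed model, and a TOST decision based on whether the 90% CI of the mean difference lies within the prespecified margin; all numeric values are placeholders):

```python
# Simplified sketch of the ICC(2,1) and TOST decision rules; variance
# components and CIs would come from the mixed effects model fit elsewhere.
def icc_2_1(var_participant: float, var_device: float, var_error: float) -> float:
    """Absolute-agreement ICC(2,1): participant variance over total variance,
    where device-type variance counts against agreement."""
    return var_participant / (var_participant + var_device + var_error)

def tost_equivalent(ci_lower: float, ci_upper: float, margin: float) -> bool:
    """Equivalence is concluded when the 90% CI of the mean score difference
    lies entirely within +/- margin (e.g., 4 points = 10% of a 40-point scale)."""
    return -margin < ci_lower and ci_upper < margin

print(round(icc_2_1(var_participant=30.0, var_device=0.5, var_error=4.0), 3))  # 0.87
print(tost_equivalent(ci_lower=-0.9, ci_upper=1.5, margin=4.0))                # True
```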

Results

Demographic and clinical characteristics

Sixty-four participants were enrolled (mean age [SD]: 59.0 [10.55]; 65.6% female; 51.6% Black/African American) (Table 1). Participants in Group A (n = 23) and Group B (n = 41) were of similar mean [SD] ages (Group A = 57.5 years [11.33]; Group B = 59.8 years [10.13]) and the gender distribution was also similar (Group A n = 14 female, 60.9%; Group B n = 28 female, 68.3%). Race distributions amongst the two groups did not reflect the overall study sample. Group A participants were more likely to be White (n = 13, 56.5%) whereas Group B participants were more likely to be Black or African American (n = 27, 65.9%).

Table 1 Demographic and clinical characteristics

Participants in the overall sample had been diagnosed with COPD for an average of 7.2 years prior to enrollment; half of the sample had no prior exacerbations (n = 32, 50.0%), and around a third reported a single previous exacerbation (n = 20, 31.3%). Participants reported high comfort with technology in general, with 79.7% reporting being “quite a bit” or “very” comfortable (Additional file 1: Table S1). Descriptions of the BYOD devices are included in Additional file 1: Table S2.

Compliance

For the first 15 days of the study (data collection Period 1), almost all Group A participants (who were using PD) completed the EXACT on at least 5 days of Week 1 (95.2%; n = 20), compared with 89.7% (n = 35) of Group B participants (using BYOD) (Additional file 1: Figs. S2 and S3). Compliance in Week 2 followed a similar pattern but was slightly higher among participants in both groups. One Group B participant in Week 1 and 2 Group B participants in Week 2 did not complete the EXACT on any of the 7 days.

Following the device cross-over, in data collection Period 2, 100% of Group A participants (now using BYOD) completed the EXACT on at least 5 days in Weeks 1 and 2 of Period 2. Compliance among Group B participants (now using a PD) was lower than Group A participants, with 76.9% (n = 30) and 89.2% (n = 34) completing the EXACT on at least 5 days during Week 1 and 2, respectively. Three Group B participants in Week 1 and 4 Group B participants in Week 2 did not complete the EXACT on any of the respective 7 days. Additional file 1: Table S3 presents the distributions of percent days completed and percent of participants completing each week.

Given that participants were able to skip CAT items (with confirmation that skipping was intentional), item-level compliance was assessed at all key assessment time points in participants who completed at least 1 CAT item, to identify any items that may have had a non-random skip rate. High item-level compliance (94.1 to 100%) was found among participants who completed the CAT (Additional file 1: Table S4). In Period 1, Items 3 (cough), 4 (breathlessness), 6 (confidence to leave home), and 7 (sleep) all showed some level of missing data; the missingness for these items was, however, minimal. Item compliance was higher in Period 2 of the study, where all participants achieved 100% item compliance.

Equivalence: descriptive assessment

For each study week across all 4 weeks, participants’ overall mean [SD] E-RS: COPD scores were similar (14.73 [6.800]–15.28 [6.497]), suggesting score equivalence between devices. Table 2 shows the means for Group A and Group B independently; means across the cross-over periods remained similar within both groups.

Table 2 ERS-COPD and CAT equivalence descriptive statistics

Overall, CAT scores between Period 1 and Period 2 (Days 1, 8, and 15, respectively) were similar (Table 2). For Group A and Group B independently, means between the 2 cross-over periods showed minor differences with Group B showing a lower mean after switching to the second device type at Period 2 Day 1 while Group A showed a higher mean at Period 2 Day 1 (Group A: Period 1 Day 1 = 20.7 [8.88], Period 2 Day 1 = 23.4 [8.00]; Group B: Period 1 Day 1 = 19.4 [7.36], Period 2 Day 1 = 18.2 [8.51]).

Device order

Information on order preference collected in this participant population and presented in Newton et al. [6] showed that participants preferred their second device. Specifically, participants who started the study on BYOD preferred the PD (36.36%), and participants who started the study on the PD preferred BYOD (30.91%; p = 0.0076). Based on this analysis, the TOST and ICC estimations were derived with device included in each model.

Equivalence: ICC based

When E-RS: COPD weekly and CAT scores were adjusted for order of device completion, study period, and assessment time point in a full cross-over design using a mixed effects model, agreement across device types was high for the E-RS: COPD weekly average score (ICC(2,1) = 0.878) and the CAT score (ICC(2,1) = 0.864); see Table 3. For Stable Population 1 (SP1), defined as participants who indicated no change from baseline to Day 30 on the PGIS, consistency across device types was even higher for the E-RS: COPD (ICC(2,1) = 0.895). Stable Population 2 (SP2), defined as participants who indicated minimal (|1|) or no change from baseline to Day 30 on the PGIS, also showed high reliability, with a slightly lower ICC (ICC(2,1) = 0.863) reflecting the increased variance expected with this sample.

Table 3 ICC cross-over

Similarly, high agreement statistics were determined for the CAT when assessed in a full cross-over design using the SP1 (ICC(2,1) = 0.908) and SP2 (ICC(2,1) = 0.874) populations. The CAT scores were also consistent between devices on Day 15 of Period 1 and Day 1 of Period 2 (Day 16), with ICC(2,1) = 0.836. These two days were selected because they were approximately 24 h apart, which reduced the chance of a change in the participant’s COPD status when moving from one device type to the next.

Equivalence: TOSTs

When analyses were conducted using a full cross-over design using all available data and accounting for order of device completion, study period, and assessment time point, TOST analysis showed that the 2 device types were equivalent for both the E-RS: COPD and the CAT scores. Ninety percent (90%) CIs around the differences between scores on the 2 device types fell within the ± 10% equivalence levels. This result shows that both measures were equivalent within a ± 10% (4-point) equivalence margin (Table 4).

Table 4 TOSTs cross-over

Screen size equivalence (substudy)

Both the E-RS: COPD and CAT measures performed consistently across the 3 device screen sizes, with high ICCs reported for both the E-RS: COPD (ICC(2,1) = 0.989) and the CAT (ICC(2,1) = 0.972), see Table 5.

Table 5 Screen size substudy descriptive statistics

Mean differences between the different screen sizes were also assessed for the E-RS: COPD and CAT. The differences and associated 95% CIs support the ICC results in showing that participants responded consistently across all device screen sizes.

Discussion

There were some limitations to the study. Device type assignment was not randomized due to technical issues with PDs during enrollment (see Newton et al. [6] for more details), which led to differences between Group A and Group B in sample size and demographics that appeared to affect the analysis of compliance. Therefore, the study was not able to answer the question of whether compliance is better with PD or BYOD because of these confounding factors. In the second part of this cross-over study, there was an increase in the number of Group B participants (n = 7/41; 17%) missing an entire week after switching to the PD, whereas Group A compliance improved in Period 2. We were not able to determine the cause of the poorer compliance in Period 2 for Group B. In addition, subgroup analyses revealed a possible order effect, in that participants preferred the device they used in the second study period; some planned analyses of device compliance by subgroup were therefore not conducted. As a result, we concluded that compliance was adequate using both the PD and BYOD approaches but did not find one better than the other. In the screen size equivalence test, the time interval between completions on the 3 different screen sizes was short, which may have contributed to the resulting high ICCs. The study included only participants diagnosed with COPD because the content of the PRO measures was relevant to them; therefore, the results may not be generalizable to respondents with other diagnoses.

This study found that diary completion was high for both device types, whether assessed as daily compliance or against a weekly threshold of 5 or more days of complete diary data, and neither device type performed better or worse than the other. A key aim of this study was to assess score equivalence of data obtained from the 2 device types. Descriptively, the overall mean scores for the E-RS: COPD and the CAT before and after the cross-over were very similar between device types, though means for Group B were slightly lower, reflecting a less severe sample. Group B was also less compliant in both study periods. Despite these minor differences in group means, the TOST analysis showed that the score differences fell within a 20% margin at the majority of time points, and the ICC analysis showed that score concordance between the two device types was very high for the E-RS: COPD weekly average score and the CAT. Post hoc tests, which adjusted for the cross-over design of the study, showed high ICCs for COPD measure scores between the 2 device types and that scores on both the E-RS: COPD and the CAT were equivalent at the 10% level between PD and BYOD. Overall, the evidence supports score equivalence between the 2 device types, even when considering the additional variance introduced by the unequal sample sizes. Score equivalence was demonstrated across both the PD and BYOD devices, with CIs around the mean score differences for all measures falling within the predefined ranges for equivalence. The screen size equivalence substudy, in which the E-RS: COPD and CAT were completed on a PD, tablet, and laptop on the same day, demonstrated that mean scores and standard errors were similar between device pairs and that confidence intervals for the mean differences were contained within the ± 10% equivalence acceptance intervals. The quantitative evidence, which met each of the predefined criteria, together with the qualitative interview data (presented in Newton et al. [6]), in which participants reported feeling comfortable with both devices and did not indicate a clear preference for either, supports cross-platform equivalence of these PRO measures.

Conclusions

The COPD population was chosen intentionally, given the severity of this chronic illness and typical age of persons who have it (average age 65 years). If PRO data and compliance data are similar within this population, these results could help support use of BYOD in less severe participants across a wider age range in future studies. Future research may also be needed to evaluate BYOD for event-driven diaries (e.g., migraine headaches, bowel movements) which our study did not address. This study supports the use of BYOD as a potential addition to PD for collecting PRO data in COPD studies and contributes evidence that BYOD may be employed to collect PRO data in demographically diverse patient populations. The findings from this study should encourage continued use and testing of BYOD in clinical studies, including evaluation in clinical trials, with the ultimate aim of providing study participants and sponsors with expanded remote data collection options.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available, but are available from the corresponding author on reasonable request.

Abbreviations

BYOD:

Bring your own device

CAT:

COPD assessment test

CI:

Confidence intervals

COPD:

Chronic obstructive pulmonary disease

ePRO:

Electronic formats of PRO

E-RS: COPD:

Evaluating respiratory symptoms in COPD

EXACT:

EXAcerbations of Chronic Pulmonary Disease Tool

FEV1:

Forced expiratory volume in one second

ICC:

Intraclass correlation coefficient

PD:

Provisioned device

PGIS:

Patient Global Impression of Severity

PRO:

Patient-reported outcome

SP1:

Stable population 1

SP2:

Stable population 2

TOST:

Two one-sided tests

US:

United States

References

  1. Byrom B, Gwaltney C, Slagle A, Gnanasakthy A, Muehlhausen W (2019) Measurement equivalence of patient-reported outcome measures migrated to electronic formats: a review of evidence and recommendations for clinical trials and bring your own device. Ther Innov Regul Sci 53(4):426–430

  2. Byrom B, Doll H, Muehlhausen W et al (2018) Measurement equivalence of patient-reported outcome measure response scale types collected using bring your own device compared to paper and a provisioned device: results of a randomized equivalence trial. Value Health 21(5):581–589

  3. Michaud K, Schumacher R, Wahba K, Moturu S (2014) FRI0201 are rheumatic disease patient reported outcomes collected passively and directly through smart phones feasible? Early results from a nation-wide pilot study. Ann Rheum Dis 73(Suppl 2):455–456

  4. Pfaeffli L, Maddison R, Jiang Y, Dalleck L, Löf M (2013) Measuring physical activity in a cardiac rehabilitation population using a smartphone-based questionnaire. J Med Internet Res 15(3):e61

  5. Torous J, Staples P, Shanahan M et al (2015) Utilizing a personal smartphone custom app to assess the patient health questionnaire-9 (PHQ-9) depressive symptoms in patients with major depressive disorder. JMIR Mental Health 2(1):e8

  6. Newton L, Knight-West O, Eremenco S et al (2022) Comparability of a provisioned device versus bring your own device for completion of patient-reported outcome measures by participants with chronic obstructive pulmonary disease: qualitative interview findings. J Patient Rep Outcomes 6(1):1–11

  7. Leidy NK, Murray LT (2013) Patient-reported outcome (PRO) measures for clinical trials of COPD: the EXACT and E-RS. COPD J Chronic Obstr Pulm Dis 10(3):393–398

  8. Jones P, Harding G, Berry P, Wiklund I, Chen W, Leidy NK (2009) Development and first validation of the COPD Assessment Test. Eur Respir J 34(3):648–654

  9. Shrout PE, Fleiss JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychol Bull 86(2):420–428

Acknowledgements

The authors received writing and editorial support in the preparation of this report from Clinical Outcomes Solutions, Folkestone, UK, Tucson, AZ, USA. The authors, however, directed and are fully responsible for all content and editorial decisions for this report. The authors gratefully acknowledge the other members of the BYOD Project Team for their guidance and constructive feedback during the conduct of this research. Critical Path Institute is supported by the Food and Drug Administration (FDA) of the Department of Health and Human Services (HHS) and is 55% funded by the FDA/HHS, totaling $17,612,250, and 45% funded by non-government source(s), totaling $14,203,111. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, FDA/HHS or the U.S. Government.

Funding

Funding for this project was provided by Critical Path Institute’s Patient-Reported Outcome (PRO) Consortium and Electronic Clinical Outcome Assessment (eCOA) Consortium along with additional contributions from the following PRO Consortium member firms: Amgen Inc.; Bayer Pharma AG; Daiichi Sankyo, Inc.; GlaxoSmithKline; Ironwood Pharmaceuticals, Inc.; Janssen Pharmaceutical Companies of Johnson and Johnson; Eli Lilly and Company; Merck Sharp & Dohme Corp.; Sanofi; and Pfizer, Inc. Support for the Patient-Reported Outcome (PRO) Consortium comes from membership fees paid by members of the PRO Consortium (https://c-path.org/programs/proc/). Support for the Electronic Clinical Outcome Assessment (eCOA) Consortium comes from membership fees paid by members of the eCOA Consortium (https://c-path.org/programs/ecoac/).

Author information

Contributions

All authors revised the manuscript critically for important intellectual content and approved the final manuscript. At the time this research was conducted, PCGG was an employee of Clinical Outcomes Solutions, Folkestone, UK; MC was an employee of Critical Path Institute, Tucson, AZ, USA; DSR was an employee of Ironwood Pharmaceuticals, Cambridge, MA, USA; BB was an employee of ICON Clinical Research, Marlow, UK; PO was an employee of CRF Health, London, UK; and SV was an employee of GlaxoSmithKline, Collegeville, PA, USA. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stacie Hudgens.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Table S1.

Participant experience with technology. Table S2. Participant BYOD smartphone description summary. Table S3. Number of missing days of diary data. Table S4. Item level compliance for the CAT. Fig. S1. Provisioned device application. Fig. S2. Number of missing days of EXACT completions in period 1 (weeks 1 and 2). Fig. S3. Number of missing days of EXACT completions in period 2 (weeks 1 and 2).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Hudgens, S., Newton, L., Eremenco, S. et al. Comparability of a provisioned device versus bring your own device for completion of patient-reported outcome measures by participants with chronic obstructive pulmonary disease: quantitative study findings. J Patient Rep Outcomes 6, 119 (2022). https://doi.org/10.1186/s41687-022-00521-3
