Clinical Trials 2
Presenter: Nancy Flournoy
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
Inference with Informative Designs
Informative designs are increasingly common in experiments with sequential accrual of subjects, for example when observed response data are used in early stopping decisions or to change sampling or treatment allocation rules. In inference following an informative design, it is common to ignore the fact that the design was informative and to treat it as ancillary. Alternatively, inference can be based on fully unconditional probabilities or can condition on an ancillary statistic. To elucidate the situation, we present these alternatives explicitly for a two-stage design in which data accumulated in the first stage are used to select the design for the second stage. Common inference procedures approximate the variance of the parameter estimates by the inverse of the expected Fisher information, and often rely on normal distributional assumptions to construct confidence intervals. Generally, however, even assuming a normal model with unknown mean, the distribution of the sample mean has not been shown to be normal after adaptation, except when the non-ancillarity of the design is completely ignored. For our example, we show that conditioning on a non-ancillary statistic reduces the variance of the mean. Then, to decrease reliance on normality assumptions, we develop bootstrap methods that condition on the non-ancillary statistic that defines the second-stage design, and we present alternative methods that demonstrate the variance/bias tradeoff.
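A minimal sketch of the conditioning idea, under illustrative assumptions (not the authors' actual procedure): stage-1 data from a toy two-stage design choose the stage-2 sample size, and a bootstrap for the pooled mean conditions on the observed second-stage design by rejecting resamples that would have triggered the other design. All names, sample sizes, and the cutoff are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_sample(mu, n1=20, n2_small=10, n2_big=40, cutoff=0.0, rng=rng):
    """Stage-1 data decide the stage-2 sample size (an informative design)."""
    s1 = rng.normal(mu, 1.0, n1)
    n2 = n2_big if s1.mean() > cutoff else n2_small
    s2 = rng.normal(mu, 1.0, n2)
    return s1, s2

def conditional_bootstrap_se(s1, s2, B=1000, cutoff=0.0, rng=rng):
    """Bootstrap SE of the pooled mean, conditioning on the observed
    stage-2 design: replicates whose resampled stage-1 mean falls on
    the other side of the cutoff are rejected."""
    observed_big = s1.mean() > cutoff
    means = []
    while len(means) < B:
        r1 = rng.choice(s1, size=s1.size, replace=True)
        if (r1.mean() > cutoff) != observed_big:
            continue  # design mismatch: discard (the conditioning step)
        r2 = rng.choice(s2, size=s2.size, replace=True)
        means.append(np.concatenate([r1, r2]).mean())
    return float(np.std(means))

s1, s2 = two_stage_sample(mu=0.3)
se = conditional_bootstrap_se(s1, s2)
```

The rejection step is one simple way to resample "given the selected design"; the abstract's variance/bias tradeoff arises because such conditioning changes which resamples are admissible.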
Clinical Trials 2
Presenter: Palash Ghosh
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
Comparison of Dynamic Treatment Regimes Embedded in a Sequential Multiple-Assignment Randomized Trial (SMART) with an Ordinal Outcome
Sequential multiple assignment randomized trials (SMARTs) are used to develop optimal dynamic treatment regimes (DTRs) for patients based on their medical histories in branches of medicine such as mental health and oncology, where a sequence of treatments is given to the patients. Precisely, DTRs are decision rules that recommend sequences of treatments based on an individual patient's evolving treatment and covariate history. Once constructed from data, these rules can be employed to assign treatments that optimize the outcome, depending on the individual patient's medical history. In the existing literature, typical SMART studies have been designed by assuming the primary outcome to be a continuous variable. However, the primary outcome can be ordinal as well, e.g. toxicity level (mild, moderate, severe) in a safety trial. In the current work, we define the dynamic generalized odds ratio (dGOR), following Agresti's (1980) generalized odds ratio (GOR), to compare two or more dynamic regimes embedded in a two-stage SMART based on an ordinal primary outcome. More specifically, we compare two regimes that start with different initial treatments using dGOR. We derive the asymptotic distribution of dGOR and provide a sample size calculation formula based on it. Our proposed dGOR is distinct from Agresti's GOR due to the presence of response rates to the first-stage treatments in a SMART. Even though our main focus in this work is to define and use dGOR for comparing DTRs with ordinal outcomes, the defined dGOR can also be used to compare DTRs based on continuous outcomes. A simulation study shows that the estimated power corresponding to the derived sample size formula achieves the nominal power. A real data application of the proposed methodology is illustrated using data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) clinical trial.
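For intuition, the classical pairwise GOR of Agresti (1980) that dGOR builds on can be sketched as follows: the ratio of the probability that a subject from one group has a better ordinal outcome than a subject from the other group, to the reverse probability (ties contribute to neither term). The dGOR itself additionally involves first-stage response rates and is not reproduced here; the toy data are hypothetical.

```python
import numpy as np

def generalized_odds_ratio(y1, y2):
    """Agresti (1980) GOR: P(Y1 > Y2) / P(Y1 < Y2) over all cross-group
    pairs, with ordinal outcomes coded as integers (higher = better)."""
    y1 = np.asarray(y1)[:, None]
    y2 = np.asarray(y2)[None, :]
    greater = np.mean(y1 > y2)   # fraction of pairs where group 1 is better
    less = np.mean(y1 < y2)      # fraction of pairs where group 2 is better
    return greater / less

# toy ordinal outcomes: 0 = severe, 1 = moderate, 2 = mild toxicity
a = [2, 2, 1, 1, 0]
b = [1, 0, 0, 1, 0]
gor = generalized_odds_ratio(a, b)   # > 1 favours group a
```

A GOR of 1 means neither regime is stochastically better; the comparison of two embedded regimes in the abstract replaces these raw samples with regime-consistent subjects weighted by first-stage response rates.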
Clinical Trials 2
Presenter: Tobias Mütze
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
A permutation-based approach for three-arm trials with active and placebo controls
The three-arm trial design, including an arm for an experimental treatment, an active control, and a placebo control, is often referred to as the gold standard design. In this design the so-called retention of effect hypothesis can be tested, which relates the effect of the experimental treatment versus placebo to the effect of the active control versus placebo. Statistical inference for the retention of effect hypothesis has already been studied for various endpoint scales, e.g. normal [1], Poisson [2], binary [3], negative binomial [4], and time-to-event [5]. These procedures, however, do not perform well for small sample sizes and are not robust to deviations from the assumed endpoint distributions. In this contribution, we introduce a studentized permutation test for the retention of effect hypothesis as a robust alternative to the parametric approaches and prove several asymptotic properties. For planning a trial in the gold standard design, sample size and power formulas for the introduced studentized permutation test are presented. In an extensive Monte Carlo simulation study, the performance of the proposed permutation test is studied for continuous and discrete data; for comparison, a Wald-type test with a t-quantile is included. The simulation study shows that the presented studentized permutation test for the retention of effect hypothesis is a robust alternative to parametric procedures and outperforms the Wald-type test with respect to meeting the target significance level.
References:
[1] Hauschke, D. and Pigeot, I. (2005). Establishing efficacy of a new experimental treatment in the gold standard design. Biometrical Journal 47.
[2] Mielke, M. and Munk, A. (2009). The assessment and planning of non-inferiority trials for retention of effect hypotheses - towards a general approach. arXiv:0912.4169v1.
[3] Kieser, M. and Friede, T. (2007). Planning and analysis of three-arm non-inferiority trials with binary endpoints. Statistics in Medicine 26.
[4] Mütze, T. et al. (2016). Design and analysis of three-arm trials with negative binomially distributed endpoints. Statistics in Medicine (in press).
[5] Kombrink, K. et al. (2013). Design and semiparametric analysis of non-inferiority trials with active and placebo control for censored time-to-event data. Statistics in Medicine 32.
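A rough sketch of the studentized permutation idea for a retention-of-effect contrast, under simplifying assumptions (normal outcomes, equal arm sizes, a common retention fraction delta; the statistic's standardization is a textbook plug-in, not necessarily the one proved valid in this talk). Permuting pooled data is exact only under full exchangeability; studentization is what makes the test asymptotically valid beyond that case.

```python
import numpy as np

rng = np.random.default_rng(1)

def retention_stat(xe, xr, xp, delta=0.8):
    """Studentized retention-of-effect statistic for the gold standard
    design: contrasts (E - P) against delta * (R - P)."""
    est = xe.mean() - xp.mean() - delta * (xr.mean() - xp.mean())
    var = (xe.var(ddof=1) / xe.size
           + delta**2 * xr.var(ddof=1) / xr.size
           + (1 - delta)**2 * xp.var(ddof=1) / xp.size)
    return est / np.sqrt(var)

def perm_pvalue(xe, xr, xp, delta=0.8, B=1000, rng=rng):
    """One-sided p-value: permute the pooled data across the three arms
    and re-studentize on each permutation."""
    t_obs = retention_stat(xe, xr, xp, delta)
    pooled = np.concatenate([xe, xr, xp])
    ne, nr = xe.size, xr.size
    count = 0
    for _ in range(B):
        perm = rng.permutation(pooled)
        t = retention_stat(perm[:ne], perm[ne:ne + nr], perm[ne + nr:], delta)
        count += t >= t_obs
    return (count + 1) / (B + 1)

xe = rng.normal(1.0, 1.0, 30)   # experimental arm
xr = rng.normal(1.0, 1.0, 30)   # active control (reference)
xp = rng.normal(0.0, 1.0, 30)   # placebo
p = perm_pvalue(xe, xr, xp)
```

Re-computing the studentized (rather than raw) statistic on each permutation is the key design choice: it keeps the rejection rate near the nominal level even when variances differ across arms.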
Clinical Trials 2
Presenter: Olympia Papachristofi
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
The Use of Routine Hospital Records in the Design of Trials of Complex Surgical Interventions: Identifying the Contribution of Multiple Providers
Surgical interventions are complex, as they comprise a number of interacting components; this renders their formal evaluation in RCTs more challenging than that of simple medical treatments. Developing adequately powered, duly designed trials requires understanding the effect of these components and their interactions on outcomes. Our study focuses on the use of historic data from routine hospital records to clarify the way in which different sources of variation can be identified and used at the trial design stage. Surgical procedures are delivered by multidisciplinary teams, so that differential effects on outcomes result from differences in the main surgeon's skill and experience, other medical team disparities, and patient-specific attributes. Hierarchical models with cross-classifications are useful for establishing the effect of, and interactions between, different components of surgery that are not necessarily in strict hierarchy. Methods are illustrated using two influential components, the surgeon and the anaesthetist. We start by accounting for surgeon or anaesthetist random effects whilst adjusting for patient heterogeneity. As surgeons typically operate with multiple anaesthetists, inducing non-hierarchical structures, an extension to cross-classifications examines the composite effects of surgeon-anaesthetist combinations. When the focus is on capturing correlations between interacting components, we propose Multiple Membership Multiple Classification models, which delineate the impact of different personnel by partitioning the total variance into components according to their relative contribution to outcomes. We extend the proposed models to accommodate an additional centre level in the hierarchy, which captures further variation due to infrastructure and policy differences. When planning phase III surgical trials it is necessary to distinguish the differing procedure components between the novel intervention and the control. Design options depend on whether all arms involve multicomponent interventions (e.g. surgery vs. surgery or surgery vs. drug), the variation induced by interacting components, and the unit of randomisation. We discuss the inflation factors required for various designs in this context and how these may be estimated from routine data sources. All methods are illustrated on more than 100,000 cardiac surgery patients from 10 centres.
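The simplest inflation factor of the kind alluded to above is the standard design effect for a single level of clustering (e.g. patients clustered within surgeons); the abstract's multi-provider setting generalizes this, but the one-level form shows the mechanics. The numbers below are hypothetical.

```python
import math

def design_effect(icc, cluster_size):
    """Standard variance inflation factor for one level of clustering:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and
    ICC is the intra-cluster correlation (e.g. between patients of the
    same surgeon)."""
    return 1.0 + (cluster_size - 1) * icc

def inflated_n(n_individual, icc, cluster_size):
    """Sample size required once clustering is accounted for."""
    return math.ceil(n_individual * design_effect(icc, cluster_size))

# e.g. 400 patients needed under individual randomisation,
# surgeons each treating ~50 trial patients, ICC of 0.02:
n_clustered = inflated_n(400, icc=0.02, cluster_size=50)
```

Routine records are useful precisely because they let the ICC (and hence the inflation) be estimated per provider level before the trial, rather than guessed.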
Clinical Trials 2
Presenter: Kentaro Takeda
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
Bayesian optimal interval design for dose finding based on both efficacy and toxicity outcome
One of the main purposes of a phase I dose-finding trial in oncology is to identify a tolerable dose with an indication of therapeutic benefit to administer to subjects in subsequent phase II and III trials. Therefore, it is favorable to consider both dose-limiting toxicity events and responses indicative of efficacy in the dose-finding procedure. Several model-based methods have been proposed for incorporating both efficacy and toxicity responses in early phase dose-finding trials. However, treating patients with molecular, cytostatic, and biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which efficacy and toxicity monotonically increase with dose, molecular, cytostatic, and biological agents may exhibit non-monotonic patterns in their dose-response relationships. The Bayesian optimal interval (BOIN) design was proposed to find the maximum tolerated dose (MTD) and to minimize the posterior probability of inappropriate dose assignments for patients (Liu and Yuan, 2015). The BOIN design is nonparametric, and thus robust, and does not require the assumptions used in model-based designs. Another advantage of the BOIN design is that it is easy to implement, in a way similar to the traditional 3+3 design. We develop a BOIN design for dose finding based on both efficacy and toxicity outcomes. To determine the next dose based on the cumulative data, we propose a dose assignment rule that minimizes the posterior probability of inappropriate dose assignments for patients in terms of both efficacy and toxicity. We present a simulation study of the proposed method to explore its properties, and compare the simulation results with other standard approaches for a phase I dose-finding trial in oncology. Reference: Liu S and Yuan Y. Bayesian optimal interval designs for phase I clinical trials. J R Stat Soc Ser C Appl Stat 2015; 64: 507-523.
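A sketch of the toxicity-only BOIN assignment rule that this work extends (the efficacy-and-toxicity extension is the talk's contribution and is not reproduced here): the observed toxicity rate at the current dose is compared against a fixed pair of interval boundaries. The boundary values below are the published BOIN defaults for a 30% target DLT rate.

```python
def boin_decision(n_tox, n_patients, lam_e=0.236, lam_d=0.358):
    """Toxicity-interval dose-assignment rule in the spirit of BOIN
    (Liu & Yuan, 2015): escalate if the observed DLT rate is at or below
    the lower boundary, de-escalate if at or above the upper boundary,
    otherwise stay at the current dose. The default boundaries shown
    correspond to a target DLT rate of 0.30."""
    p_hat = n_tox / n_patients
    if p_hat <= lam_e:
        return "escalate"
    if p_hat >= lam_d:
        return "de-escalate"
    return "stay"

# e.g. 3 DLTs among 9 patients at the current dose:
decision = boin_decision(3, 9)
```

The rule needs only the running DLT count at the current dose, which is what makes BOIN as easy to run at the bedside as a 3+3 design while retaining its optimality property.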
Clinical Trials 2
Presenter: Huaqing Zhao
When: Tuesday, July 12, 2016 Time: 4:00 PM - 5:30 PM
Room: Salon C Carson Hall (Level 2)
Session Synopsis:
Use of Propensity Scores to Adjust for Non-compliance in Randomized Clinical Trials
Introduction: Non-compliance in randomized clinical trials (RCTs) is typically measured using self-report, pill counts, and blood levels. In an RCT the question arises as to how to define or analyze compliance when it may change over time. In the intent-to-treat (ITT) approach, all available data for randomized patients are analyzed, regardless of compliance. In the per-protocol (PP) approach, only patients compliant with the protocol are included. When non-compliance exists, ITT does not measure the true biological effect of treatment, but rather a mixture of the full effect on the compliers with a partial effect on the non-compliers. While propensity scores (PS) have been used to reduce bias in observational studies, PS methods have not been adapted to correct non-compliance bias. We define the PS of compliance to be the probability of being a completer (e.g. >80% adherence) at the observation level, conditional on the observed covariates. The calculated PSs are then used to adjust for non-compliance and thereby reduce bias. A simulation study shows that results vary among the different methods. Methods: To obtain estimates of the main effects, a pseudo dataset is created by assigning repeated outcome, compliance, and covariate values at 5 timepoints to each subject and analyzed with linear mixed-effects models. We define the PP population based on average patient-level compliance (e.g. >80% adherence). We carry out ITT/PP analyses with or without PS adjustment. We apply our method to an RCT of medication adherence among adolescent kidney transplant recipients using the primary adherence outcome and secondary outcomes, adjusting the PS based on adherence. Results: The main effect (true = 0.5/time) estimated directly from PP is generally underestimated (estimate [95% confidence interval], 0.375 [0.11, 0.73]). The robust estimator adjusting for PS yields less biased estimation (0.462 [0.17, 0.75]). The PS adjustment (0.499 [0.26, 0.74]) is strongly suggested even when using the ITT population (0.492 [0.22, 0.76]).
Conclusions: Our simulation results and real data example demonstrate that the use of the PS adjustment for non-compliance preserves the sample size of the ITT analysis, produces appropriate estimation of the main effect, and maintains a narrow confidence interval. In summary, this method provides a novel and robust tool for obtaining less biased estimates of treatment effect when dealing with non-compliance in RCTs.
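A sketch of the propensity-score estimation step only, on simulated data: a logistic model for the probability of being a completer given a baseline covariate, fitted by Newton-Raphson in plain NumPy. How the fitted scores then enter the mixed-effects analysis (as covariates or weights) is the paper's contribution and is not shown; all data and the single-covariate model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_compliance_ps(X, y, iters=25):
    """Logistic regression via Newton-Raphson (intercept added internally).
    Returns fitted probabilities of being a completer, i.e. the
    propensity scores for compliance."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                                  # IRLS weights
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))        # Newton step
    return 1.0 / (1.0 + np.exp(-X @ beta))

# toy data: one baseline covariate drives being a completer (>80% adherence)
n = 500
x = rng.normal(size=n)
complier = (rng.uniform(size=n) < 1 / (1 + np.exp(-1.5 * x))).astype(float)
ps = fit_compliance_ps(x[:, None], complier)
```

In the adjustment step one would carry `ps` into the longitudinal model (e.g. as a covariate in the linear mixed-effects analysis of the 5-timepoint pseudo dataset), which is what lets the ITT sample size be retained while correcting for differential compliance.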