Survival Analysis 4
Presenter: Eleni-Rosalina Andrinopoulou
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Bayesian Joint Models of Longitudinal and Survival Outcomes with Time-Varying Effects using P-splines
It is common in clinical studies that longitudinal and time-to-event outcomes are collected together. A popular framework to analyze such datasets is the joint modeling of longitudinal and survival outcomes. The idea behind these models is that the longitudinal and the survival processes share common random effects, inducing correlation between the two processes. One characteristic of standard joint models is that they assume a constant regression coefficient for the effect of the longitudinal covariates. However, in some cases this may be a restrictive assumption. For instance, when treatment is initiated, the strength of the association between the longitudinal and survival outcomes may also change. To allow this coefficient to vary over time, the survival model can be extended to include interactions of the covariates with an appropriate pre-defined time function. Several approaches have been discussed in the literature regarding the choice of these time functions, such as the use of polynomials or B-splines. Even though time-varying coefficients may be an attractive approach to analyzing longitudinal and survival data jointly, relatively little work has been done on this topic. Our motivation comes from a study of patients who received a human tissue valve in the aortic position. These patients are followed prospectively over time by standardized echocardiographic assessment of valve function. Since the human tissue degenerates, the main focus is to investigate whether the effect of the echocardiographic measures on survival varies over time. Specifically, we develop a Bayesian joint model in which the coefficients linking the longitudinal and survival processes are allowed to vary over time by modeling them with P-splines. Furthermore, to facilitate flexible modeling of the survival outcome, we also use a P-splines approach for the log baseline hazard.
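As a rough sketch of the model structure described above (notation ours, not taken from the talk), the survival submodel with a time-varying association might be written as

h_i(t) = h_0(t)\,\exp\bigl\{\gamma^\top w_i + \alpha(t)\, m_i(t)\bigr\}, \qquad \alpha(t) = \sum_{k=1}^{K} \alpha_k B_k(t), \qquad \log h_0(t) = \sum_{l=1}^{L} \gamma_{0l} B_l(t),

where m_i(t) is the subject-specific longitudinal trajectory from the mixed-effects submodel, w_i are baseline covariates, and B_k(\cdot) are B-spline basis functions. The P-spline idea penalizes differences of adjacent spline coefficients, which in a Bayesian formulation corresponds to a random-walk prior such as \alpha_k \mid \alpha_{k-1}, \tau^2 \sim N(\alpha_{k-1}, \tau^2), with the smoothing variance \tau^2 estimated from the data.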
Survival Analysis 4
Presenter: Olivier Bouaziz
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Cohort effect in survival analysis: a change-point perspective
In epidemiology, it is well known that survival data are subject to the so-called cohort effect. In a cohort study, individuals recruited or born at different dates might have heterogeneous hazard rates. While this phenomenon can be continuous in nature, with a slow change of hazard rates over time, it is often due to a radical change in the treatment or prevention strategy. For instance, if one is interested in the time until first bacterial infection, the discovery of penicillin might represent a breakpoint in the survival study. Patients born too early to benefit from penicillin treatment would have a different survival distribution compared to patients who might have had access to it. Other examples include tritherapy in HIV patients and national screening policies for cancers such as breast cancer. This cohort effect naturally leads us to consider breakpoint models that take into account the survival heterogeneity between patients. From a statistical point of view, we consider this situation as a change-point model in which abrupt changes occur either in the baseline hazard rates or in the proportional factors. In such a model, we have two objectives: first, we want to estimate the hazard rates and the proportional factors in each homogeneous region through a Cox model; secondly, we want to accurately determine the number and location of the breakpoints. Recently, a constrained Hidden Markov Model (HMM) method was suggested in the context of breakpoint analysis (see Luong et al., 2013). This method makes it possible to perform a full change-point analysis in a segment-based model (one parameter per segment), providing linear-complexity EM estimates of the parameters and a full specification of the posterior distribution of the change points. In this talk, we adapt this method to the context of survival analysis with hazard rate estimates, where the estimation is performed through the EM algorithm, which provides updates of the estimates and of the posterior distribution at each iteration step. The new method will be presented, then evaluated through a simulation study and applied to the survival of diabetic patients at the Steno Memorial Hospital. In this dataset, the patients' years of birth range from 1903 to 1971. A three-breakpoint model is selected by our method, and survival curves and hazard ratios are estimated for each birth cohort.
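As a sketch of the change-point structure (our notation, not the speaker's), the hazard for a patient with covariates x born at calendar time u can be written as

h(t \mid x, u) = h_{0,k}(t)\,\exp\bigl(\beta_k^\top x\bigr) \quad \text{for } u \in (\tau_{k-1}, \tau_k], \; k = 1, \ldots, K+1,

where \tau_1 < \cdots < \tau_K are the unknown breakpoints in date of birth and, depending on the model, either the baseline hazards h_{0,k}, the regression coefficients \beta_k, or both are allowed to differ between segments. The constrained HMM of Luong et al. (2013) treats the segment labels as hidden states, so that each EM iteration yields both updated parameter estimates and the posterior distribution of the breakpoint locations.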
Survival Analysis 4
Presenter: Andrea Giussani
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Modeling dependence in bivariate multi-state processes: a frailty approach
Giussani, A. and Bonetti, M., Bocconi University, Milan, Italy
Modern survival data analysis typically deals with complex sequences of time-to-event data, and among multivariate survival analysis techniques, multi-state models allow for the study of sequences of states experienced over time. In the last few years, several authors have focused on the analysis of multiple multi-state processes. Diao and Cook (2014) proposed a copula-based model which enables the joint analysis of multiple progressive multi-state processes. A similar approach is followed by Eryilmaz (2014), who models the global dependence between two multi-state components via a copula. However, frailty models have not yet been considered in the context of multiple multi-state models. Our aim is to build a theoretical framework that allows the study of the dependence both across and within individual-specific events. For the latter, copula-based models are employed, whereas the dependence between the multi-state processes may easily be accommodated by means of frailties. In this paper, the well-known Marshall-Olkin Bivariate Exponential distribution, MOBVE (Marshall and Olkin, 1967), is considered for the joint distribution of the frailties. The reason is twofold. On the one hand, it allows one to model shocks that affect the two individual-specific frailties. On the other hand, the MOBVE is the only bivariate distribution with exponential marginals satisfying the bivariate lack-of-memory property, which easily allows each multi-state process to be modeled as a shared frailty model (Clayton, 1978; Oakes, 1989). Statistical inference aspects are discussed, with particular attention to the estimation of the conditional hazards. The proposed methodology is applied to the investigation of the association between the members of different-sex couples in dementia onset and death.
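As a sketch of the frailty construction (our notation; details may differ from the talk), the Marshall-Olkin frailties of the two members of a couple can be generated from independent exponential shocks

Z_1 = \min(E_1, E_{12}), \qquad Z_2 = \min(E_2, E_{12}), \qquad E_1 \sim \mathrm{Exp}(\lambda_1), \; E_2 \sim \mathrm{Exp}(\lambda_2), \; E_{12} \sim \mathrm{Exp}(\lambda_{12}),

so that P(Z_1 > z_1, Z_2 > z_2) = \exp\{-\lambda_1 z_1 - \lambda_2 z_2 - \lambda_{12}\max(z_1, z_2)\}, and the common shock E_{12} induces the dependence between the two frailties. Conditional on Z_i, the transition intensities of the i-th member's multi-state process can then be taken to act multiplicatively, e.g. q^{(i)}_{hj}(t \mid Z_i) = Z_i\, q_{hj,0}(t)\,\exp(\beta_{hj}^\top x_i), which gives the shared frailty structure within each process.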
Survival Analysis 4
Presenter: Johannes Krisam
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Optimal Subgroup Selection Rules in Adaptive Enrichment Designs with Time-to-event Outcome
When investigating the efficacy of a recently developed therapy, it is often plausible that the treatment might be more, or even solely, beneficial in a particular subgroup of patients as compared to the total patient population. Adaptive enrichment designs incorporating a mid-course efficacy assessment have been proposed as a potential solution (see, e.g., [1]). On the basis of interim results, it is decided whether to continue the trial with the total patient population or with the subgroup only. As has already been shown for normally distributed and binary endpoints, the employed interim decision rule has a crucial impact on the probability of a correct interim decision and on the statistical power (see, e.g., [2-4]). For the situation of a time-to-event variable as the primary outcome of the trial, we present statistical methods that evaluate the performance of decision rules in terms of the correct selection probability at interim, thus allowing an appropriate subgroup selection strategy to be chosen. Additionally, optimal decision rules are derived which incorporate the uncertainty about treatment effects by modeling knowledge gained from previous trials as a prior distribution. Exact formulae for the respective optimal decision thresholds are given, taking several trial characteristics such as sample size and subgroup prevalence into account. These optimal decision rules are evaluated regarding their performance in terms of correct selection probability, type I error rate, and power, and are compared to ad-hoc decision rules proposed in the literature. Our methods and their potential for application in oncology trials are illustrated by means of a clinical study example.
[1] Jenkins M, Stone A, Jennison C (2011). An adaptive seamless phase II/III design for oncology trials with subpopulation selection using correlated survival endpoints. Pharm Stat 10:347-356.
[2] Krisam J, Kieser M (2014). Decision rules for subgroup selection based on a predictive biomarker. J Biopharm Stat 24:188-202.
[3] Krisam J, Kieser M (2015a). Performance of biomarker-based subgroup selection rules in adaptive enrichment designs. Stat Biosci (e-published ahead of print), doi: 10.1007/s12561-015-9129-5.
[4] Krisam J, Kieser M (2015b). Optimal decision rules for biomarker-based subgroup selection for a targeted therapy in oncology. Int J Mol Sci 16:10354-10375.
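To illustrate the kind of interim decision rule being evaluated (this is a generic ad-hoc threshold rule of the sort compared against in the talk, not the authors' optimal rule; all names and parameter values below are hypothetical), a minimal Monte Carlo sketch in Python:

# Illustrative sketch: a simple threshold-based interim decision rule for an
# adaptive enrichment design with a survival endpoint. Hypothetical parameters.
import numpy as np

rng = np.random.default_rng(1)

def sim_trial(n, prev, hr_sub, hr_comp, censor_time=2.0):
    """Simulate exponential survival times for treatment vs. control, with
    different hazard ratios in the biomarker subgroup and its complement."""
    in_sub = rng.random(n) < prev            # biomarker-positive indicator
    treat = rng.random(n) < 0.5              # 1:1 randomisation
    hr = np.where(in_sub, hr_sub, hr_comp)   # treatment hazard ratio by subgroup
    lam = np.where(treat, hr, 1.0)           # control hazard fixed at 1
    t = rng.exponential(1.0 / lam)
    event = t <= censor_time
    return in_sub, treat, np.minimum(t, censor_time), event

def log_hr(treat, t, event):
    """Crude log hazard-ratio estimate under an exponential model:
    log(event rate on treatment) - log(event rate on control)."""
    d1, T1 = event[treat].sum(), t[treat].sum()
    d0, T0 = event[~treat].sum(), t[~treat].sum()
    return np.log((d1 / T1) / (d0 / T0))

def interim_decision(in_sub, treat, t, event, threshold=-0.1):
    """Ad-hoc rule: keep the full population only if the estimated effect in the
    biomarker-negative complement is convincing (log HR below `threshold`)."""
    comp = ~in_sub
    return "full" if log_hr(treat[comp], t[comp], event[comp]) < threshold else "subgroup"

# Monte Carlo check of the correct-selection probability when the treatment
# works only in the subgroup (so "subgroup" is the correct interim decision).
decisions = [interim_decision(*sim_trial(n=400, prev=0.5, hr_sub=0.6, hr_comp=1.0))
             for _ in range(2000)]
print("P(correct selection) ~", np.mean([d == "subgroup" for d in decisions]))

The optimal rules discussed in the talk replace the fixed threshold with thresholds derived from exact formulae that account for the prior distribution of treatment effects, sample size, and subgroup prevalence.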
Survival Analysis 4
Presenter: Maral Saadati
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Regularized Regression Methods for Competing Risks Data
In various applications of time-to-event data, the observation of an event of interest may be precluded by a competing event. The analysis of such data requires techniques for competing risks assessment and multi-state modeling. Such techniques are, however, not yet available for high-dimensional covariate information. Therefore, we consider a novel approach to regularized regression in the competing risks setting, assuming cause-specific proportional hazards models. As in the usual high-dimensional scenario, we aim to find strong explanatory variables for each competing event type. Furthermore, we have two goals which are particular to the competing risks structure, namely (i) identifying variables that have a similar effect size for different event types and combining them (e.g., in a clinical setting, a specific gene might have a strong positive effect on both death and relapse), and (ii) identifying variables that have moderate but opposing effects on different event types (e.g., a gene may show a moderate positive association with remission and a moderate negative association with relapse). To this end, we explore two penalization strategies: the lasso and the SCAD. Recently, Reulen and Kneib proposed a fused lasso approach for competing risks and multi-state models. The idea is to use the regular lasso penalty as well as a penalty term on the absolute differences of estimated covariate effects that are desired to be the same across transitions. This approach attempts to find common effects across transitions by regularizing the differences, thereby fulfilling aim (i); however, it shrinks opposing effects towards 0, which is not in accordance with our aim (ii). We suggest the use of a SCAD penalty to fulfill both aims. A main advantage of the SCAD is its asymptotic unbiasedness. In our application to competing risks, we examine whether the use of a SCAD penalty helps to retain variables with moderate, opposing effects in the model. These methods will be compared in a simulation study as well as on a real-life data set from a prospective clinical trial on acute myeloid leukemia (AML). The Brier score for competing risks is used to quantify the prediction error and to evaluate the models. Our ultimate goal is to understand the driving forces behind transitions from state to state and to predict individual patients' disease course and outcome with high accuracy.
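A minimal numerical sketch (ours, not the authors' code) of why the SCAD penalty suits aim (ii): unlike the lasso, the SCAD thresholding rule of Fan and Li (2001) leaves large coefficients unshrunk, so moderate-to-large opposing effects are more likely to be retained.

# Contrast of lasso (soft) and SCAD thresholding for univariate estimates.
import numpy as np

def soft_threshold(z, lam):
    """Lasso (soft) thresholding of a univariate estimate z."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule of Fan & Li (2001) for a univariate estimate z."""
    z = np.asarray(z, dtype=float)
    absz = np.abs(z)
    return np.where(absz <= 2 * lam,
                    np.sign(z) * np.maximum(absz - lam, 0.0),               # lasso-like near zero
                    np.where(absz <= a * lam,
                             ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),  # tapering shrinkage
                             z))                                              # no shrinkage for large z

z = np.array([0.05, 0.3, 0.8, 2.0])       # hypothetical unpenalised effect estimates
lam = 0.2
print("lasso:", soft_threshold(z, lam))   # every surviving estimate is shrunk by lam
print("SCAD :", scad_threshold(z, lam))   # the larger estimates 0.8 and 2.0 are left untouched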
Survival Analysis 4
Presenter: Eric Kawaguchi
When: Thursday, July 14, 2016 Time: 9:00 AM - 10:30 AM
Room: Salon B Carson Hall (Level 2)
Session Synopsis:
Variable Selection for Cox's Proportional Hazards Model via the Adaptive Broken Ridge (ABRIDGE) Method
We introduce a new variable selection algorithm for Cox's proportional hazards model which incorporates an approximate L0 penalty. The algorithm, Adaptive Broken Ridge (ABRIDGE), iteratively fits a weighted ridge regression, reweighting the penalty term at each iteration. Adaptively reweighting the ridge penalty allows unimportant coefficients to shrink to zero while retaining the important ones. Simulation results show that our method is superior to commonly used regularization methods such as the lasso, SCAD, and the adaptive lasso in terms of model recovery and bias reduction.
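A minimal sketch (ours, based only on the abstract) of the iteratively reweighted ridge idea behind such approximate-L0 methods: the ridge weights are updated from the current coefficients so that the penalty on each nonzero coefficient approaches a constant. For brevity, a least-squares surrogate replaces the Cox partial likelihood; names and tuning values are hypothetical.

# Adaptive (broken) ridge sketch on a least-squares surrogate.
import numpy as np

def adaptive_ridge_ls(X, y, lam=1.0, eps=1e-4, n_iter=50):
    """Iteratively reweighted ridge: minimise ||y - X b||^2 + lam * sum_j w_j * b_j^2
    with weights w_j = 1 / (b_j^2 + eps), so that lam * w_j * b_j^2 ~ lam * I(b_j != 0)."""
    n, p = X.shape
    beta, w = np.zeros(p), np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        beta = np.linalg.solve(XtX + lam * np.diag(w), Xty)   # weighted ridge update
        w = 1.0 / (beta ** 2 + eps)                           # adaptive reweighting
    beta[np.abs(beta) < np.sqrt(eps)] = 0.0                   # report tiny coefficients as zero
    return beta

# Toy example: only the first two of ten covariates carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * rng.standard_normal(200)
print(np.round(adaptive_ridge_ls(X, y, lam=2.0), 2))

In the Cox setting described in the abstract, the same reweighting would be applied within each Newton step of the partial likelihood rather than to a least-squares objective.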