Oral

Survival Analysis 3

Presenter: Per Andersen

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

Causal inference in survival analysis using pseudo-observations

Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on one of the following two approaches: (1) inverse probability of treatment assignment weights ('propensity score'), (2) direct standardization ('g-formula'). To do causal inference in survival analysis one needs to address right-censoring, and special techniques are often required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. Suppose that the target is the average causal effect expressed by the mean, E(f(T)), of some transformation f(.) of the survival time, T. Examples include f(T)=I(T>t) and f(T)=min(T, tau), leading to the average causal effect for the t-year survival probability S(t)=E(I(T>t)) and for the tau-restricted mean life time E(min(T, tau)), respectively. Without censoring, causal inference for such parameters could proceed as for other completely observed responses. In the presence of right-censoring we will replace the incompletely observed random variable f(T_i) by its pseudo-observation, obtained as follows: Suppose that θ̂ is a consistent estimator of E(f(T)) which may be calculated based on a right-censored sample of n independent subjects, and that θ̂_i is the same estimator applied to the sample (of size n-1) obtained by eliminating subject i from the total data set. The i'th pseudo-observation is then nθ̂ - (n-1)θ̂_i. We will then show how 'standard' causal inference techniques, such as (1) or (2) above, may be applied to the right-censored survival data. The same idea applies to competing risks settings. The methods will be illustrated via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regimen on outcomes such as the 3-year leukemia-free survival probability and the 3-year risk of acute graft-versus-host disease.
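As a concrete illustration of the jackknife construction above, here is a minimal R sketch (using the survival package; the function name pseudo_surv and its arguments are ours, not from the talk) that computes pseudo-observations for the t0-year survival probability S(t0) = E(I(T > t0)) from the Kaplan-Meier estimator:

```r
library(survival)

# Jackknife pseudo-observations for S(t0) based on the Kaplan-Meier estimator:
# the i-th pseudo-observation is n*theta_hat - (n-1)*theta_hat_i, where
# theta_hat uses all n subjects and theta_hat_i leaves subject i out.
pseudo_surv <- function(time, status, t0) {
  n <- length(time)
  surv_at <- function(tt, ss) {
    summary(survfit(Surv(tt, ss) ~ 1), times = t0, extend = TRUE)$surv
  }
  theta_hat <- surv_at(time, status)
  theta_hat_i <- vapply(seq_len(n),
                        function(i) surv_at(time[-i], status[-i]),
                        numeric(1))
  n * theta_hat - (n - 1) * theta_hat_i
}
```

The resulting pseudo-observations can then be handled as if they were completely observed responses, e.g., regressed on treatment and confounders (direct standardization) or combined with inverse probability of treatment weights.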

Survival Analysis 3

Presenter: Aurélie Bertrand

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

Robustness of Estimation Methods in a Survival Cure Model with Mismeasured Covariates

In time-to-event analysis, it may happen that a fraction of individuals will never experience the event of interest: they are considered to be cured. The promotion time cure model is one family of survival models taking this feature into account. An additional phenomenon which appears quite often in practice is that some of the covariates are measured with error. This measurement error should in general be taken into account in the estimation, to avoid biased estimators of the model parameters. In this presentation, we first justify the need for correction methods in the promotion time cure model by providing a numerical expression of the large-sample bias of the estimator obtained without taking the measurement error into account. We then describe the two correction methods that exist in this context: a corrected score approach suggested by Ma and Yin (2008), and an alternative method based on the SIMEX algorithm, which we proposed in Bertrand et al. (2015) and which has the advantage of being very intuitive. However, while both approaches have good asymptotic properties, they rely on two main assumptions which are not always met in practice: the measurement error (i) should be normally distributed and (ii) must have a known and constant variance. We therefore present the results of an extensive simulation study investigating the robustness of both approaches with respect to these assumptions. Based on these simulations, we give practical recommendations about whether to correct for the measurement error, about which method to use depending on the objective of the study, and about the consequences to be expected when misspecifying the distribution of the measurement error or its variance. We conclude by illustrating these issues in the analysis of real medical data.

References:
Bertrand, A., Legrand, C., Carroll, R.J., De Meester, C., Van Keilegom, I. (2015). Inference in a Survival Cure Model with Mismeasured Covariates Using a SIMEX Approach. Biometrika (under revision).
Ma, Y., Yin, G. (2008). Cure Rate Model with Mismeasured Covariates under Transformation. Journal of the American Statistical Association, 103, 743-756.
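As a generic illustration of the SIMEX idea mentioned above, the following R sketch applies SIMEX to an ordinary logistic regression slope standing in for the naive cure-model fit; it assumes normal measurement error with known variance, exactly the two assumptions whose violation is studied in the talk (function and variable names are hypothetical):

```r
# SIMEX: re-fit the naive estimator on data with extra simulated measurement
# error at increasing variance levels, then extrapolate back to zero error.
simex_slope <- function(w, y, sigma_u, lambdas = c(0.5, 1, 1.5, 2), B = 200) {
  naive <- function(x) coef(glm(y ~ x, family = binomial))[2]
  est <- sapply(lambdas, function(l) {
    mean(replicate(B, naive(w + rnorm(length(w), 0, sqrt(l) * sigma_u))))
  })
  lam <- c(0, lambdas)
  val <- c(naive(w), est)
  extrap <- lm(val ~ lam + I(lam^2))               # quadratic extrapolation curve
  predict(extrap, newdata = data.frame(lam = -1))  # SIMEX estimate at lambda = -1
}
```

The extrapolation step is where the normality and known-variance assumptions enter; misspecifying sigma_u or the error distribution distorts the simulated curve and hence the extrapolated estimate.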

Survival Analysis 3

Presenter: Lida Fallah

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

Study of Joint Progressive Type-II Censoring in Heterogeneous Populations

Time to event, or survival, data are common in the biological and medical sciences, with typical examples being time to death and time to recurrence of a tumour. In practice, survival data are typically subject to censoring, with incomplete observation of some failure times due to drop-out, intermittent follow-up and finite study duration. Many different probability models have been proposed for survival times. Extending these to mixture models allows the modelling of heterogeneous populations, e.g. susceptible/non-susceptible individuals. This allows the clustering of individuals into different groups together with parameter estimation. This becomes more complicated in the presence of censoring and requires care in model fitting and interpretation. Maximum likelihood estimation can be done by direct optimization or with the EM algorithm, using a nested version to handle the two aspects of missing data: the mixture component labels and the censored observations. We focus on Type-II censoring, where follow-up is terminated after a pre-specified number of failures, with all other individuals censored at the largest failure time. The performance of the estimation procedure depends, not surprisingly, on the characteristics of the censoring scheme and the form of the component densities. Some experimentation with a mixture of exponentially distributed components highlighted potential problems; with normal components, however, things are better behaved. In progressive Type-II censoring some individuals are removed randomly at each failure time. This has the effect of spreading out the censoring over the observation period, with a consequent extension of the follow-up time. This censoring scheme seems to improve the efficiency of estimation for mixed populations. The above can be extended to two heterogeneous populations (e.g. male/female) by applying Type-II or progressive Type-II censoring over the two populations, referred to as joint (progressive) Type-II censoring schemes. We focus on the above settings and conduct a simulation study to evaluate the impact of the form of censoring scheme on parameter estimation and study duration. We obtain standard errors for the parameter estimates and construct confidence intervals. Results will be discussed to show the benefits of the progressive regime. Finally, we illustrate with a real-data example.
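To make the censoring scheme concrete, here is a small R sketch (a hypothetical illustration of progressive Type-II censoring applied to a two-component normal mixture, not the authors' exact simulation design): n units are put under observation, and at the i-th failure R[i] of the surviving units are withdrawn at random, until m failures have been recorded.

```r
# Simulate the observed (ordered) failure times under progressive Type-II
# censoring of a two-component normal mixture.
sim_prog_type2 <- function(n, m, R, p = 0.5, mu = c(0, 3), sd = c(1, 1)) {
  stopifnot(length(R) == m, sum(R) == n - m)
  z <- rbinom(n, 1, p) + 1                       # latent component labels
  x <- rnorm(n, mean = mu[z], sd = sd[z])        # latent lifetimes
  remaining <- seq_len(n)
  failures <- numeric(m)
  for (i in seq_len(m)) {
    idx <- remaining[which.min(x[remaining])]    # next observed failure
    failures[i] <- x[idx]
    remaining <- setdiff(remaining, idx)
    if (R[i] > 0) {                              # progressive random withdrawals
      drop <- remaining[sample.int(length(remaining), R[i])]
      remaining <- setdiff(remaining, drop)
    }
  }
  failures
}

# Ordinary Type-II censoring is the special case with all withdrawals at the end:
# sim_prog_type2(n = 50, m = 20, R = c(rep(0, 19), 30))
```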

Survival Analysis 3

Presenter: Yehenew Getachew Kifle

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

Distance from a hydroelectric dam and time to malaria, with distance confounded with the clustering structure

Malaria remains an important disease in terms of morbidity and mortality in many developing countries. Around hydroelectric dams, this risk might even increase due to the large water bodies available to the Anopheles mosquito. Time to malaria was followed up on a weekly basis in 2082 households located at different distances from the dam. Different standard techniques in survival analysis exist to model such clustered survival data, among them the marginal, fixed effects, stratified and frailty models. These time to malaria data have certain characteristics that make the marginal and conditional approaches lead to quite different results. Different models that cope with clustering in survival data can lead to contradictory results when the covariate of interest is confounded to a large extent with the clustering mechanism. The marginal model leads to quite different results compared to the other models, especially if the within-village distance effect differs from the between-village distance effect. In the marginal model, the overall effect of distance is studied, whereas in the fixed effects and stratified models it is rather the within-village effect of distance that is investigated. The frailty model somehow combines these two approaches, but the way these two estimates are combined depends on factors that are hidden from the data analyst. The frailty model is often considered the standard model for clustered survival data. In a certain sense it is the most efficient model under certain assumptions, in that it has the smallest standard error. The frailty model estimate is a weighted combination of the within- and between-village estimates of the distance effect. Such a weighted combination, however, only makes sense if the same relationship holds between and within clusters, i.e., villages. This assumption, however, is questionable for the type of dataset considered in this study. Therefore, in such situations, we advise splitting covariates into two orthogonal covariates, one referring to the covariate effect between clusters and the other referring to the covariate effect within clusters.
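A hedged sketch of the advised covariate split, assuming a data frame dat with (hypothetical) columns time, status, distance and village: the distance covariate is replaced by its village mean (between-village component) and the deviation from that mean (within-village component), and both enter a frailty Cox model.

```r
library(survival)

# Orthogonal decomposition of the distance covariate
dat$dist_between <- ave(dat$distance, dat$village)    # village-level mean
dat$dist_within  <- dat$distance - dat$dist_between   # deviation within village

# Frailty Cox model with separate between- and within-village distance effects
fit <- coxph(Surv(time, status) ~ dist_between + dist_within + frailty(village),
             data = dat)
summary(fit)
```

If the between- and within-village coefficients differ markedly, the single pooled distance effect reported by a standard frailty model is a weighted mixture of two different quantities, which is exactly the situation the abstract warns against.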

Survival Analysis 3

Presenter: Virginie Rondeau

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

A joint frailty-copula model between tumour progression and death for meta-analysis

Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper utilizes copulas to generalize the joint frailty model by introducing an additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence residual dependence exists. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are modeled nonparametrically on a spline basis. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for competing risks or recurrent event data, and can be generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis assessing CXCL12, a recently suggested biomarker for survival in ovarian cancer patients. We implement our proposed methods in the R package joint.Cox.
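One common way to write down a joint frailty-copula model of this type is sketched below in LaTeX (a sketch under our own notational assumptions; the exact parameterization in the paper may differ). Given the study-specific frailty u_i, progression and death follow Cox-type hazards, and a copula C_theta (Clayton, as an example) links the conditional survivor functions within a subject, capturing the residual intra-subject dependence:

```latex
\begin{align*}
  r_{ij}(t \mid u_i) &= u_i\, r_0(t)\exp(\beta_1^\top Z_{ij}), \qquad
  \lambda_{ij}(t \mid u_i) = u_i^{\alpha}\, \lambda_0(t)\exp(\beta_2^\top Z_{ij}),\\
  \Pr(X_{ij} > x,\ D_{ij} > d \mid u_i)
    &= C_\theta\{ S_X(x \mid u_i),\, S_D(d \mid u_i) \}, \qquad
  C_\theta(v, w) = (v^{-\theta} + w^{-\theta} - 1)^{-1/\theta},
\end{align*}
```

where r and lambda denote the progression and death hazards for subject j in study i, u_i is the study-specific frailty shared by both hazards, and theta governs the additional intra-subject association; letting theta tend to 0 recovers the usual joint frailty model with conditional independence given the frailty.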

Survival Analysis 3

Presenter: Cheng Zheng

When: Tuesday, July 12, 2016      Time: 4:00 PM - 5:30 PM

Room: Salon B Carson Hall (Level 2)

Session Synopsis:

Instrumental Variable with Competing Risk Model

In this paper, we discuss causal inference on the efficacy of a treatment or medication on a time-to-event outcome with competing risks. Although treatment assignment can be randomized, there can be confounding between compliance and the outcome. Unmeasured confounding might exist even after adjustment for measured covariates. Instrumental variable (IV) methods are commonly used to yield consistent estimation of causal parameters in the presence of unmeasured confounding. Based on a semiparametric additive hazards model for the subdistribution hazard, we propose an instrumental variable estimator that yields consistent estimation of efficacy in the presence of unmeasured confounding in the competing risks setting. We derive the asymptotic properties of the proposed estimator. Simulation results show that the estimator performs well in finite samples. We apply our method to a real transplant data example and show that unmeasured confounding could lead to significant bias in the estimation of the effect (about 50% attenuation).
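As a simplified, generic illustration of combining an instrumental variable with a hazards-based outcome model (a two-stage predictor substitution sketch under an additive hazards model, not the authors' subdistribution-hazard estimator, and ignoring the competing event for brevity), assume a hypothetical data frame dat with columns time, status, trt (treatment received), iv (randomized assignment) and measured covariates x1, x2:

```r
library(survival)

# Stage 1: regress the received treatment on the instrument and measured covariates
stage1 <- lm(trt ~ iv + x1 + x2, data = dat)
dat$trt_hat <- fitted(stage1)                 # predicted (instrumented) treatment

# Stage 2: Aalen additive hazards model using the predicted treatment
fit <- aareg(Surv(time, status) ~ trt_hat + x1 + x2, data = dat)
summary(fit)
```

The estimator proposed in the talk instead targets the subdistribution hazard of the event of interest, so the competing event is handled directly rather than treated as censoring as in this sketch.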