Oral

Epidemiology 1

Presenter: Alessio Crippa

When: Monday, July 11, 2016      Time: 2:00 PM - 3:30 PM

Room: Saanich 1-2 (Level 1)

Session Synopsis:

A new measure of between-studies heterogeneity in meta-analysis

The assessment of the magnitude of heterogeneity in a meta-analysis is a crucial step for determining the appropriateness of combining results or performing meta-regression. The most popular measure of heterogeneity, I2, was derived under an assumption of homogeneity of the within-study variances, which is unlikely to hold in many meta-analyses. The alternative measure, RI, uses the harmonic mean to estimate the average of the within-study variances, and it may be biased as well. Our aim is to present a new measure, Rb, which does not depend on the definition of a within-study variance term, and to compare it with the earlier formulations. Rb quantifies the extent to which the variance of the pooled random-effects estimator is due to between-studies variation. We discuss its definition, interpretation, properties, point and interval estimation, and formal relationships with the other two measures, I2 and RI. Furthermore, we evaluate the performance of all three estimators through an extensive simulation study covering a wide range of scenarios that occur in epidemiologic practice. The use of the aforementioned measures is illustrated in a re-analysis of three published meta-analyses. The proposed measure is implemented in user-friendly functions available for routine use in R, SAS, and Stata.
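
To make the quantities concrete, here is a minimal base-R sketch on hypothetical data: it computes the DerSimonian-Laird tau^2 and the classical I2, together with an illustrative ratio in the spirit of Rb. The Rb line is only our reading of "the share of the pooled random-effects variance due to between-studies variation" (tau^2/n divided by the variance of the pooled estimator), not necessarily the authors' exact formula.

yi <- c(0.12, 0.35, -0.05, 0.28, 0.40)      # hypothetical study estimates
vi <- c(0.020, 0.050, 0.010, 0.080, 0.030)  # within-study variances

wi <- 1 / vi
mu_fe <- sum(wi * yi) / sum(wi)             # fixed-effect pooled estimate
Q  <- sum(wi * (yi - mu_fe)^2)              # Cochran's Q
k  <- length(yi)
tau2 <- max(0, (Q - (k - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))  # DL tau^2

I2 <- max(0, (Q - (k - 1)) / Q)             # classical I2

var_re <- 1 / sum(1 / (vi + tau2))          # variance of pooled RE estimator
Rb_sketch <- (tau2 / k) / var_re            # between-studies share (illustrative)
round(c(tau2 = tau2, I2 = I2, Rb = Rb_sketch), 3)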

Epidemiology 1

Presenter: Amanda Fernández-Fontelo

When: Monday, July 11, 2016      Time: 2:00 PM - 3:30 PM

Room: Saanich 1-2 (Level 1)

Session Synopsis:

Analysis of under-reported data by means of INAR-hidden Markov chains

Since the introduction of the Integer-Valued AutoRegressive (INAR) models by Al-Osh and Alzaid (1987), interest in the analysis of count time series has been growing. The main reason for this increasing popularity is the limited performance of the classical time series approach when dealing with discrete-valued time series. With the introduction of discrete time series analysis techniques, several challenges appeared, such as unobserved heterogeneity, periodicity, and under-reporting. Much effort has been devoted to introducing seasonality into these models (Moriña et al., 2011) and to coping with unobserved heterogeneity. However, the study of under-reported data is still at a quite early stage in many different fields. This phenomenon is very common in contexts such as epidemiological and biomedical research. It might lead to potentially biased inference and may also invalidate the main assumptions of the classical models. For instance, Winkelmann (1996) explores a Markov chain Monte Carlo based method to study worker absenteeism where sources of under-reporting are detected. Also in the public health context, it is well known that several diseases have traditionally been under-reported (e.g., occupation-related diseases and food exposure diseases). The model we present here considers two discrete time series: the observed series of counts Yt, which may be under-reported, and the underlying series Xt with an INAR(1) structure Xt = α ∘ Xt-1 + Wt, where 0 < α < 1 is a fixed parameter and Wt is Poisson(λ) distributed. The binomial thinning operator ∘ is defined by α ∘ Xt-1 = ∑ Zi(α), a sum of Xt-1 i.i.d. Bernoulli random variables Zi with probability of success equal to α. We allow Yt to be under-reported by defining Yt to be Xt with probability 1 - ω, or q ∘ Xt with probability ω. This definition means that, with probability 1 - ω, the observed Yt coincides with the underlying series Xt, and therefore the count at time t is not under-reported. Several examples of application of the model in the field of public health will be discussed, using real data on incidence and mortality attributable to diseases that are related to occupational and environmental exposures and to known toxics and that are traditionally under-reported.
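
The data-generating mechanism described above can be sketched in a few lines of R; the parameter values below are arbitrary illustrations, not estimates from the applications mentioned in the abstract.

set.seed(1)
n      <- 500
alpha  <- 0.5    # INAR(1) thinning parameter
lambda <- 2      # Poisson innovation mean
omega  <- 0.3    # probability that a count is under-reported
q      <- 0.6    # reporting (thinning) probability when under-reported

x <- numeric(n)
x[1] <- rpois(1, lambda)
for (t in 2:n) {
  # binomial thinning alpha o x[t-1]: x[t-1] Bernoulli(alpha) trials
  x[t] <- rbinom(1, size = x[t - 1], prob = alpha) + rpois(1, lambda)
}
hidden <- rbinom(n, 1, omega) == 1                     # under-reporting states
y <- ifelse(hidden, rbinom(n, size = x, prob = q), x)  # observed series
c(mean_latent = mean(x), mean_observed = mean(y))      # observed mean is deflated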

Epidemiology 1

Presenter: Sven Ove Samuelsen

When: Monday, July 11, 2016      Time: 2:00 PM - 3:30 PM

Room: Saanich 1-2 (Level 1)

Session Synopsis:

Correction for batch effects in nested case-control and case-cohort studies

Nested case-control (NCC) studies are traditionally analyzed by maximizing a partial likelihood under a proportional hazards model. Exposures, typically biological specimens, are often analyzed in batches. This is handled efficiently by the partial likelihood approach if cases and matched controls are placed on the same batch and batch effects are additive on the exposure measurements. Under certain circumstances, however, the partial likelihood approach may be inefficient, one instance being competing risks data where cases of different types are assigned separate controls. Then inverse probability weighting (IPW) methods, which allow control data to be reused toward all types of cases, can give more precise estimates. However, if the exposure measurements differ systematically between batches, a naïve analysis can result in attenuated parameter estimates, as demonstrated by Støer and Samuelsen (2013). In this talk we discuss methods for correcting batch effects in NCC studies analyzed with IPW. The methods discussed are adjustment, by including batch as a factor or stratification variable, and a normalization approach, in which the batch effects are estimated and the measurements corrected. The methods were compared using simulation studies and an application to a real data set. With large batch sizes (>50), all approaches appeared to correct properly for the batch effects, giving roughly unbiased estimates. However, with small batch sizes, estimates could be biased, often towards a stronger effect than the true value, in particular with stratification. Case-cohort studies are also often analyzed with IPW and may likewise be subject to bias due to batch effects. We investigated similar correction approaches for case-cohort studies; the results were in line with those for NCC studies.
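
As a rough illustration of the two correction strategies on a weighted Cox fit, consider the following R sketch using the survival package. The data, batch shifts, and weights are hypothetical; in a real NCC analysis the weights would come from estimated inclusion probabilities rather than runif().

library(survival)
set.seed(2)
n      <- 400
batch  <- factor(sample(1:8, n, replace = TRUE))
shift  <- rnorm(8)                          # additive batch effects
x_true <- rnorm(n)                          # true exposure
x_meas <- x_true + shift[batch]             # measured exposure, batch-shifted
time   <- rexp(n, rate = exp(0.5 * x_true)) # true log-hazard ratio = 0.5
event  <- rbinom(n, 1, 0.7)
w      <- runif(n, 1, 5)                    # stand-in IPW weights

fit_naive <- coxph(Surv(time, event) ~ x_meas, weights = w)  # attenuated
fit_adj   <- coxph(Surv(time, event) ~ x_meas + batch,       # batch as factor;
                   weights = w)             # use strata(batch) for stratification
x_norm    <- x_meas - ave(x_meas, batch)    # normalization: remove batch means
fit_norm  <- coxph(Surv(time, event) ~ x_norm, weights = w)
round(c(naive = coef(fit_naive)[1], adjusted = coef(fit_adj)[1],
        normalized = coef(fit_norm)[1]), 2)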

Epidemiology 1

Presenter: Lorenz Uhlmann

When: Monday, July 11, 2016      Time: 2:00 PM - 3:30 PM

Room: Saanich 1-2 (Level 1)

Session Synopsis:

Hypothesis Testing for Treatment Arm Differences in Network Meta-Analyses

Background: Network meta-analysis (NMA) has seen substantial interest and development in recent years [1]. Numerous articles address different modeling techniques. However, little is known about how to test for differences between treatment arms within an NMA model. Methods: In our contribution, we address the problem of hypothesis testing when comparing treatment arms. Frequently, odds ratios for each treatment arm compared to a common baseline arm are estimated in NMAs. We present how to test for differences between these odds ratios and how p-values can be calculated using ideas provided in [2] and [3]. The one-sided (Bayesian) hypothesis can easily be extended to a non-inferiority setting. Results: A simulation study was performed to investigate the type I error and the power of our approach. We compared two different procedures, an effect-based and an arm-based one. Interestingly, the results suggest that in some superiority settings the first approach should be chosen, while, especially in non-inferiority settings, the latter leads to more favorable results. We also applied our approach to a real data example, for which results will be presented and discussed. Conclusion: The proposed testing procedure turned out to be a useful and valid tool to test for differences between treatment arms in NMAs. The method is flexible and easy to implement. References: [1] Hoaglin DC, Hawkins N, Jansen JP, et al. (2011). Conducting indirect-treatment-comparison and network-meta-analysis studies: Report of the ISPOR task force on indirect treatment comparisons good research practices, Part II. Value Health, 14:429–437. [2] Kawasaki Y, Miyaoka E (2012). A Bayesian inference of P(π1 > π2) for two proportions. J Biopharm Stat, 22(3):425–437. [3] Kawasaki Y, Miyaoka E (2013). A Bayesian non-inferiority test for two independent binomial proportions. Pharm Stat, 12:201–206.
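
For the two-proportion case of [2], the posterior probability P(π1 > π2) can be approximated by straightforward Monte Carlo, as in the R sketch below. The event counts and the Beta(1, 1) priors are hypothetical choices for illustration; in an NMA the draws would instead come from the posterior of the treatment-versus-baseline effects.

set.seed(3)
y1 <- 45; n1 <- 100   # events / patients, arm 1 (hypothetical)
y2 <- 30; n2 <- 100   # events / patients, arm 2 (hypothetical)

draws <- 1e5
p1 <- rbeta(draws, y1 + 1, n1 - y1 + 1)  # posterior under a Beta(1, 1) prior
p2 <- rbeta(draws, y2 + 1, n2 - y2 + 1)

delta <- 0.05                            # non-inferiority margin (risk scale)
c(superiority    = mean(p1 > p2),        # P(pi1 > pi2 | data)
  noninferiority = mean(p1 > p2 - delta))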

Epidemiology 1

Presenter: Renata Yokota

When: Monday, July 11, 2016      Time: 2:00 PM - 3:30 PM

Room: Saanich 1-2 (Level 1)

Session Synopsis:

Multinomial additive hazards model to assess the disability burden using cross-sectional data

The global phenomenon of population ageing is accompanied by a growing proportion of older individuals living with chronic diseases and, consequently, disability. Chronic diseases are among the main causes of disability. Disability affects the quality of life of older individuals and is associated with increased health care costs. Identifying which chronic diseases contribute most to the disability burden is important for defining strategies to reduce disability. Although longitudinal studies can be considered the gold standard for assessing the causes of disability, they are costly and often have restricted sample sizes. Thus, the use of cross-sectional data under certain assumptions has become a popular alternative. Most existing methods using cross-sectional data are based on logistic regression, with a focus on the effect of eliminating specific causes on disability. However, the results are affected by the order in which a cause is removed, which can produce inconsistent results in the presence of multimorbidity. Furthermore, since these methods are based on a multiplicative model, they do not yield additive contributions of the causes. Analogous to mortality analysis in the presence of competing risks, the attribution method proposed by Nusselder and Looman (2004) is an attractive option, as it enables the partition of disability into the additive contributions of chronic diseases, taking multimorbidity into account. Currently, the method is based on the binomial additive hazards model, a generalized linear model with the non-canonical link function ηi = log(1/(1 − πi)), which requires a constraint on the parameter space (ηi ≥ 0) to produce valid probabilities. In this study, we extend the attribution method to allow multinomial responses, as in most surveys the disability outcome is a multicategory variable representing different severity levels (e.g., no disability, mild, and severe). We propose the use of the function "constrOptim" in R to maximize the multinomial log-likelihood function subject to the linear inequality constraint. For illustration, we assess the contribution of chronic diseases to the disability prevalence using data from the Belgian Health Interview Surveys of 1997, 2001, 2004, and 2008. Reference: Nusselder WJ, Looman CWN. Decomposition of differences in health expectancy by cause. Demography 2004;41:315–334.
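
To illustrate the mechanics of the constrained fit, the following R sketch maximizes the log-likelihood of a toy binomial additive hazards model, πi = 1 − exp(−ηi) with ηi = X β, using constrOptim() with ui = X and ci = 0 to enforce ηi ≥ 0. The simulated data and starting values are hypothetical; the multinomial extension would add one constrained linear predictor per severity level.

set.seed(4)
n  <- 300
x1 <- rbinom(n, 1, 0.3)                   # hypothetical disease indicators
x2 <- rbinom(n, 1, 0.4)
X  <- cbind(1, x1, x2)                    # design matrix with background term
beta_true <- c(0.05, 0.30, 0.15)          # nonnegative contributions
p_true <- 1 - exp(-drop(X %*% beta_true)) # additive hazards link
y  <- rbinom(n, 1, p_true)                # binary disability outcome

negloglik <- function(beta) {
  p <- 1 - exp(-drop(X %*% beta))
  -sum(y * log(p) + (1 - y) * log(1 - p))
}

fit <- constrOptim(theta = c(0.1, 0.1, 0.1),  # interior start: X %*% theta > 0
                   f = negloglik, grad = NULL,
                   ui = X, ci = rep(0, n))    # enforce eta = X %*% beta >= 0
round(fit$par, 3)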