Invited Sessions Details

A continuous approach to interpreting forensic DNA profiles

Presenter: David Balding

When: Tuesday, July 12, 2016      Time: 2:00 PM - 3:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Formulation of hypotheses in the evaluation of complex DNA evidence

For complex DNA profiles the genotype of the “contributor of interest” (CoI) is unknown under the defence case. Often this is because there are other contributors, of known and/or unknown genotype, common to the prosecution and defence hypotheses. Degraded and/or low-template DNA may also contribute to uncertainty about the genotype of the CoI. Weight of forensic evidence is best conveyed by ratios of likelihoods for hypotheses corresponding to the prosecution and defence scenarios. Recently, computational approaches based on Markov chain Monte Carlo (MCMC) or Bayesian networks have been introduced for complex DNA profiles. Under these approaches a model is fitted under the defence case only, and the LR can be decomposed into the probability, under the fitted model, that the genotype of the CoI matches that specified under the prosecution case, times the inverse match probability, which is the LR that would apply if the CoI genotype were known (such as when there is a good-quality, single-contributor crime scene profile). This product form for the LR differs from what had until recently been the standard approach of computing likelihoods separately under each hypothesis and then taking their ratio. I have labelled these two approaches as, respectively, the “scientific viewpoint” (develop a probability model and compute the probabilities of relevant unknowns only under that model) and the “legal viewpoint” (a competition between rival hypotheses). There can be fundamental differences between the two approaches. For example, nuisance parameters are fitted separately under each hypothesis for the legal viewpoint, but only under the defence hypothesis under the scientific viewpoint. This can make an important difference if the prosecution case is supported by the evidence, but for different values of the nuisance parameters than those that best fit the defence case.
Moreover, under the scientific viewpoint there is a labelling problem that requires the computed LR to be divided by the number of individuals with unknown genotype under the defence case. There is no corresponding issue for the legal viewpoint. My talk will explore these issues and give some exploratory results using my recently developed software likeLTD (version 6). I will also investigate whether it is advantageous to subdivide a low-template DNA sample in order to produce replicate profiling runs. This is joint work with Chris Steele (UCL).
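The product form of the LR described in this abstract can be sketched in a few lines of code. The genotype posterior, allele frequencies, and number of unknown contributors below are invented for illustration; they are not output from likeLTD or any real MCMC fit.

```python
# Toy sketch of the "product form" LR: the defence-model posterior probability
# that the contributor of interest (CoI) has the prosecution genotype, divided
# by the random-match probability of that genotype. All numbers are invented.

# Hypothetical single-locus allele frequencies.
allele_freq = {"A": 0.1, "B": 0.3, "C": 0.6}

def genotype_match_prob(g):
    """Random-match probability of genotype g = (a, b) under Hardy-Weinberg."""
    a, b = g
    p, q = allele_freq[a], allele_freq[b]
    return p * p if a == b else 2 * p * q

# Stand-in for the CoI genotype posterior fitted under the defence case
# (in practice this would come from MCMC or a Bayesian network).
posterior = {("A", "B"): 0.55, ("B", "B"): 0.30, ("B", "C"): 0.15}

prosecution_genotype = ("A", "B")

# LR = P(CoI genotype = prosecution genotype | defence model) / match probability.
lr = posterior[prosecution_genotype] / genotype_match_prob(prosecution_genotype)

# Labelling correction: with k individuals of unknown genotype under the
# defence case, the computed LR is divided by k (here k = 2, illustrative).
k_unknown = 2
lr_corrected = lr / k_unknown
```

Note that the match probability here is the per-locus value; a full-profile LR would multiply such factors across loci.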

A continuous approach to interpreting forensic DNA profiles

Presenter: James Curran

When: Tuesday, July 12, 2016      Time: 2:00 PM - 3:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Statistical interpretation of DNA evidence: a brief history of theory and practice.

In this talk I will discuss the changes in technology for the collection of DNA evidence from crime scene samples and the parallel evolution of statistical methods for its interpretation. DNA evidence has been a mainstay of the modern forensic scientist's toolbox for over 30 years. Over this time, there has been considerable change in both the techniques used to recover DNA information from biological samples and the sensitivity of the instrumentation. It has been necessary, therefore, for the statistical methods used to interpret this powerful evidence to change as well. I will give an overview of the changes in technology and the subsequent changes that we, as statisticians, have made to our methods of interpretation.

A continuous approach to interpreting forensic DNA profiles

Presenter: Catherine Grgicak

When: Tuesday, July 12, 2016      Time: 2:00 PM - 3:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Effects of laboratory and analysis decisions on the LR and its distribution

Findings of the NIST MIX 13 study highlight the need to produce forensic DNA processes that give rise to consistent results [1]. Efforts to reduce the level of variability have culminated in the development of probabilistic interpretation systems, with the expectation that these approaches will reduce uncertainty and subjectivity in forensic mixture interpretation. However, the entire forensic process is complex, and numerous processing and analytic decisions occur prior to interpretation, all of which may impact the final statistic. In this session, we examine the impact of the AT (analytical threshold) on interpretation. Typically, raw signal is analyzed using an allele detection program or module. At this point, the signal is filtered through the application of an AT. Ways in which the AT should or could be set have been described, and a review on the subject is available in [2]. The resultant ATs are usually in the tens of RFU (e.g., ~50). However, with the advent of probabilistic systems, large ATs may no longer be necessary. Thus, we reassess the methods by which the AT could be determined within the continuous interpretation paradigm. We amplified a set of low-template samples containing approximately 8 and 15 pg of DNA. We plotted the histogram of signal for each locus and observed at least three seemingly distinct peaks. It is suggested that the first signal-group (median 4) consists largely of instrumental noise. The second group (median 24) is the signal when one copy of DNA is amplified, and the third (median 47) is the signal obtained when two copies of DNA are amplified. We then examined the signal when these same samples were injected for twice as long. We found the same multi-modal pattern, but in this instance the first and second signal-groups were at 4-11 and 36-65 RFU, respectively.
These data suggest it may be possible for laboratories to choose an AT that ensures all amplicon signal is imported into the interpretation systems, while maintaining the ability to filter most of the noise. Lastly, we examine the impact of utilizing an optimized AT on the LR and its distribution obtained from CEESIt, a continuous probabilistic system available at [3]. [1] http://www.nist.gov/forensics/upload/coble.pdf. [2] Bregu J, Conklin D, Coronado E, Terrill M, Cotton RW, Grgicak CM. Analytical thresholds and sensitivity: establishing RFU thresholds for forensic DNA analysis. J Forensic Sci. 2012;58:120-9. [3] http://www.bu.edu/dnamixtures.
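The idea of placing the AT between the noise mode and the allele-signal modes can be illustrated with simulated peak heights. The distributions, sample sizes, and quantile rule below are assumptions for the sketch, not the authors' data or method; one simple rule is to set the AT at a high quantile of the noise distribution.

```python
# Illustrative sketch: choose an analytical threshold (AT) that filters most
# instrumental noise while retaining allele signal. Peak heights are simulated
# to mimic the three signal groups described in the abstract (medians ~4, ~24,
# ~47 RFU); distributions and parameters are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

noise  = rng.lognormal(mean=np.log(4),  sigma=0.4,  size=3000)  # noise group
one_cp = rng.lognormal(mean=np.log(24), sigma=0.3,  size=400)   # one-copy group
two_cp = rng.lognormal(mean=np.log(47), sigma=0.25, size=200)   # two-copy group
signal = np.concatenate([noise, one_cp, two_cp])

def analytical_threshold(noise_like, quantile=0.999):
    """One simple AT rule (an assumption here, not the authors'): take a high
    quantile of the noise distribution, so nearly all noise falls below the AT
    while most true allele signal passes through to the interpretation system."""
    return float(np.quantile(noise_like, quantile))

at = analytical_threshold(noise)
kept = signal[signal >= at]
print(f"AT = {at:.1f} RFU; peaks retained: {kept.size} of {signal.size}")
```

With a probabilistic interpretation system downstream, the AT can sit much lower than a conventional ~50 RFU threshold, since the model itself accounts for low-level signal.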

A continuous approach to interpreting forensic DNA profiles

Presenter: Simone Gittelson

When: Tuesday, July 12, 2016      Time: 2:00 PM - 3:30 PM

Room: Oak Bay 1-2 (Level 1)

Session Synopsis:

Semi-continuous vs fully continuous models for the interpretation of DNA mixtures

The interpretation of DNA mixtures involves uncertainty about the donors' genotypes. A probabilistic model is required to quantify the likelihood of each donor's genotype. Two main categories of models are currently available within a probabilistic DNA mixture interpretation framework: a semi-continuous model and a fully continuous model. A semi-continuous model considers an allelic signal to be either present or absent for each allele and assigns probabilities to the event of presence or absence of each allele. A fully continuous model characterizes each allelic signal by its peak height. It assigns probability densities to the observed peak heights based on a model of the expected peak heights. This presentation explains and compares the results obtained by these two approaches for the interpretation of DNA mixtures.
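The contrast between the two model classes can be sketched for a single allele at one locus. The dropout probability, expected peak height, and normal peak-height model below are illustrative assumptions, not parameters of any validated interpretation system.

```python
# Toy contrast of the two model categories for one allele of a known donor.
# All parameter values are invented for illustration.
import math

def semi_continuous_lik(observed_present, d=0.1):
    """Semi-continuous: the allele is scored present/absent only. Given the
    donor carries the allele, it is observed with probability 1 - d, where d
    is an assumed dropout probability."""
    return (1 - d) if observed_present else d

def fully_continuous_lik(peak_height, expected=200.0, sd=40.0):
    """Fully continuous: the observed peak height gets a probability density
    around its expected height (a simple normal model, assumed here)."""
    z = (peak_height - expected) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# A 190 RFU peak: the semi-continuous model only registers "present",
# while the continuous model also rewards closeness to the expected height.
print(semi_continuous_lik(True))   # 0.9
print(fully_continuous_lik(190.0))
```

The extra information in the peak heights is what lets fully continuous models discriminate between genotype configurations that a semi-continuous model scores identically.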