The PRIISM seminar series consists of research seminars of interest to an applied statistics audience, ranging from innovative applications of applied statistics to novel statistical theory and methodology. Click here for information about the current seminar series or for an archive of events from 2008–2015, or sign up for our mailing list to receive reminders about upcoming events.

Previous Events: Fall 2015 - Spring 2018

Past Seminars
Date, Time, Location | Talk Category | Speaker Name, Affiliation | Topic
9/16/2015, 11-12
3rd Fl. Conf. Rm, Kimball
 Didactic Pat Shrout
(NYU)
Speaker: Pat Shrout is a Professor of Psychology at NYU. His methodologic research has been primarily in psychometrics, sampling, and multilevel models for analysis of growth and change.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: Nearly 50 years ago, Lord (1967) described a so-called paradox in statistical analysis whereby two reasonable analyses of pre-treatment/post-treatment data lead to different results. I revisit the issues, review some of the historical discussion, and present an analysis of the alternate analyses with a causal model that distinguishes treatment effects from trait, state, and error variation. In addition to comparing numerical results from difference score and ANCOVA adjustment for pre-treatment group differences, I consider results based on propensity score adjustment.

10/7/2015, 11-12
3rd Fl. Conf. Rm, Kimball

Data for Social Impact Charles Lang
(NYU)
Speaker: Charles Lang is a Postdoctoral Associate in the Department of Administration, Leadership & Technology at NYU's Steinhardt School of Culture, Education & Human Development. He received his doctorate in Human Development and Education from the Harvard Graduate School of Education and studies methodologies for capturing learning within the nascent field of learning analytics.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: For over a century educational measurement has developed analytical tools designed to maximize the inferential power of limited samples: a biannual state test, a regular accreditation exam, a once-in-a-lifetime SAT. But can this methodology adapt to a world in which previous limitations on data collection have been dramatically reduced: a world with a greater variety of data formats, representing a larger number of conditions, on a finer timescale, with a larger sample of students? Starting from a methodological basis, Charles will discuss the implications that changes in data collection may have on how education is measured and the impact that this might have on the disciplines, institutions, and practitioners that utilize educational measurement.

10/14/2015, 10:30 - 11:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Bryan Keller
(TC, Columbia)
Speaker: Bryan Keller is Assistant Professor of Applied Statistics at Teachers College, Columbia University. His current research interests include causal inference and applications of data mining methods to social and education sciences. His scholarly work has been published in Structural Equation Modeling, Psychometrika, and Multivariate Behavioral Research.


Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: In an effort to protect against omitted variable bias, statisticians have traditionally favored an inclusive approach to covariate selection for causal inference, so long as covariates were measured before any treatment was administered. There are, however, three classes of variables, which, if conditioned upon, are known to increase the bias or reduce the efficiency of an estimate of a causal effect: non-informative variables (NVs), instrumental variables (IVs), and collider variables. The decision about whether to control for a potential collider variable must be based on theory about how the data were generated. In contrast, one need only establish a lack of association with the outcome variable in order to identify an NV or an IV. We investigate three empirical methods – forward stepwise selection, the lasso, and recursive feature elimination with random forests – for detection of NVs and IVs through simulation studies in which we judge their efficacy by (a) sensitivity and specificity in identifying true or near NVs and IVs and (b) the overall effect on bias and mean-squared error of the causal effect estimator, relative to inclusion of all pretreatment variables. Results and implications are discussed.
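As a rough illustration of the kind of screening the talk evaluates, the sketch below uses the lasso (one of the three methods compared) to flag pretreatment covariates with no detectable association with the outcome; the data and variable names are hypothetical, not from the study.

```python
# Illustrative sketch (not the authors' code): screening pretreatment covariates
# by their association with the outcome using the lasso. Data are simulated.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))          # pretreatment covariates
beta = np.zeros(p); beta[:5] = 1.0   # only the first 5 covariates affect the outcome
y = X @ beta + rng.normal(size=n)    # outcome (treatment effect omitted for simplicity)

lasso = LassoCV(cv=5).fit(X, y)
keep = np.flatnonzero(lasso.coef_ != 0)
print("Covariates retained for the causal model:", keep)
# Covariates whose coefficients are shrunk to zero are candidate NVs or IVs:
# they show no detectable association with the outcome.
```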

Thurs 11/19/2015,10:30 - 12 
5th Fl. Conf. Rm, Pless Hall

Statistical Methodology Russell Steele
(McGill)
Speaker: Russell Steele is an Associate Professor in the Department of Mathematics and Statistics at McGill University. Prof. Steele's primary statistical methodological interests lie in the areas of methods for analyzing data with missing values and model selection, although he is more broadly interested in statistical applications. He has a broad range of substantive interests in medicine, publishing work in rheumatology, sports medicine, and design and interpretation of meta-analyses.


Location: Pless Hall, 32 Washington Pl, 5th Floor Conference Room

Abstract: In randomized clinical trials, subjects often do not comply with their randomized treatment arm. Although one can still unbiasedly estimate the causal effect of being assigned to treatment using the common Intention-to-Treat (ITT) estimator, there is now potential confounding of the causal effect of actually *receiving* treatment. Basic alternative estimators such as the per protocol or as treated estimators have been used, but are generally biased for estimating the causal effect of interest. Balke and Pearl (1997) and Angrist et al. (1996) independently proposed an instrumental variable (IV) estimator that would estimate the causal effect (the Complier Average Causal Effect — CACE) of receiving treatment in a subpopulation of people who would comply with treatment assignment (i.e. the compliers). In this talk, I will first review the CACE and the IV estimator. I will then dissect the instrumental variable estimator in order to compare it to the per protocol and as treated estimators. I will show that the basic IV estimator and its confidence interval can be computed from basic summary statistics that should be reported in any randomized trial. My formulation of the IV estimator will also allow for simple sensitivity analyses that can be done using a basic Excel spreadsheet. I will then describe future interesting directions for compliance research that I am currently working on. Most of this work appears in a recently published article in the American Journal of Epidemiology and is co-authored by Ian Shrier, Jay Kaufmann and Robert Platt.
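A minimal sketch of the point that the basic IV (Wald) estimator of the CACE can be computed from trial summary statistics; the numbers below are hypothetical, not from the talk.

```python
# Wald/IV estimator of the CACE from summary statistics of a two-arm trial
# with one-sided noncompliance. All numbers are made up for illustration.
p_outcome_treat_arm = 0.40    # outcome rate among those randomized to treatment
p_outcome_control_arm = 0.30  # outcome rate among those randomized to control
p_received_treat_arm = 0.75   # proportion of the treatment arm that actually received treatment
p_received_control_arm = 0.00

itt = p_outcome_treat_arm - p_outcome_control_arm            # effect of assignment on the outcome
compliance = p_received_treat_arm - p_received_control_arm   # effect of assignment on treatment receipt
cace = itt / compliance                                      # complier average causal effect
print(f"ITT = {itt:.3f}, CACE = {cace:.3f}")
```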

12/2/2015, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Barry Cohen
(NYU)
Speaker: Barry Cohen received his PhD in Experimental Psychology from NYU and is currently a clinical associate professor in the Department of Psychology (GSAS) at NYU, where he teaches graduate courses in statistics and research design.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: The arguments against null hypothesis significance testing (NHST) have been greatly exaggerated, and do not apply equally to all types of psychological research. I will discuss the conditions under which NHST serves several useful purposes, which may outweigh its undeniable drawbacks. In brief, NHST works best when the null hypothesis is rarely true, the direction of the results is more important than the magnitude, extremely large samples are not used, and tiny effects have no serious consequences. Priming studies in social psychology will be used as an example of this type of research. Part of the controversy over failures to replicate notable psychological studies is related to misunderstandings and misuses of NHST. I will conclude by discussing the resistance to banning NHST and its p values in favor of reports of effects sizes and/or confidence intervals, and describing some of the possible solutions to the drawbacks of NHST.

1/27/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Elizabeth Tipton
(TC)
Speaker: Elizabeth Tipton is an Assistant Professor of Applied Statistics in the Human Development Department at Teachers College, Columbia University. Her research focuses on the design and analysis of field experiments; issues of external validity and generalizability in experiments; and meta-analysis, particularly of dependent estimates.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: Data analysts commonly ‘cluster’ their standard errors to account for correlations arising from the sampling of aggregate units (e.g., states), each containing multiple observations. When the number of clusters is small to moderate, however, this approach can lead to biased standard errors and hypothesis tests with inflated Type I error. One solution that is receiving increased attention is the use of bias-reduced linearization (BRL). In this paper, we extend the BRL approach to include an F-test that can be implemented in a wide range of applications. A simulation study reveals that this test has Type I error close to nominal even with a very small number of clusters, and, importantly, that it outperforms the usual estimator even when the number of clusters is moderate (e.g., 50 – 100).
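For orientation, the sketch below shows the standard clustered standard errors that the talk improves upon (statsmodels implements the usual cluster-robust correction, not the BRL adjustment or the proposed F-test); the data are simulated and purely illustrative.

```python
# Simulate clustered data and compare naive OLS standard errors with the
# usual cluster-robust ("clustered") standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_clusters, n_per = 20, 30
g = np.repeat(np.arange(n_clusters), n_per)            # cluster (e.g., state) indicator
u = rng.normal(size=n_clusters)[g]                     # shared cluster-level shock
x = rng.normal(size=n_clusters * n_per)
y = 0.5 * x + u + rng.normal(size=n_clusters * n_per)

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                             # ignores clustering
clustered = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": g})
print("naive SE:", naive.bse[1], "clustered SE:", clustered.bse[1])
```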

Mon 2/1/2016, 10 - 11:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Tian Zheng
(Columbia)
Speaker: Tian Zheng is an Associate Professor of Statistics in the Statistics Department at Columbia University. Her research focuses on developing novel methods and improving existing methods for exploring and analyzing interesting patterns in complex data from different application domains.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: Measuring the impact of scientific articles is important for evaluating the research output of individual scientists, academic institutions and journals. While citations are raw data for constructing impact measures, there exist biases and potential issues if factors affecting citation patterns are not properly accounted for. In this talk, I present a new model that aims to address the problem of field variation and introduce an article level metric useful for evaluating individual articles’ topic-adjusted visibility. This measure derives from joint probabilistic modeling of the content in the articles and the citations amongst them using latent Dirichlet allocation (LDA) and the mixed membership stochastic blockmodel (MMSB). This proposed model provides a visibility metric for individual articles adjusted for field variation in citation rates, a structural understanding of citation behavior in different fields, and article recommendations which take into account article visibility and citation patterns. For this work, we also developed an efficient algorithm for model fitting using variational methods. To scale up to large networks, we developed an online variant using stochastic gradient methods and case-control likelihood approximation. Results from an application of our methods to the benchmark KDD Cup 2003 dataset with approximately 30,000 high energy physics papers will also be presented.

2/10/2016, 2:30 - 4:00
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Tyler McCormick
(Seattle)
Speaker: Tyler McCormick is an Assistant Professor in the Departments of Statistics and Sociology at the University of Washington, Seattle.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: In regions without complete-coverage civil registration and vital statistics systems there is uncertainty about even the most basic demographic indicators. In such areas the majority of deaths occur outside hospitals and are not recorded. Worldwide, fewer than one-third of deaths are assigned a cause, with the least information available from the most impoverished nations. In populations like this, verbal autopsy (VA) is a commonly used tool to assess cause of death and estimate cause-specific mortality rates and the distribution of deaths by cause. VA uses an interview with caregivers of the decedent to elicit data describing the signs and symptoms leading up to the death. This paper develops a new statistical tool known as InSilicoVA to classify cause of death using information acquired through VA. InSilicoVA shares uncertainty between cause of death assignments for specific individuals and the distribution of deaths by cause across the population. Using side-by-side comparisons with both observed and simulated data, we demonstrate that InSilicoVA has distinct advantages compared to currently available methods.

Mon 2/22/2016, 12 - 1:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology José González-Brenes
(Pearson)
Speaker: Dr. José González-Brenes is a research scientist at Pearson. His work develops principled quantitative methods that enable faster, better, and less expensive education.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: Seminal results from cognitive science suggest that personalized education is effective to improve learners’ outcomes. However, the effort for instructors to create content for each of their students can sometimes be prohibitive. Recent progress in machine learning has enabled technology for teachers to deliver personalized education. Unfortunately, the statistical models used by these systems are often tailored for ad-hoc domains and do not generalize across applications. In this talk, I will discuss my work towards the goal of a unified statistical framework of human learning. This line of work is more flexible, more efficient, and more accurate than previous technology. Moreover, it generalizes previous popular models from the literature. Additionally, I will outline recent progress on novel methodology to evaluate statistical models for education with a learner-centric perspective. My findings suggest that prior work often uses evaluation methods that may misrepresent the educational value of educational systems. My work is a promising alternative that improves the evaluation of machine learning models in education.

2/24/2016, 12 - 1:30 
CDS 726 Broadway

Statistical Methodology Michael Betancourt
(Warwick)
Speaker: Michael Betancourt earned his PhD in Physics from MIT and is currently a Postdoctoral Research Associate at the University of Warwick.

Location: Center for Data Science, 726 Broadway, 7th floor

Abstract: The modern preponderance of data has fueled a revolution in data science, but the complex nature of those data also limits naive inferences. To truly take advantage of these data we also need tools for building and fitting statistical models that capture those complexities. In this talk I’ll discuss some of the practical challenges of building and fitting such models in the context of real analyses. I will particularly emphasize the importance of Hamiltonian Monte Carlo and Stan, state-of-the-art computational tools that allow us to tackle these contemporary data without sacrificing the fidelity of our inferences.

3/23/2016, 11:00 - 12:00
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Adriana Crespo-Tenorio
(Facebook)
Speaker: Adriana Crespo-Tenorio, PhD is on a mission to connect people’s online behavior to their offline lives. As a researcher on the Ads Research team at Facebook, her work focuses on finding the best ways for digital advertising to break through to audiences in a mobile world and link users' Feed experience to outcomes IRL. Adriana joined Facebook after working at The New York Times’s Customer Insights Group. She holds a PhD in political economy and applied statistics from Washington University in St Louis.

Location: Kimball Hall, 246 Greene St, 3rd Floor Conference Room

Abstract: Bivariate probit models are a common choice for scholars wishing to estimate causal effects in instrumental variable models where both the treatment and outcome are binary. However, standard maximum likelihood approaches for estimating bivariate probit models are problematic. Numerical routines in common software suites frequently generate inaccurate parameter estimates, and even when estimated correctly, maximum likelihood routines provide no straightforward way to produce estimates of uncertainty for causal quantities of interest. In this article, we show that adopting a Bayesian approach provides more accurate estimates of key parameters and facilitates the direct calculation of causal quantities along with their attendant measures of uncertainty.

4/6/2016, 11:00 - 12:00 
CDS 726 Broadway

Statistical Methodology Ilya Shpitser
(JHU)
Speaker: Ilya Shpitser is an Assistant Professor in the Department of Computer Science at Johns Hopkins University. His research includes all areas of causal inference and missing data, particularly using graphical models. Many recent applications of his work involve teasing out causation from association in observational medical data.

Location: Center for Data Science, 726 Broadway, 7th floor

Abstract: Modern causal inference links the "top-down" representation of causal intuitions and "bottom-up" data analysis with the aim of choosing policy. Two innovations that proved key for this synthesis were a formalization of Hume's counterfactual account of causation using potential outcomes (due to Jerzy Neyman), and viewing cause-effect relationships via directed acyclic graphs (due to Sewall Wright). I will briefly review how a synthesis of these two ideas was instrumental in formally representing the notion of "causal effect" as a parameter in the language of potential outcomes, and discuss a complete identification theory linking these types of causal parameters and observed data, as well as approaches to estimation of the resulting statistical parameters. I will then describe, in more detail, how my collaborators and I are applying the same approach to mediation, the study of effects along particular causal pathways. I consider mediated effects at their most general: I allow arbitrary models, the presence of hidden variables, multiple outcomes, longitudinal treatments, and effects along arbitrary sets of causal pathways. As was the case with causal effects, there are three distinct but related problems to solve -- a representation problem (what sort of potential outcome does an effect along a set of pathways correspond to), an identification problem (can a causal parameter of interest be expressed as a functional of observed data), and an estimation problem (what are good ways of estimating the resulting statistical parameter). I report a complete solution to the first two problems, and progress on the third. In particular, my collaborators and I show that for some parameters that arise in mediation settings, triply robust estimators exist, which rely on an outcome model, a mediator model, and a treatment model, and which remain consistent if any two of these three models are correct. Some of the reported results are joint work with Eric Tchetgen Tchetgen, Caleb Miles, Phyllis Kanki, and Seema Meloni.

9/14/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Stephen H. Bell
(Abt Associates)
Speaker: Dr. Stephen Bell is an Abt Associates Fellow who holds a Ph.D. in Economics from the University of Wisconsin-Madison. He has designed and analyzed more than a dozen large-scale social experiments of policy interventions to assist disadvantaged Americans, with current work focusing on a slate of papers for IES and NSF on making findings of rigorous impact evaluations more generalizable to the nation and other inference populations. His research on methodologies for measuring social program impacts, both experimental and quasi-experimental econometric techniques, has been widely published. The work presented is collaborative with Elizabeth A. Stuart, Robert B. Olsen, and Larry L. Orr.

Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: Randomized impact evaluations of social and educational interventions—while constituting the “gold standard” of internal validity due to the lack of selection bias between treated and untreated cases—usually lack external validity. Due to cost and convenience, or local resistance, they are almost always conducted in a set of sites that are not a probability sample of the desired inference population: the nation as a whole for social programs or a given state or school district for educational innovations. We use statistical theory and data from the Reading First evaluation to examine the risks and consequences for social experiments of non-representative site selection, asking when and to what degree policy decisions are led astray by tarnished “gold standard” evidence. We also explore possible ex ante design-based solutions to this problem and the performance of ex post methods in the literature for overcoming non-representative site selection through analytic adjustments after the fact.

9/21/2016, 10:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Didactic Vincent Dorie
(NYU)
Speaker: Vince is a postdoc in the NYU PRIISM program working on causal inference and nonparametrics. His recent work includes the causal inference competition at the 2016 Atlantic Causal Inference Conference and software for performing semiparametric sensitivity analyses that evaluate the validity of the ignorability assumption in causal inference.

Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: This two-hour session is focused on getting started with Stan and how to use it in your research. Stan is an open-source Bayesian probabilistic programming environment that takes much of the work out of model fitting so that researchers can focus on model building and interpretation. Topics will include an overview of Bayesian statistics, an overview of Stan and MCMC, writing models in Stan, and a tutorial session where participants can write a model on their own or develop models that they have been working on independently. Stan has interfaces to numerous programming languages, but the talk will focus on R; a minimal sketch of the workflow appears below.

NOTE: Please bring a laptop with RStudio and RStan installed to this session
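As a small taste of the workflow the session covers, here is a minimal Bernoulli model compiled and sampled from Python. The session itself uses RStan; this sketch instead assumes the cmdstanpy interface and a working CmdStan installation.

```python
# Write a tiny Stan program, compile it, and draw posterior samples.
from cmdstanpy import CmdStanModel

stan_code = """
data {
  int<lower=0> N;
  array[N] int<lower=0, upper=1> y;
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);        // prior
  y ~ bernoulli(theta);      // likelihood
}
"""

with open("bernoulli.stan", "w") as f:
    f.write(stan_code)

model = CmdStanModel(stan_file="bernoulli.stan")
fit = model.sample(data={"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]})
print(fit.summary())
```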

10/5/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Data for Social Impact Leanna House
(Virginia Tech)
Speaker: Leanna House is an Associate Professor of Statistics at Virginia Tech (VT), Blacksburg, Virginia and has been at VT since 2008. Prior to VT, she worked at Battelle Memorial Institute, Columbus, Ohio; received her Ph.D. in Statistics from Duke University, Durham, North Carolina in 2006; and subsequently served as a post-doctoral research associate for two years in the Department of Mathematical Sciences at Durham University, Durham, United Kingdom. Dr. House has authored or co-authored 25 journal papers and has been a strong statistical contributor to successful grant proposals including “NRT-DESE: UrbComp: Data Science for Modeling, Understanding, and Advancing Urban Populations”, “Usable Multiple Scale Big Data Analytics Through Interactive Visualization”, “Critical Thinking with Data Visualization”, “Examining the Taxonomic, Genetic, and Functional Diversity of Amphibian Skin Microbiota”, and “Bayesian Analysis and Visual Analytics”.

Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: Datasets, no matter how big, are just tables of numbers without individuals to learn from the data, i.e., discover, process, assess, and communicate information in the data. Data visualizations are often used to present data to individuals, but most are created independently of human learning processes and lack transparency. To bridge the gap between people thinking critically about data and the utility of visualizations, we developed Bayesian Visual Analytics (BaVA) and its deterministic form, Visual to Parametric Interaction (V2PI). BaVA and V2PI transform static images of data into dynamic versions that respond to expert feedback. When applied iteratively, experts may explore data progressively in a sequence that parallels their personal sense-making processes. BaVA and V2PI have proven useful in both industry settings and the classroom. For example, we merged V2PI with motion detection software to create Be the Data. In Be the Data, students physically move in a space to communicate their expert feedback about data projected overhead. The idea is that participants have an opportunity to explore analytical relationships between data points by exploring relationships between themselves. This talk will focus on presenting the BaVA paradigm and its education applications.

10/19/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Paul De Boeck
(Ohio State University)
Speaker: Paul De Boeck is a professor of quantitative psychology at the Ohio State University. Before moving to OSU in 2012 he was a professor of psychological methods at the University of Amsterdam (Netherlands) and a professor of psychological assessment at KU Leuven (Belgium). He was president of the Psychometric Society in 2008 and he is the founding editor of the Applied Research and Case Studies section of Psychometrika. His research interests are generalized linear mixed models and explanatory item response theory, and applications of these approaches in the domains of individual differences in cognition, emotion, and psychopathology. More recently, he has been writing about the credibility crisis in psychology and about feasible but perhaps uncommon methods that may be useful as a response to the crisis.

Location: Kimball Hall, 246 Greene Street, 3rd floor

Abstract: From a recent Science article reporting a large number of replications of psychological studies, the base rate of the null hypothesis of no effect can be estimated. It turns out to be extremely high, which implies that many research hypotheses are false. As I will explain, they are perhaps not fully false but mostly false. A possible explanation for why unlikely hypotheses tend to be selected for empirical studies can be found in expected utility theory. It can be shown that for low to moderately high power rates, the expected utility of studies increases with the probability of the null hypothesis being true. A high probability of the null hypothesis being true can be understood as reflecting a contextual variation of effects that are in general not much different from zero. Increasing the power of studies has become a popular remedy to counter the replicability crisis, but this strategy is highly misleading if effects vary. Meta-analysis is considered another remedy, but it is a suboptimal and labor-intensive approach and only a long-term method. Two more feasible methods will be discussed to deal with contextual variation.

11/2/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Catherine Calder
(Ohio State University)
Speaker: Catherine (“Kate”) Calder is professor of statistics at The Ohio State University, where she has served on the faculty since 2003. Her research interests include spatial statistics, Bayesian modeling and computation, and network analysis, with application to problems in the social, environmental, and health sciences.

Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: An affiliation network is a particular type of two-mode social network that consists of a set of 'actors' and a set of 'events', where ties indicate an actor's participation in an event. Methods for the analysis of affiliation networks are particularly useful for studying patterns of segregation and integration in social structures characterized by both people and potentially shared activities (e.g., parties, corporate board memberships, church attendance). One way to analyze affiliation networks is to consider one-mode network matrices that are derived from an affiliation network, but this approach may lead to the loss of important structural features of the data. The most comprehensive approach is to study both actors and events simultaneously. Statistical methods for studying affiliation networks, however, are less well developed than methods for studying one-mode, or actor-actor, networks. In this talk, I will describe a bilinear generalized mixed-effects model, which contains interacting random effects representing common activity pattern profiles and shared patterns of participation in these profiles. I will demonstrate how the proposed model is able to capture fourth-order dependence, a common feature of affiliation networks, and describe a Markov chain Monte Carlo algorithm for Bayesian inference. I then will use the latent space interpretation of model components to explore patterns in extracurricular activity membership of students in a racially diverse high school in a Midwestern metropolitan area. Using techniques from spatial point pattern analysis, I will show how our model can provide insight into patterns of racial segregation in the voluntary extracurricular activity participation profiles of adolescents. This talk is based on joint work with Yanan Jia and Chris Browning.
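A tiny sketch of the one-mode projections mentioned above: starting from an actor-by-event affiliation matrix B, the product B Bᵀ counts shared events between actors and Bᵀ B counts shared actors between events. The matrix here is made up for illustration.

```python
# One-mode projections of a two-mode (affiliation) network.
import numpy as np

# rows = actors, columns = events; 1 = actor participated in event
B = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 0, 1]])

actor_projection = B @ B.T   # (i, j): number of events actors i and j co-attended
event_projection = B.T @ B   # (k, l): number of actors events k and l share
print(actor_projection)
# The talk's point: these projections can lose structural information that a
# model for the full two-mode network retains.
```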

12/7/2016, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Kathryn Vasilaky
(Columbia University)
Speaker: Kathryn is a postdoc at Columbia University's Earth Institute. Her PhD is in applied economics, with interests in development economics and applied statistics.

Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: An iterative method is introduced for solving noisy, ill-conditioned inverse problems, where standard ridge regression is just the first iteration of the method to be presented. In addition to the regularization parameter, lambda, we introduce an iteration parameter k, which generalizes the ridge regression. The derived noise-damping filter is a generalization of the standard ridge regression filter (also known as the Tikhonov filter). The generalized solution performs better than the pseudo-inverse (the default solution to OLS in most statistical packages) and better than standard ridge regression (L2 regularization) when the covariate or design matrix is ill-conditioned or highly collinear. A few examples are presented using both simulated and real data.
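A minimal sketch of the iterated-ridge idea described above, using a generic iterated Tikhonov scheme rather than the speaker's own code: with a zero starting value, the first iterate is exactly the standard ridge solution, and further iterations k > 1 apply additional damping.

```python
# Iterated Tikhonov (generalized ridge) regression on an ill-conditioned design.
import numpy as np

def iterated_ridge(A, b, lam, k):
    """Return the k-th iterated Tikhonov estimate for A x ~= b."""
    p = A.shape[1]
    M = A.T @ A + lam * np.eye(p)
    x = np.zeros(p)
    for _ in range(k):
        x = x + np.linalg.solve(M, A.T @ (b - A @ x))  # residual-correction step
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 5))
A[:, 4] = A[:, 3] + 1e-3 * rng.normal(size=100)        # make the design nearly collinear
b = A @ np.array([1.0, 0.5, 0.0, 2.0, -2.0]) + rng.normal(size=100)

print("ridge (k=1):         ", iterated_ridge(A, b, lam=1.0, k=1))
print("iterated ridge (k=5):", iterated_ridge(A, b, lam=1.0, k=5))
```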

2/28/2017, 9:30 - 10:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Sam Pimentel
(UPenn)
Abstract: How do health outcomes for newly-trained surgeons' patients compare with those for patients of experienced surgeons? To answer this question using data from Medicare, we introduce a new form of matching that pairs patients of 1252 new surgeons to patients of experienced surgeons, exactly balancing 176 surgical procedures and closely balancing 2.9 million finer patient categories. The new matching algorithm (which uses penalized network flows) exploits a sparse network to quickly optimize a match two orders of magnitude larger than is usual in statistical matching, while allowing for extensive use of a new form of marginal balance constraint.

3/1/2017, 12:30 - 1:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Patrick Perry
(NYU Stern)
Abstract: Probabilistic methods for classifying texts according to the likelihood of class membership form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is either uninteresting, because it is known, or uninformative, because it yields poor information about a latent quantity of interest. In scaling political speeches, for instance, party membership is both known and uninformative, in the sense that in systems with party discipline, what is interesting is a latent trait in the speech, such as ideological position, often at odds with party membership. Predictive tools common in machine learning, where the goal is to predict a black-or-white class--such as spam, sentiment, or authorship--are not directly designed for the measurement problem of estimating latent quantities, especially those that are not observable through direct means.

In this talk, I present a method for modeling texts not as black or white representations, but rather as explicit mixtures of perspectives. The focus shifts from predicting an unobserved discrete label to estimating the mixture proportions expressed in a text. In this "shades of gray" worldview, we are able to estimate not only the graynesses of texts but also those of the words making up a text, using likelihood-based inference. While this method is novel in its application to text, it can be situated in and compared to known approaches such as dictionary methods, topic models, and the wordscores scaling method. This new method has a fundamental linguistic and statistical foundation, and exploring this foundation exposes implicit assumptions found in previous approaches. I explore the robustness properties of the method and discuss issues of uncertainty quantification. My motivating application throughout the talk will be scaling legislative debate speeches.

3/9/2017, 11:30 - 12:30 
3rd Fl. Conf. Rm, Kimball

Data for Social Impact Ravi Shroff
(NYU CUSP)
Abstract: Doctors, judges, and other experts typically rely on experience and intuition rather than statistical models when making decisions, often at the cost of significantly worse outcomes. I'll present a simple and intuitive strategy for creating statistically informed decision rules that are easy to apply, easy to understand, and perform on par with state-of-the-art machine learning methods in many settings. I'll illustrate these rules with two applications to the criminal justice system: investigatory stop decisions and pretrial detention decisions.

3/22/2017, 2:00 - 3:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Sharif Mahmood
(KSU)
Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: Finding treatment effects in observational studies is complicated by the need to control for confounders. Common approaches to controlling for confounders include using prognostically important covariates to form groups of similar units containing both treatment and control units (e.g., statistical matching) and/or modeling responses through interpolation. Hence, treatment effects are only reliably estimated for a subpopulation for which a common support assumption holds--one in which the treatment and control covariate spaces overlap. Given a distance metric measuring dissimilarity between units, we use techniques from graph theory to find common support. We construct an adjacency graph where edges are drawn between similar treated and control units. We then determine regions of common support by finding the largest connected components (LCC) of this graph. We show that LCC improves on existing methods by efficiently constructing regions that preserve clustering in the data while ensuring interpretability of the region through the distance metric. We apply our LCC method to a study of the effectiveness of right heart catheterization (RHC). To further control for confounders, we implement six matching algorithms for analyses. We find that RHC is a risky procedure for patients and that clinical outcomes are significantly worse for patients who undergo RHC.
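A small sketch of the LCC idea described above: draw an edge between a treated and a control unit whenever their covariate distance falls below a caliper, then keep the largest connected component as the region of common support. The distance, caliper, and data are all illustrative, not from the RHC study.

```python
# Largest connected component of a treated-control adjacency graph.
import numpy as np
import networkx as nx
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
X_treat = rng.normal(loc=0.5, size=(30, 2))   # covariates of treated units
X_ctrl = rng.normal(loc=0.0, size=(40, 2))    # covariates of control units
caliper = 0.5

D = cdist(X_treat, X_ctrl)                    # pairwise treated-control distances
G = nx.Graph()
G.add_nodes_from(("t", i) for i in range(len(X_treat)))
G.add_nodes_from(("c", j) for j in range(len(X_ctrl)))
for i, j in zip(*np.nonzero(D < caliper)):
    G.add_edge(("t", i), ("c", j))            # similar treated-control pair

lcc = max(nx.connected_components(G), key=len)
print(f"largest connected component contains {len(lcc)} of {G.number_of_nodes()} units")
```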

3/23/2017, 11:00 - 12:30 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Winston Lin
Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: This talk will be mostly based on my 2013 Annals of Applied Statistics paper, which reexamines David Freedman's critique of ordinary least squares regression adjustment in randomized experiments. Random assignment is intended to create comparable treatment and control groups, reducing the need for dubious statistical models. Nevertheless, researchers often use linear regression models to adjust for random treatment-control differences in baseline characteristics. The classic rationale, which assumes the regression model is true, is that adjustment tends to reduce the variance of the estimated treatment effect. In contrast, Freedman used a randomization-based inference framework to argue that under model misspecification, OLS adjustment can lead to increased asymptotic variance, invalid estimates of variance, and small-sample bias. My paper shows that in sufficiently large samples, those problems are either minor or easily fixed. Neglected parallels between regression adjustment in experiments and regression estimators in survey sampling turn out to be very helpful for intuition.
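A minimal sketch of the regression adjustment studied in the 2013 paper: OLS of the outcome on treatment, the centered covariate, and their interaction, with heteroskedasticity-robust standard errors. The data here are simulated purely for illustration.

```python
# Regression adjustment with treatment-by-centered-covariate interaction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)                         # baseline covariate
z = rng.binomial(1, 0.5, size=n)               # random assignment
y = 1.0 + 2.0 * z + 1.5 * x + 0.5 * z * x + rng.normal(size=n)

xc = x - x.mean()                              # center the covariate
design = sm.add_constant(np.column_stack([z, xc, z * xc]))
fit = sm.OLS(y, design).fit(cov_type="HC2")    # heteroskedasticity-robust SEs
print("estimated average treatment effect:", fit.params[1])
```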

4/5/2017, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Jared Murray
(CMU)
Abstract: Bayesian additive regression trees (BART) have been applied to nonparametric mean regression and binary classification problems in a range of applied areas. To date BART models have been limited to models for Gaussian "data", either observed or latent, and with good reason - the Bayesian backfitting MCMC algorithm for BART is remarkably efficient in Gaussian models. But while many useful models are naturally cast in terms of observed or latent Gaussian variables, many others are not. In this talk I extend BART to a range of log-linear models including multinomial logistic regression and count regression models with zero-inflation and overdispersion. Extending to these non-Gaussian settings requires a novel prior distribution over BART's parameters. Like the original BART prior, this new prior distribution is carefully constructed and calibrated to be flexible while avoiding overfitting. With this new prior distribution and some data augmentation techniques I am able to implement an efficient generalization of the Bayesian backfitting algorithm for MCMC in log-linear (and other) BART models. I demonstrate the utility of these new methods with several examples and applications.

4/19/2017, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Carlos Carvalho
(UT Austin)
Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: This paper develops a semi-parametric Bayesian regression model for estimating heterogeneous treatment effects from observational data. Standard nonlinear regression models, which may work quite well for prediction, can yield badly biased estimates of treatment effects when fit to data with strong confounding. Our Bayesian causal forests model avoids this problem by directly incorporating an estimate of the propensity function in the specification of the response model, implicitly inducing a covariate-dependent prior on the regression function. This new parametrization also allows treatment heterogeneity to be regularized separately from the prognostic effect of control variables, making it possible to informatively “shrink to homogeneity”, in contrast to existing Bayesian non- and semi-parametric approaches. Joint work with P. Richard Hahn and Jared Murray.
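A rough sketch of the paper's key parametrization idea, using off-the-shelf components rather than the Bayesian causal forests model itself: estimate the propensity function and include it as an extra covariate in a flexible model for the response. The data, model choices, and tuning below are purely illustrative.

```python
# Include an estimated propensity score as a covariate in a flexible outcome model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 3))
pscore = 1 / (1 + np.exp(-X[:, 0]))                 # strong confounding via X[:, 0]
z = rng.binomial(1, pscore)
y = X[:, 0] + 0.5 * z + rng.normal(size=n)          # true treatment effect = 0.5

pi_hat = GradientBoostingClassifier().fit(X, z).predict_proba(X)[:, 1]
features = np.column_stack([X, z, pi_hat])          # propensity estimate enters the response model
outcome_model = GradientBoostingRegressor().fit(features, y)

# Compare predicted outcomes with treatment switched on vs. off for everyone.
f1 = outcome_model.predict(np.column_stack([X, np.ones(n), pi_hat]))
f0 = outcome_model.predict(np.column_stack([X, np.zeros(n), pi_hat]))
print("estimated average treatment effect:", (f1 - f0).mean())
```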

4/26/2017, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Jay Verkuilen
(CUNY)
Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: Tukey's mean-difference transformation and the Bland-Altman plot (e.g., Bland & Altman, 1986) are widely used in method comparison studies throughout the sciences, particularly in the health sciences. While intuitively appealing, easy to compute, and offering some notable advantages over simply reporting coefficients such as the concordance coefficient or intraclass correlations, they exhibit unusual behavior. In particular, one often observes systematic trends in the BA plot, and it is highly sensitive to outliers, among other issues. The purpose of this talk is to propose and study a generative model that lays out the logic of the mean-difference transformation and hence the BA plot, indicating when and why systematic trend may occur. The model provides insight into when users should expect problems with the BA plot and suggests that it should not be applied in circumstances where a more informative design such as instrumental variables is necessary. I also suggest some improvements to the graphics based on semi-parametric regression methods and discuss how putting the BA plot in a Bayesian framework could be helpful.
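For readers unfamiliar with the plot under discussion, here is a minimal sketch of the mean-difference (Bland-Altman) plot for two hypothetical measurement methods applied to the same subjects; the data are simulated.

```python
# Mean-difference (Bland-Altman) plot with bias line and limits of agreement.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
truth = rng.normal(100, 15, size=80)
method_a = truth + rng.normal(0, 5, size=80)
method_b = truth + 2 + rng.normal(0, 5, size=80)   # method B reads slightly high

mean = (method_a + method_b) / 2
diff = method_a - method_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # limits of agreement

plt.scatter(mean, diff, alpha=0.6)
plt.axhline(bias, color="k")
plt.axhline(bias + loa, color="k", linestyle="--")
plt.axhline(bias - loa, color="k", linestyle="--")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference (A - B)")
plt.title("Bland-Altman plot")
plt.show()
```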

5/3/2017, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Mireille Schnitzer
(University of Montreal)
Abstract: Causal inference practitioners are routinely presented with the challenge of wanting to adjust for large numbers of covariates despite limited sample sizes. Collaborative Targeted Maximum Likelihood Estimation (CTMLE) is a general framework for constructing doubly robust semiparametric causal estimators that data-adaptively reduce model complexity in the propensity score in order to optimize a preferred loss function. This stepwise complexity reduction is based on a loss function placed on a strategically updated model for the outcome variable, assessed through cross-validation. New work involves integrating penalized regression methods into a stepwise CTMLE procedure that may allow for a more flexible type of model selection than existing variable selection techniques. Two new algorithms are presented and assessed through simulation. The methods are then used in a pharmacoepidemiology example of the evaluation of the safety of asthma medication during pregnancy.

5/10/2017, 11:00 - 12:00 
3rd Fl. Conf. Rm, Kimball

Statistical Methodology Mariola Moeyaert
(University at Albany)
Location: Kimball Hall, 246 Greene Street, 3rd Floor conference room

Abstract: There has been a substantial increase in the use of single-subject experimental designs (SSEDs) over the last decade of research to provide detailed examination of the effects of interventions. Whereas group comparison designs focus on the average treatment effect at one point in time, SSEDs allow researchers to investigate, at the individual level, the size and evolution of intervention effects. In addition, SSED studies may be more feasible than group experimental studies due to logistical and resource constraints, or due to studying a low-incidence or highly fragmented population.

To enhance generalizability, researchers replicate across subjects and use meta-analysis to pool effects from individuals. Our research group was one of the first to propose, develop and promote the use of multilevel models to synthesize data across subjects, allowing for estimation of the mean treatment effect, variation in effects over subjects and studies, and subject and study characteristic moderator effects (Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013a, 2013b, 2014). Moreover, multilevel models can handle unstandardized and standardized raw data or effect sizes, linear and nonlinear time trends, treatment effects on time trends, autocorrelation and other complex covariance structures at each level.

This presentation considers multiple complexities in the context of hierarchical linear modeling of SSED studies including the estimation of the variance components, which tend to be biased and imprecisely estimated. Results of a recent simulation study using Bayesian estimation techniques to deal with this issue will be discussed (Moeyaert, Rindskopf, Onghena & Van den Noortgate, 2017).
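A toy sketch of the simplest version of the multilevel synthesis described above (measurements nested within subjects, with random intercepts and treatment effects across subjects), fit with statsmodels rather than the authors' software; the data are simulated.

```python
# Two-level model for replicated single-subject designs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
rows = []
for case in range(8):                                   # 8 replicated single-subject designs
    intercept = 5.0 + rng.normal(0, 0.5)                # subject-specific baseline level
    effect = 2.0 + rng.normal(0, 0.5)                   # subject-specific treatment effect
    for t in range(20):
        phase = int(t >= 10)                            # baseline vs. intervention phase
        rows.append({"case": case, "phase": phase,
                     "y": intercept + effect * phase + rng.normal(0, 1)})
df = pd.DataFrame(rows)

model = smf.mixedlm("y ~ phase", df, groups=df["case"], re_formula="~phase")
print(model.fit().summary())
```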

9/14/2017 (Thurs.), 12:30-2:00pm, 295 Lafayette Street, 2nd Floor, The Rudin Forum (NYU Wagner)

Statistical Methodology  Don Rubin (Harvard)
Abstract: Consider a statistical analysis that draws causal inferences using an observational data set, inferences that are presented as being valid in the standard frequentist senses; that is, an analysis that produces (a) point estimates, which are presented as being approximately unbiased for their estimands, (b) p-values, which are presented as being valid in the sense of rejecting true null hypotheses at the nominal level or less often, and/or (c) confidence intervals, which are presented as having at least their nominal coverage for their estimands. For the hypothetical validity of these statements (that is, if certain explicit assumptions were true, then the validity of the statements would follow), the analysis must embed the observational study in a hypothetical randomized experiment that created the observed data, or a subset of that data set. This is a multistage effort with thought-provoking tasks, especially in the first stage, which is purely conceptual. Other stages may often rely on modern computing to implement efficiently, but the first stage demands careful scientific argumentation to make the embedding plausible to thoughtful readers of the proffered statistical analysis. Otherwise, the resulting analysis is vulnerable to criticism for being simply a presentation of scientifically meaningless arithmetic calculations. In current practice, this perspective is rarely implemented with any rigor, for example, completely eschewing the first stage. Instead, analyses often appear to be conducted using computer programs run with limited consideration of the assumptions of the methods being used, producing tables of numbers with recondite interpretations, and presented using jargon, which may be familiar but also may be scientifically impenetrable. Somewhat paradoxically, the conceptual tasks, which are usually omitted in publications, often would be the most interesting to consumers of the analyses. These points will be illustrated using the analysis of an observational data set addressing the causal effects of parental smoking on their children’s lung function. This presentation may appear provocative, but it is intended to encourage applied researchers, especially those working on problems with policy implications, to focus on important conceptual issues rather than on minor technical ones.

10/18/17 (Weds.), 10:30am-12:00pm, 3rd Fl. Conf. Rm, Kimball

Didactic Chuck Huber (Stata Corp.)
Abstract: Bayesian analysis has become a popular tool for many statistical applications. Yet many data analysts have little training in the theory of Bayesian analysis and software used to fit Bayesian models. This talk will provide an intuitive introduction to the concepts of Bayesian analysis and demonstrate how to fit Bayesian models using Stata. No prior knowledge of Bayesian analysis is necessary and specific topics will include the relationship between likelihood functions, prior, and posterior distributions, Markov Chain Monte Carlo (MCMC) using the Metropolis-Hastings algorithm, and how to use Stata's Bayes prefix to fit Bayesian models.
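For readers who want to see the core MCMC idea in code, the sketch below implements a bare-bones Metropolis-Hastings sampler for the posterior of a binomial proportion with a uniform prior. The talk itself demonstrates Stata's Bayes prefix; this is a language-agnostic illustration, not Stata code.

```python
# Random-walk Metropolis-Hastings for a binomial proportion with a uniform prior.
import numpy as np

rng = np.random.default_rng(7)
successes, trials = 7, 20

def log_posterior(theta):
    if not 0 < theta < 1:
        return -np.inf                     # outside the support of the uniform prior
    return successes * np.log(theta) + (trials - successes) * np.log(1 - theta)

theta, draws = 0.5, []
for _ in range(10000):
    proposal = theta + rng.normal(0, 0.1)  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                   # accept; otherwise keep the current value
    draws.append(theta)

print("posterior mean:", np.mean(draws[2000:]))  # discard burn-in draws
```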

11/3/17 (Fri.) ALL DAY (9-5, tent.), 3rd Fl. Conf. Rm, Kimball

Mixed Leading experts in SSD, Causal & Bayesian Inference
Abstract: This will be a one-day symposium on the topic of Single Subject Designs (SSDs) and methods for their analysis. It will bring together leading researchers in the areas of multilevel models, Bayesian modeling, and meta-analysis to discuss best practices with leading practitioners who utilize SSDs, as well as how to use results from single-case designs to better inform larger-scale clinical trials in this field. These practitioners will be drawn from the fields of special education and rehabilitation science. In particular, the areas of Physical Therapy, Occupational Therapy, and Communication Science Disorders will be invited.

Panel discussions will be convened in which methodologists are paired with practitioners to discuss each phase of the science, from exploratory data analysis (related to designs employing graphical methods), more general design aspects, and analysis.  Particular emphasis will be given to research supporting Individualized Treatment Protocols.  In addition, there will be individual presentations representing new methodology for these designs, and reports from practitioners on their ongoing clinical trials to spur additional discussion of appropriate methodology.
2/7/2018, (Weds.) 11:00 am - 12:00 pm 
3rd Fl. Conf. Rm, Kimball
Statistical Methodology  Howard Wainer
(NBME)
Abstract: Visual displays of empirical information are too often thought to be just compact summaries that, at their best, can clarify a muddled situation. This is partially true, as far as it goes, but it omits the magic. We have long known that data visualization is an alchemist that can make good scientists great and transform great scientists into giants. In this talk we will see that sometimes, albeit too rarely, the combination of critical questions addressed by important data and illuminated by evocative displays can achieve a transcendent, and often wholly unexpected, result. At their best, visualizations can communicate emotions and feelings in addition to cold, hard facts.
2/28/2018, (Weds.) 11:00 am - 12:00 pm 
3rd Fl. Conf. Rm, Kimball
Didactic  Keith Goldfeld
(NYUMC)
Abstract: In so many ways, simulation is an extremely useful tool to learn, teach, and understand the theory and practice of statistics. A series of examples (interspersed with minimal theory) will hopefully illuminate the underbelly of confounding, colliding, and marginal structural models. Drawing on the potential outcomes framework, the examples will use the R simstudy package, a tool that is designed to make data simulation as painless as possible.
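In the spirit of the talk, here is a sketch of the kind of simulation it walks through, written in Python rather than with the R simstudy package: a single confounder biases the naive comparison, while adjusting for it recovers the true effect. The data-generating values are arbitrary.

```python
# Simulate a confounded exposure-outcome relationship and compare estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5000
c = rng.normal(size=n)                                  # confounder
z = rng.binomial(1, 1 / (1 + np.exp(-c)))               # exposure depends on the confounder
y = 1.0 * z + 2.0 * c + rng.normal(size=n)              # outcome; true effect of z is 1.0

naive = sm.OLS(y, sm.add_constant(z)).fit()
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([z, c]))).fit()
print("naive estimate:   ", naive.params[1])            # biased upward by the confounder
print("adjusted estimate:", adjusted.params[1])         # close to the true value of 1.0
```
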
4/25/2018, (Weds.) 11:00 am - 12:00 pm 
3rd Fl. Conf. Rm, Kimball
Didactic  Jennifer Hill
(NYU)
Abstract: There has been increasing interest in the past decade in the use of machine learning tools in causal inference to help reduce reliance on parametric assumptions and allow for more accurate estimation of heterogeneous effects. This talk reviews the work in this area that capitalizes on Bayesian Additive Regression Trees (BART), an algorithm that embeds a tree-based machine learning technique within a Bayesian framework to allow for flexible estimation and valid assessments of uncertainty. It will further describe extensions of the original work to address common issues in causal inference: lack of common support, violations of the ignorability assumption, and generalizability of results to broader populations. It will also describe existing R packages for traditional BART implementation as well as debut a new R package for causal inference using BART, bartCause.
5/2/2018, (Weds.) 11:00 am - 12:00 pm 
3rd Fl. Conf. Rm, Kimball
Data for Social Impact  Alejandro Ganimian
(NYU)
Abstract: We present experimental evidence on the impact of a personalized technology-aided after-school instruction program on learning outcomes. Our setting is middle-school grades in urban India, where a lottery provided winning students with a voucher to cover program costs. We find that lottery winners scored 0.36σ higher in math and 0.22σ higher in Hindi relative to lottery losers after just 4.5 months of access to the program. IV estimates suggest that attending the program for 90 days would increase math and Hindi test scores by 0.59σ and 0.36σ, respectively. We find similar absolute test score gains for all students, but the relative gain was much greater for academically weaker students because their rate of learning in the control group was close to zero. We show that the program was able to effectively cater to the very wide variation in student learning levels within a single grade by precisely targeting instruction to the level of student preparation. The program was cost-effective, both in terms of productivity per dollar and per unit of time. Our results suggest that well-designed technology-aided instruction programs can sharply improve productivity in delivering education.
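A back-of-the-envelope sketch of the Wald/IV scaling behind the reported estimates: dividing the ITT effect by the IV effect per 90 days implies the first-stage effect of winning the lottery on days attended. Only the reported effect sizes are used; the implied attendance figures are derived arithmetic, not reported results.

```python
# Back out the implied first-stage effect (extra days attended) from the
# reported ITT and 90-day IV estimates: first stage = 90 * ITT / IV.
itt_math, iv_math_90_days = 0.36, 0.59      # SD units, as reported in the abstract
itt_hindi, iv_hindi_90_days = 0.22, 0.36

implied_days_math = 90 * itt_math / iv_math_90_days
implied_days_hindi = 90 * itt_hindi / iv_hindi_90_days
print("implied extra days attended (from math): ", round(implied_days_math, 1))
print("implied extra days attended (from Hindi):", round(implied_days_hindi, 1))
```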