Fall 2016 Louisiana ASA Meeting - Print Version

Friday, 18 November 2016
220 Kirschman Hall
University of New Orleans
New Orleans, Louisiana

Schedule

Time Title and Speaker
9:30-10:00 Reception
10:00-11:00 Keynote Address
Statistical methods for cost-effectiveness analysis: A selected review
Thomas Mathew
Department of Mathematics and Statistics
University of Maryland Baltimore County
Baltimore, Maryland
11:00-11:45 Blind source separation for operator self-similar processes
Hui Li
Department of Mathematics
Tulane University
New Orleans, Louisiana
11:45-1:15 Lunch
1:15-2:00 Visualizing Louisiana Educational Data to Inform Administrators and the Public
Amy Gray, Jack Hogan, Everett Moser, Billy Musgraves, Wayne Picou, Jewell Simon, Laura Vavrek, and Zhaoxia Mary Wang
Department of Mathematics
Louisiana State University
Baton Rouge, Louisiana
2:00-2:45 A Revisit to Test the Equality of Variances of Several Populations
Nabendu Pal
Department of Mathematics
University of Louisiana at Lafayette
Lafayette, Louisiana

Abstracts

Statistical methods for cost-effectiveness analysis: A selected review
Thomas Mathew
Department of Mathematics and Statistics
University of Maryland Baltimore County
Baltimore, Maryland

Identifying treatments or interventions that are cost-effective (more effective at lower cost) is clearly important in health policy decision making, especially in the allocation of health care resources. Various measures of cost-effectiveness that are informative, intuitive and simple to explain have been suggested in the literature, along with statistical inference concerning them. Popular and widely used measures include the incremental cost-effectiveness ratio (ICER), defined as the ratio of the difference in expected costs to the difference in expected effectiveness between two populations receiving two treatments. Although easy to interpret as the additional cost per unit of effectiveness gained, the ICER, being a ratio, is difficult to interpret in certain situations, for example when the difference in effectiveness is close to zero, and it also presents challenges for statistical inference. Yet another measure proposed in the literature is the incremental net benefit (INB), which is the difference between the incremental cost and the incremental effectiveness after multiplying the latter by a "willingness-to-pay" parameter. Both the ICER and the INB are functions of population means, and inference concerning them has been widely investigated under a bivariate normal distribution, or under a log-normal/normal distribution for the cost and effectiveness measures. In the talk, we will briefly review these, focusing on recent developments. An alternative probability-based approach will also be introduced, referred to as the cost-effectiveness probability (CEP), which is the probability that the first treatment will be less costly and more effective compared to the second one. Inference on the CEP will also be discussed. Numerical results and illustrative examples will be given.
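For concreteness, the measures named in the abstract can be written in symbols (the notation below is ours, not the speaker's, and the sign convention for the INB varies across the literature). Writing the mean cost and mean effectiveness under treatment i as \mu_{C_i} and \mu_{E_i}, and the willingness-to-pay parameter as \lambda,

\[
\mathrm{ICER} = \frac{\mu_{C_1}-\mu_{C_2}}{\mu_{E_1}-\mu_{E_2}}, \qquad
\mathrm{INB} = \lambda\,(\mu_{E_1}-\mu_{E_2}) - (\mu_{C_1}-\mu_{C_2}), \qquad
\mathrm{CEP} = P(C_1 < C_2,\; E_1 > E_2),
\]

where (C_i, E_i) denote the cost and effectiveness of a subject receiving treatment i. Under this convention, treatment 1 is judged cost-effective relative to treatment 2 when the INB is positive.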

Blind source separation for operator self-similar processes
Hui Li
Department of Mathematics
Tulane University
New Orleans, Louisiana

Fractional Brownian motion (fBm) is the only Gaussian, self-similar process with stationary increments, and it has been used to model a wide range of phenomena in hydrology, Internet traffic and turbulence. In this talk, we will consider the blind source separation problem of demixing a multivariate stochastic process whose unobservable entries are independent fBms. The observable mixed signal is then an operator self-similar stochastic process, parameterized by a vector of Hurst parameters and a mixing matrix. We combine a classical joint diagonalization procedure and wavelet analysis to produce a consistent and asymptotically Gaussian estimator of the mixing matrix. Monte Carlo simulations in dimension 4 will be used to show that the finite-sample estimation performance for both the mixing matrix and the individual Hurst parameters is quite satisfactory. We will also comment on the non-Gaussian case. This work is joint with P. Abry (ENS-Lyon) and G. Didier (Tulane University).
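As a rough schematic of the model just described (the notation here is ours, not taken from the talk): the observed P-variate signal Y is an instantaneous linear mixture Y(t) = A X(t), where X(t) = (X_1(t), ..., X_P(t)) has independent fBm entries with Hurst parameters h_1, ..., h_P and A is an unknown invertible mixing matrix. The mixture then satisfies operator self-similarity,

\[
\{Y(ct)\}_{t\in\mathbb{R}} \stackrel{d}{=} \{c^{H}\,Y(t)\}_{t\in\mathbb{R}},
\qquad c^{H} := \exp(H \log c), \qquad
H = A\,\mathrm{diag}(h_1,\ldots,h_P)\,A^{-1},
\]

and the blind source separation problem is to recover A (and hence the individual Hurst parameters h_p) from observations of Y alone.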

Visualizing Louisiana Educational Data to Inform Administrators and the Public
Amy Gray, Jack Hogan, Everett Moser, Billy Musgraves, Wayne Picou, Jewell Simon, Laura Vavrek, and Zhaoxia Mary Wang
Department of Mathematics
Louisiana State University
Baton Rouge, Louisiana

The Louisiana Department of Education is required by law to calculate and assign school performance scores to all public schools. The scores are highly consequential for administrators, so they would like to be able to use all available data sources to inform strategies for raising scores. Likewise, the public is interested in school performance and in all information that illuminates the scores. The Department publishes a vast amount of data at its web site, including much of the data on which performance scores are based. However, it is difficult to extract useful information from this source. For example, administrators would like to be able to identify schools similar to their own that have recently made significant gains in order to learn from them about productive strategies. Presently, they do not have an efficient, user-friendly way to do this. Similarly, the public would like to compare schools for numerous reasons, and would like to be able to see more detail than school scores typically provide.

In this talk, we report on a project being conducted by the Math Consultation Clinic, an undergraduate research seminar in the LSU Department of Mathematics. The goals of this project are to: a) inventory the school data available, b) store it in a format that facilitates analysis, c) devise methods for viewing and interpreting the data that increase its meaning to potential users, and d) make these utilities available at a public web site. This presentation will consist of four short talks by students working on the project concerning specific problems and issues that we have encountered. The LSU faculty advisers for the project are Ameziane Harhad and James J. Madden.

A Revisit to Test the Equality of Variances of Several Populations
Nabendu Pal
Department of Mathematics
University of Louisiana at Lafayette
Lafayette, Louisiana

We revisit the problem of testing homoscedasticity (that is, equality of variances) of several normal populations, which has applications in many statistical analyses, including the design of experiments. The standard textbooks and widely used statistical packages propose a few popular tests, including Bartlett's test, Levene's test and a few adjustments of the latter. Apparently, the popularity of these tests has been based on limited simulation studies carried out a few decades ago. The traditional tests, including the classical likelihood ratio test (LRT), are asymptotic in nature, and hence do not perform well for small sample sizes. In this work we propose a simple parametric bootstrap (PB) modification of the LRT, and compare it against the other popular tests as well as their PB versions in terms of size and power. Our comprehensive simulation study busts some popularly held myths about the commonly used tests and sheds new light on this important problem. Though most popular statistical software packages suggest using Bartlett's test, Levene's test, or the modified Levene's test among a few others, our extensive simulation study, carried out under the normal model as well as several non-normal models, clearly shows that a PB version of the modified Levene's test (which does not use the F-distribution cut-off point as its critical value) and Loh's exact test are the "best" performers in terms of overall size as well as power.
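To illustrate the parametric bootstrap idea mentioned in the abstract, the sketch below bootstraps Bartlett's statistic under the normal model fitted to the null hypothesis; it is a minimal, generic illustration of a PB test of equal variances, not the authors' implementation, and the function name pb_bartlett_test is our own.

import numpy as np
from scipy import stats

def pb_bartlett_test(samples, B=2000, seed=0):
    """Parametric-bootstrap p-value for H0: equal variances across groups.

    The null distribution of Bartlett's statistic is approximated by
    simulating normal samples with the observed group means and a common
    (pooled) variance, i.e. the model fitted under H0.
    """
    rng = np.random.default_rng(seed)
    samples = [np.asarray(x, dtype=float) for x in samples]
    ns = [len(x) for x in samples]
    means = [x.mean() for x in samples]
    # Pooled variance estimate under the null of equal variances
    pooled_var = sum(((x - x.mean()) ** 2).sum() for x in samples) / (sum(ns) - len(ns))

    t_obs, _ = stats.bartlett(*samples)

    t_boot = np.empty(B)
    for b in range(B):
        boot = [rng.normal(m, np.sqrt(pooled_var), n) for m, n in zip(means, ns)]
        t_boot[b], _ = stats.bartlett(*boot)

    # Proportion of bootstrap statistics at least as extreme as the observed one
    p_value = (1 + np.sum(t_boot >= t_obs)) / (B + 1)
    return t_obs, p_value

# Example: three small samples, the third with a larger spread
x1 = np.random.default_rng(1).normal(0, 1.0, 8)
x2 = np.random.default_rng(2).normal(0, 1.0, 10)
x3 = np.random.default_rng(3).normal(0, 2.5, 9)
print(pb_bartlett_test([x1, x2, x3]))

The same wrapper could bootstrap any other statistic (for example, a Levene-type statistic) by swapping out the call to stats.bartlett; the resampling-under-the-fitted-null step is what makes the test a parametric bootstrap test rather than an asymptotic one.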