eCommons

DigitalCollections@ILR
ILR School
 

Faculty Publications - Statistics and Data Science

Recent Submissions

  • Item
    On a Proper Meta-Analytic Model for Correlations
    Erez, Amir; Bloom, Matthew C.; Wells, Martin T. (1995-06-01)
    Combining statistical information across studies is a standard research tool in applied psychology, where the most common approach is the fixed-effects model. The fixed-effects approach assumes that individual study characteristics such as treatment conditions, study context, or individual differences do not influence study effect sizes; that is, it assumes that most of the differences between the effect sizes of different studies can be explained by sampling error alone. We critique the fixed-effects methodology for correlations and propose an advancement, the random-effects model, that ameliorates the problems imposed by fixed-effects models. The random-effects approach explicitly incorporates between-study differences into the data analysis and provides estimates of how those study characteristics influence the relationships among the constructs of interest. Because they can model the influence of study characteristics, we assert that random-effects models have advantages for psychological research. Parameter estimates of both models are compared, and evidence in favor of the random-effects approach is presented. (See the computational sketches after this list.)
  • Item
    Reporting of Sexual Assault: Institutional Comparisons, 2013
    Karns, M. E. (2015-06-30)
    How well are colleges counting sexual assaults that occur on their campuses? This paper provides two measures, the Assault Reporting Ratio (ARR) and the Reporting Rate per 10,000 students (R10K), that address this question. The ARR and R10K are benchmarks that identify institutions that are leading in this area; they facilitate comparisons across institutions and over time, and they enable administrators and researchers to evaluate the effectiveness of institutional policies and practices that govern the reporting of sexual assault. The Clery Act requires institutions of higher education to notify the Department of Education annually about the number of crimes reported on their campuses. The present analysis uses Clery Act data on forcible and non-forcible sexual offenses to create measures that allow a standardized comparison of institutions, with adjustments for gender ratio and institution size. National survey results are used to calculate expected assault numbers, which are then compared to institutional reporting numbers to create the ARR, expressed as a percentage: an ARR of 100% indicates that the school is counting all of the assaults predicted by national surveys. The R10K is the reported number of assaults per 10,000 students, calculated from the data provided by the institution. A total of 1,230 schools were used in the analysis; of those, 30.7% reported no sexual offenses. The mean Assault Reporting Ratio (ARR) was 2.54% (7.4) with a median of 0.93%, and the mean Reporting Rate per 10,000 (R10K) was 7.47 (22.22) with a median of 2.94. Ranking tables of the top 20 institutions, overall and stratified by enrollment, are given. The standardized measures can be used to evaluate institutional policies, changes in programs, and procedures for reports. Attachments include rankings of all institutions in the analysis by each measure, along with Excel and CSV data files. (See the computational sketches after this list.)
  • Item
    ILR Impact Brief - Evidence, Police Credibility, and Race Affect Juror First Votes
    Wells, Martin T.; Garvey, Stephen P.; Hannaford-Agor, Paula; Hans, Valerie P.; Mott, Nicole L.; Munsterman, G. Thomas (2006-08-01)
    The question of why jurors decide to acquit or convict the defendant in criminal trials has long intrigued researchers. Earlier studies found only weak ties between jurors' views of the case and juror demographics (gender, age, race), although some researchers noted a possible exception for the effect of race. The influence of jurors' attitudes and values is not well understood, but some researchers have suggested that opinions about capital punishment may affect jurors' votes in murder trials. There is consensus, however, that the strength of the evidence is a critical variable in jury verdicts. Researchers also generally agree that a jury's final vote is affected by the dynamics of deliberation and by the size of the initial majority (the tally of the first votes). Using data supplied by the National Center for State Courts, this study isolated the effects of juror demographics, juror attitudes, and case characteristics on jurors' preliminary verdict preferences in criminal trials held in Los Angeles, CA; Maricopa County (Phoenix), AZ; the Bronx, NY; and Washington, D.C. from June 2000 to August 2001. The demographic data included the age, gender, and race of the jurors and the race of the defendant; the attitudinal data concerned juror perceptions about the fairness of the law, the harshness of the consequences for the defendant, and the credibility of police testimony; the case characteristics centered on the presence or absence of a victim. Researchers controlled for the influence of the initial majority and the effect of deliberations on the final outcome by focusing on the first (pre-deliberation) vote. They controlled for the strength of the evidence by comparing jurors' assessment of the proof presented against that of the presiding judge; in fact, the study found that the judge's evaluation of the evidence was strongly associated with jurors' first votes. (See the computational sketches after this list.)
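
Computational sketches

For "On a Proper Meta-Analytic Model for Correlations", the contrast between the two approaches can be illustrated with a short, generic sketch. This is not the authors' analysis: it applies the Fisher z transformation and a DerSimonian-Laird estimate of the between-study variance (tau^2) to made-up correlations and sample sizes, then compares the fixed-effects and random-effects summaries. In Python:

    # Generic random-effects meta-analysis of correlations (illustrative data only)
    import numpy as np

    r = np.array([0.21, 0.35, 0.14, 0.40, 0.28])   # observed study correlations (made up)
    n = np.array([120, 85, 200, 60, 150])           # study sample sizes (made up)

    z = np.arctanh(r)     # Fisher z transform of each correlation
    v = 1.0 / (n - 3)     # approximate sampling variance of each z
    w = 1.0 / v           # fixed-effects (inverse-variance) weights

    # Fixed-effects estimate and the Q statistic for between-study heterogeneity
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)

    # DerSimonian-Laird estimate of the between-study variance tau^2
    k = len(r)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects weights add tau^2 to each study's sampling variance
    w_re = 1.0 / (v + tau2)
    z_random = np.sum(w_re * z) / np.sum(w_re)
    se_random = np.sqrt(1.0 / np.sum(w_re))

    print("tau^2 =", round(tau2, 4))
    print("fixed-effects mean r  =", round(float(np.tanh(z_fixed)), 3))
    print("random-effects mean r =", round(float(np.tanh(z_random)), 3),
          "95% CI:", np.round(np.tanh([z_random - 1.96 * se_random,
                                       z_random + 1.96 * se_random]), 3))

As tau^2 grows, the random-effects weights become more nearly equal across studies; that is the mechanism by which between-study differences enter the analysis.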
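
For "Reporting of Sexual Assault: Institutional Comparisons, 2013", the two measures can be sketched directly from their definitions in the abstract. The survey victimization rates, enrollment, gender share, and reported counts below are placeholders rather than values from the paper, and the paper's actual adjustments may differ.

    # Hypothetical sketch of the ARR and R10K measures; all inputs are placeholders
    def reporting_measures(reported_assaults, enrollment, share_female,
                           survey_rate_female=0.03, survey_rate_male=0.005):
        """Return (ARR as a percentage, reports per 10,000 students).

        ARR compares the institution's reported count with the count expected
        from national survey victimization rates, adjusted for gender ratio and
        institution size; R10K is the reported count per 10,000 students. The
        survey rates used here are placeholders, not the paper's values.
        """
        expected = enrollment * (share_female * survey_rate_female +
                                 (1.0 - share_female) * survey_rate_male)
        arr = 100.0 * reported_assaults / expected
        r10k = 10000.0 * reported_assaults / enrollment
        return arr, r10k

    # Example: a hypothetical school of 15,000 students (55% female) reporting 12 offenses
    arr, r10k = reporting_measures(reported_assaults=12, enrollment=15000,
                                   share_female=0.55)
    print(f"ARR = {arr:.2f}%   R10K = {r10k:.2f}")

An ARR near 100% would mean the reported count matches the survey-based expectation; the mean ARR of 2.54% and median of 0.93% reported in the abstract correspond to reported counts far below that expectation.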
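
For the ILR Impact Brief on juror first votes, the analysis described (isolating the effects of evidence, attitudes, and demographics on the pre-deliberation vote) could be expressed as a logistic regression. The sketch below simulates data with arbitrary coefficients; it is not the National Center for State Courts dataset and not necessarily the authors' specification.

    # Generic first-vote logistic regression on simulated data (arbitrary coefficients)
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    evidence = rng.normal(size=n)            # juror's rating of the evidence (standardized)
    police_credibility = rng.normal(size=n)  # attitude toward police testimony (standardized)
    juror_race = rng.integers(0, 2, n)       # illustrative 0/1 demographic indicator

    # Simulate first votes; the coefficients here are placeholders, not estimates
    logit = -0.2 + 1.5 * evidence + 0.5 * police_credibility + 0.3 * juror_race
    vote_guilty = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    # Fit the logistic regression by Newton-Raphson using plain NumPy
    X = np.column_stack([np.ones(n), evidence, police_credibility, juror_race])
    beta = np.zeros(X.shape[1])
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1 - p)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (vote_guilty - p))

    print("estimated coefficients (intercept, evidence, police credibility, juror race):")
    print(np.round(beta, 2))

Modeling the first (pre-deliberation) vote, as the brief describes, keeps deliberation dynamics and the size of the initial majority out of the outcome being predicted.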