Playing with science

Aslib Journal of Information Management Emerald 68:3 (2016) 306-325

Authors:

Anita Greenhill, Kate Holmes, Jamie Woodcock, Chris Lintott, Brooke D Simmons, Gary Graham, Joe Cox, Eun Young Oh, Karen Masters

Broad [C II] line wings as tracer of molecular and multi-phase outflows in infrared bright galaxies

Astrophysical Journal 822:1 (2016) ARTN 43

Authors:

AW Janssen, N Christopher, E Sturm, S Veilleux, A Contursi, E Gonzalez-Alfonso, J Fischer, R Davies, A Verma, J Gracia-Carpio, R Genzel, D Lutz, A Sternberg, L Tacconi, L Burtscher, A Poglitsch

A generalized approach for producing, quantifying, and validating citizen science data from wildlife images

Conservation biology : the journal of the Society for Conservation Biology Wiley 30:3 (2016) 520-531

Authors:

Alexandra Swanson, Margaret Kosmala, Chris Lintott, Craig Packer

Abstract:

Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics to measure confidence that an aggregated answer was correct: level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank). Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife.
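The plurality vote and certainty metrics described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data are invented, and the exact evenness formula is an assumption (here Pielou's normalised Shannon entropy, 0 when classifications are unanimous and 1 when maximally split).

```python
from collections import Counter
import math

def aggregate_classifications(labels):
    """Plurality-vote aggregation of one image's volunteer labels,
    with the three certainty metrics described in the abstract."""
    counts = Counter(labels)
    n = len(labels)
    answer, top = counts.most_common(1)[0]
    # Fraction support: share of classifications matching the plurality answer.
    fraction_support = top / n
    # Fraction blank: share of volunteers who reported "nothing here".
    fraction_blank = counts["nothing here"] / n
    # Evenness (assumed form): normalised Shannon entropy of the label counts.
    if len(counts) > 1:
        entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
        evenness = entropy / math.log(len(counts))
    else:
        evenness = 0.0
    return answer, fraction_support, fraction_blank, evenness

# Hypothetical image with 27 classifications, as in the average circulation.
votes = ["zebra"] * 25 + ["wildebeest", "nothing here"]
answer, support, blank, evenness = aggregate_classifications(votes)
```

Low support, high blank, or high evenness would flag an image for expert review, matching the abstract's use of the metrics.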

Planet Hunters IX. KIC 8462852 - where's the flux?

Monthly Notices of the Royal Astronomical Society 457:4 (2016) 3988-4004

Authors:

TS Boyajian, DM LaCourse, SA Rappaport, D Fabrycky, DA Fischer, D Gandolfi, GM Kennedy, H Korhonen, MC Liu, A Moor, K Olah, K Vida, MC Wyatt, WMJ Best, J Brewer, F Ciesla, B Csak, HJ Deeg, TJ Dupuy, G Handler, K Heng, SB Howell, ST Ishikawa, J Kovacs, T Kozakis, L Kriskovics, J Lehtinen, C Lintott, S Lynn, D Nespral, S Nikbakhsh, K Schawinski, JR Schmitt, AM Smith, G Szabo, R Szabo, J Viuho, J Wang, A Weiksnar, M Bosch, JL Connors, S Goodman, G Green, AJ Hoekstra, T Jebson, KJ Jek, MR Omohundro, HM Schwengeler, A Szewczyk

Science learning via participation in online citizen science

Journal of Science Communication Scuola Internazionale Superiore di Studi Avanzati 15:3 (2016) A07

Authors:

Karen Masters, Eun Y Oh, Joe Cox, Brooke Simmons, Christopher Lintott, Gary Graham, Anita Greenhill, Kate Holmes

Abstract:

We investigate the development of scientific content knowledge of volunteers participating in online citizen science projects in the Zooniverse (www.zooniverse.org). We use econometric methods to test how measures of project participation relate to success in a science quiz, controlling for factors known to correlate with scientific knowledge. Citizen scientists believe they are learning about both the content and processes of science through their participation. We do not directly test the latter, but we find evidence to support the former: more actively engaged participants perform better in a project-specific science knowledge quiz, even after controlling for their general science knowledge. We interpret this as evidence of learning of science content inspired by participation in online citizen science.
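The core econometric idea (regressing project-specific quiz performance on an engagement measure while controlling for general science knowledge) can be illustrated with an ordinary least squares sketch on synthetic data. All variable names, coefficients, and data here are invented for the example and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical variables: a control (general science knowledge) and a
# participation measure (e.g. log classification count), both standardised.
general_knowledge = rng.normal(0, 1, n)
engagement = rng.normal(0, 1, n)

# Synthetic outcome with a built-in engagement effect of 0.3.
quiz_score = 0.5 * general_knowledge + 0.3 * engagement + rng.normal(0, 0.5, n)

# OLS with an intercept: quiz_score ~ engagement + general_knowledge.
X = np.column_stack([np.ones(n), engagement, general_knowledge])
beta, *_ = np.linalg.lstsq(X, quiz_score, rcond=None)

# A positive engagement coefficient after controlling for general knowledge
# is the pattern the abstract interprets as evidence of content learning.
print(f"engagement coefficient: {beta[1]:.2f}")
```

The control variable matters: without it, any correlation between prior science knowledge and willingness to participate would confound the engagement effect.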