High resolution in three dimensions with SWIFT and PALM3K
3rd AO4ELT Conference - Adaptive Optics for Extremely Large Telescopes (2013)
Abstract:
SWIFT is a visible-light (650-1000 nm) integral field spectrograph fed by the Palomar extreme adaptive optics system PALM3K. With a subaperture spacing of 8 cm, PALM3K is capable of delivering diffraction-limited performance even in the visible. With SWIFT providing spatially resolved spectroscopy at R=4000, this is a truly unique facility for high-resolution science in three dimensions. We present here some results from the first year of PALM3K+SWIFT science. We also report on our experience of operating a small field-of-view instrument (1"x0.5") with a high-performance AO system, and hope the lessons learned will provide valuable input to designing successful and productive AO-plus-instrument combinations for ELTs.
Crowd-Sourced Assessment of Technical Skills: a novel method to evaluate surgical performance
Journal of Surgical Research (2013)
Abstract:
Background: Validated methods of objective assessment of surgical skills are resource intensive. We sought to test a web-based grading tool using crowdsourcing called Crowd-Sourced Assessment of Technical Skill. Materials and methods: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers compared with experienced surgical faculty grading a recorded dry-laboratory robotic surgical suturing performance using three performance domains from a validated assessment tool. Assessor free-text comments describing their rating rationale were used to explore a relationship between the language used by the crowd and grading accuracy. Results: Of a total possible global performance score of 3-15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 h to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 d for 10 faculty surgeons to complete the 3-min survey. Facebook subjects (110) responded within 25 d. Language analysis indicated that crowdworkers who used negation words (i.e., "but," "although," and so forth) scored the performance more equivalently to experienced surgeons than crowdworkers who did not (P < 0.00001). Conclusions: For a robotic suturing performance, we have shown that surgery-naive crowdworkers can rapidly assess skill equivalently to experienced faculty surgeons using Crowd-Sourced Assessment of Technical Skill. It remains to be seen whether crowds can discriminate different levels of skill and can accurately assess human surgery performances. © 2013 Elsevier Inc. All rights reserved.
Human Computation in Citizen Science
Chapter in Handbook of Human Computation, Springer Nature (2013) 153-162
Morphology in the era of large surveys
ASTRONOMY & GEOPHYSICS 54:5 (2013) 16-19
Participating in Online Citizen Science: Motivations as the Basis for User Types and Trajectories
Chapter in Handbook of Human Computation, Springer Nature (2013) 695-702