AAAI

Association for the Advancement of Artificial Intelligence

AAAI Workshop Papers 2011

Human Computation

Papers

  • Robust Active Learning Using Crowdsourced Annotations for Activity Recognition
  • MobileWorks: A Mobile Crowdsourcing Platform for Workers at the Bottom of the Pyramid
  • MuSweeper: An Extensive Game for Collecting Mutual Exclusions
  • Beyond Independent Agreement: A Tournament Selection Approach for Quality Assurance of Human Computation Tasks
  • CollabMap: Augmenting Maps Using the Wisdom of Crowds
  • PulaCloud: Using Human Computation to Enable Development at the Bottom of the Economic Ladder
  • CrowdLang — First Steps Towards Programmable Human Computers for General Computation
  • On Quality Control and Machine Learning in Crowdsourcing
  • Turkomatic: Automatic, Recursive Task and Workflow Design for Mechanical Turk
  • Improving Consensus Accuracy via Z-Score and Weighted Voting
  • Making Searchable Melodies: Human versus Machine
  • Developing Scripts to Teach Social Skills: Can the Crowd Assist the Author?
  • Towards Task Recommendation in Micro-Task Markets
  • Beat the Machine: Challenging Workers to Find the Unknown Unknowns
  • Human Intelligence Needs Artificial Intelligence
  • Honesty in an Online Labor Market
  • Pricing Tasks in Online Labor Markets
  • CrowdSight: Rapidly Prototyping Intelligent Visual Processing Apps
  • Programmatic Gold: Targeted and Scalable Quality Assurance in Crowdsourcing
  • An Iterative Dual Pathway Structure for Speech-to-Text Transcription
  • Error Identification and Correction in Human Computation: Lessons from the WPA
  • What’s the Right Price? Pricing Tasks for Finishing on Time
  • Digitalkoot: Making Old Archives Accessible Using Crowdsourcing
  • Labor Allocation in Paid Crowdsourcing: Experimental Evidence on Positioning, Nudges and Prices
  • An Extendable Toolkit for Managing Quality of Human-Based Electronic Services