Measuring Progress on Scalable Oversight for Large Language Models


Is a
Academic paper

Academic Paper attributes

arXiv ID
2211.03540
arXiv Classification
Computer science
Publication URL
arxiv.org/pdf/2211.0...40.pdf
Publisher
ArXiv
DOI
doi.org/10.48550/ar...11.03540
Paid/Free
Free
Academic Discipline
Human–computer interaction
Computer science
Artificial Intelligence (AI)
Submission Date
November 4, 2022
November 11, 2022
Author Names
Samuel R. Bowman
Stanislav Fort
Timothy Telleen-Lawton
Tom Brown
Tom Henighan
Tristan Hume
Yuntao Bai
Zac Hatfield-Dodds
...
Paper abstract

Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.
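The abstract's core comparison — unaided humans vs. the model alone vs. human–model teams on multiple-choice QA accuracy — can be sketched as a simple tally. This is a hypothetical illustration with toy data, not the authors' evaluation code; the condition names and answer keys are invented for the example.

```python
# Hypothetical sketch of the three-way comparison described in the
# abstract: unaided humans vs. model alone vs. human-model teams on a
# multiple-choice QA task (e.g. MMLU). All names and data are
# illustrative, not taken from the paper.

def accuracy(predictions, gold):
    """Fraction of questions answered correctly."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy answer key for five multiple-choice questions.
gold = ["B", "C", "A", "D", "B"]

conditions = {
    "human_alone":      ["B", "A", "A", "D", "C"],  # 3/5 correct
    "model_alone":      ["B", "C", "A", "A", "C"],  # 3/5 correct
    "human_plus_model": ["B", "C", "A", "D", "C"],  # 4/5 correct
}

scores = {name: accuracy(preds, gold) for name, preds in conditions.items()}

# The abstract's headline result corresponds to the team condition
# beating both baselines:
assert scores["human_plus_model"] > scores["human_alone"]
assert scores["human_plus_model"] > scores["model_alone"]
```

The same scoring loop extends directly to the time-limited QuALITY setting: only the source of each prediction (unaided annotator, model, or chat-assisted annotator) changes, not the metric.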



