OpenAI is a non-profit artificial general intelligence research company. The company was founded in 2015 by Elon Musk, Greg Brockman, Ilya Sutskever, Sam Altman, and Wojciech Zaremba, in San Francisco, California, United States.
OpenAI's mission is to build safe AGI and to ensure that AGI's benefits are as widely and evenly distributed as possible. OpenAI was founded on the belief that the first AGIs will have an impact that greatly exceeds that of preceding AI technologies.
OpenAI is a research company with a full-time staff of 60 researchers and engineers dedicated to working towards the organization's mission. Their focus is on long-term research - working on problems that require fundamental advances in AI capabilities. By being at the forefront of the field, OpenAI aims to influence the conditions under which AGI is created. As Alan Kay said, "The best way to predict the future is to invent it." Because it is concerned with how AGI is created, the organization plans to gradually spend its committed funding, which totals $1 billion.
Much of OpenAI's research is published at top machine learning conferences. The organization also contributes open-source software tools that accelerate AI research and publishes blog posts to communicate its research to others in the field. OpenAI does not believe in keeping information private for private benefit, although it expects to create formal processes for keeping technologies private when there are safety concerns. The company is known for its openness to collaboration with other organizations and researchers, and it keeps its patents and findings accessible to the public.
OpenAI has developed artificial intelligence agents trained to play hide-and-seek against each other, with the agents divided into hiders and seekers. The agents were trained without predetermined explicit incentives beyond the basic objectives: the hiders are to avoid the seekers' line of sight, and the seekers are to keep the hiders in their line of sight. The hiders eventually learned to find objects and use them to block the seekers, and both teams continued to develop new strategies in response to each other. These emergent strategies were the result of an autocurriculum. The company believes that such multi-agent dynamics can lead to complex behavior, similar to human behavior.
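The competing objectives described above can be sketched as a simple zero-sum reward over visibility. This is an illustrative sketch only; the function and names below are hypothetical and not OpenAI's actual implementation, which involves full physics simulation and reinforcement learning at scale.

```python
# Hypothetical sketch of the opposing hider/seeker incentives: seekers are
# rewarded when any hider is visible, hiders receive the opposite reward.
def hide_and_seek_rewards(hiders_seen: list) -> tuple:
    """Given per-hider visibility flags, return (hider_team, seeker_team) rewards."""
    seeker_reward = 1.0 if any(hiders_seen) else -1.0
    return -seeker_reward, seeker_reward
```

Because each team's reward is the negative of the other's, improving one team's strategy automatically creates a harder task for the other - the dynamic behind the autocurriculum.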
MuseNet is a deep neural network capable of composing four-minute musical pieces with up to ten different instruments, in a wide range of genres. The software was not programmed with an explicit understanding of music; instead, it learned by predicting the next note across many thousands of MIDI files. To train the model, the organization used data from ClassicalArchives and BitMidi.
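The next-note prediction objective can be illustrated with a toy model. The sketch below uses simple bigram counts over note tokens purely to show the idea of "predict the next note from what came before"; MuseNet itself is a large transformer, not a count-based model, and the note names here are invented examples.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each note token, how often each successor note follows it."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, note):
    """Return the most frequently observed successor of `note`."""
    return counts[note].most_common(1)[0][0]

# Toy "MIDI" data: two short melodies as note-token sequences.
melodies = [["C4", "E4", "G4", "C5"], ["C4", "E4", "G4", "E4"]]
model = train_bigram(melodies)
```

A model trained this way generates music by repeatedly sampling a next note and appending it to the sequence; MuseNet does the same, but conditions on a long context window of prior tokens rather than only the previous note.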
GPT-3 is an autoregressive language model with 175 billion parameters made by OpenAI that launched on May 29, 2020. The model builds on GPT-2, OpenAI's earlier transformer-based language model, which had 1.5 billion parameters. GPT-3 adopts and scales up features present in GPT-2, such as modified initialization, pre-normalization, and reversible tokenization. GPT-3 demonstrates that scaling up language models substantially improves task-agnostic, few-shot performance compared to GPT-2. OpenAI claims that GPT-3 approaches the performance of state-of-the-art fine-tuned systems on certain defined tasks, both in benchmark results and in the quality of generated samples.
According to the GPT-3 GitHub repository, the model achieves strong performance on NLP datasets covering translation, question answering, and cloze tasks. It can also perform on-the-fly reasoning and domain-adaptation tasks such as unscrambling words, using novel words in sentences, and performing 3-digit arithmetic. At the same time, GPT-3's few-shot learning still struggles on some datasets, and training on large web corpora raises methodological issues, such as possible overlap between training data and evaluation benchmarks. GPT-3 was found to be capable of producing news articles that are difficult for humans to distinguish from news articles written by humans.
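Few-shot prompting of the kind described above works by showing the model a handful of solved examples in plain text and asking it to continue the pattern. The sketch below builds such a prompt for the 3-digit arithmetic task; the exact formatting is an illustrative assumption, not necessarily the one used in GPT-3's evaluations.

```python
# Build a few-shot arithmetic prompt: solved examples followed by an open query.
# The model, given this text, is expected to continue with the correct sum.
def few_shot_prompt(examples, query):
    lines = [f"Q: What is {a} plus {b}? A: {a + b}" for a, b in examples]
    lines.append(f"Q: What is {query[0]} plus {query[1]}? A:")
    return "\n".join(lines)

prompt = few_shot_prompt([(123, 456), (700, 25)], (314, 159))
```

No parameters are updated in this setting: the solved examples condition the model entirely through its input text, which is what makes the performance "task-agnostic".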
OpenAI attended the Grace Hopper Celebration of Women in Computing 2019 as a sponsor.
OpenAI raised a $100,000,000 corporate funding round from Microsoft.
OpenAI formed OpenAI LP as a "capped-profit" company with $1 billion in funding from Microsoft.
OpenAI attended Y Combinator Summer 2016 Demo Day as an exhibitor.
- OpenAI Baselines: A set of high-quality implementations of reinforcement learning algorithms created and shared by the OpenAI non-profit AI research company.
- Reinforcement Learning: An area of machine learning focusing on how machines and software agents act in a specific context to maximize performance and achieve a reward known as the reinforcement signal.
- Neuralink: A nanobiotechnology company founded in 2016 by Elon Musk and Max Hodak, based in San Francisco, California, that develops brain-machine interfaces to connect humans and computers. Neuralink develops devices aimed at helping people with brain diseases and injuries, with a long-term vision of developing devices that merge humans with AI.
- Machine learning: A field of computer science enabling computers to learn.
- Elon Musk: A serial entrepreneur known for creating high-tech, science-fiction-inspired ventures.