Founded by Elon Musk, xAI aims to "understand the universe," with a focus on artificial intelligence. Team members include artificial intelligence researchers who previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. These include Igor Babuschkin, a former Google DeepMind engineer; Tony Wu, who previously worked at Google; Christian Szegedy, formerly of Google Research; Greg Yang, formerly of Microsoft; and Jimmy Ba, an AI researcher and assistant professor at the University of Toronto. Dan Hendrycks, the director of the Center for AI Safety, which advocates for greater awareness of AI and its associated risks, is an advisor to the company; he takes a $1 salary so that he can remain unbiased in any criticism of the company, if necessary.
The founding of xAI came after reports that Elon Musk had secured roughly 10,000 Nvidia graphics processing units (GPUs), which can be used to run state-of-the-art AI systems. According to those same reports, Musk had discussed receiving funding for xAI from investors in SpaceX and Tesla.
Part of xAI's mission has been hinted at by Elon Musk in interviews, in which he has aired his concerns over the potential for current generative AI systems to lie, and the focus xAI will have on truth-seeking and understanding the universe. In a discussion on Twitter Spaces, Musk explained that rather than explicitly programming morality into xAI's eventual AI, the company will seek to create a maximally curious AI, one that would be pro-humanity on the grounds that humanity is more interesting than non-humanity. That Twitter Spaces conversation, held on July 14, 2023, served in part as the launch of the company.
xAI is a separate company from X Corp, the umbrella company of Twitter, but is expected to work closely with Musk's other companies, including Twitter and Tesla, where applications of AI include self-driving cars. xAI is expected to collaborate with Tesla on the development of new semiconductors and AI software to expand the capabilities of Tesla vehicles. For Twitter, Musk has suggested that xAI's AI will be closely involved in the management and development of the platform. The Twitter platform and its data are expected to serve as training material for xAI's AI, as is driving data from Tesla.
One goal of xAI is to create an artificial intelligence capable of advanced mathematical reasoning beyond that of competitor models at the time of launch. Musk expects this to lead to an AI capable of solving complex scientific and mathematical questions, working toward a greater understanding of the universe.
Elon Musk was a founding member of OpenAI, which at the time was a nonprofit organization; he stepped down from that role in 2018. The reasons for his departure from OpenAI are not entirely clear; however, according to an OpenAI blog post and later remarks by Musk, he left OpenAI to prevent conflicts of interest.
In 2023, Musk signed an open letter published by the Future of Life Institute that called for AI companies to pause the development of their systems in order to allow society and regulation to catch up with the technology. Since his departure from OpenAI, Musk has also been critical of OpenAI and the direction the company has taken, notably its shift to being for-profit and closed-source. Given this history, Musk's interest in developing his own artificial intelligence company had been anticipated for some time.
Musk's concerns over AI have informed his move toward developing a "maximum truth-seeking" AI that, in Musk's framing, stands in opposition to other AI platforms that are closed and coded to follow the interests of their developers, which can, according to Musk, be detrimental to the larger society. This includes not attempting to instill a specific set of values or morals into the AI, an approach motivated by the "Waluigi Effect," which describes a situation in which instilling such values can lead to unintended consequences and behaviors that contradict the intended moral framework.