**Sam Altman, OpenAI’s CEO, offered an introspective and wide-ranging look at his thinking and experiences in his remarks at the World Economic Forum in Davos.**
**Altman, who has had an extraordinary and somewhat controversial couple of years — the launch of ChatGPT, a brief removal from his position at the company, and a legal dispute with The New York Times over the use of its articles to train large language models (LLMs) — is at the forefront of a technology that is undeniably transforming the world at a rapid pace.**
**Thus, Davos served as a platform for world and tech leaders, as well as the global community, to reflect, discuss, and even debate the path being paved by artificial intelligence.**
Key Insights
- During the World Economic Forum in Davos, Sam Altman, OpenAI’s CEO, pondered the profound influence of artificial intelligence (AI) on society.
- Altman recognized the growing stress and tension surrounding advancements in artificial general intelligence (AGI) and stressed the importance of caution and readiness.
- Nonetheless, he defended making the technology widely available, expressing confidence in people’s ability to make ethical decisions about how they use AI, while acknowledging its limitations and risks.
- Panelists debated trust in AI, the changing role of large language models (LLMs), and ethical concerns related to training data, particularly in light of OpenAI’s legal battle with the New York Times.
Addressing the strides towards artificial general intelligence (AGI), Altman described the progress as so significant that it is impacting everyone involved:
“I’ve noticed for some time that as we move closer to powerful AI, everyone’s character seems to gain an extra dose of craziness.
“It’s a highly stressful situation, and rightly so, because we’re trying to responsibly handle very high stakes.
“As we inch closer to AGI, the stakes, the stress, the level of tension, all of that is going to increase.”
Altman referred to this stress while responding to a question about his “absurd” temporary expulsion from OpenAI’s leadership.
“This was a small aspect of it, but… as we approach powerful AI, I anticipate more odd occurrences. Having a higher degree of preparation, more resilience, and spending more time contemplating all the unusual ways things can go awry, that’s really crucial.”
Addressing AI Safety Concerns
Altman, who finds himself at the epicenter of the intensifying discussion about AI’s safety for humankind, holds an optimistic outlook. He describes generative AI in relatively harmless terms as “a system that is sometimes correct, sometimes creative, often completely off the mark — you certainly wouldn’t want that at the wheel of your car.
“However, you’d be glad to have it aid you in brainstorming writing topics or assist with code that you have the option to review.”
He insists that humans have the capacity to make ethical decisions regarding AI use:
“Humans tend to understand tools and their limitations more than we generally acknowledge, and they have discovered ways to make ChatGPT beneficial and understand what it’s not suitable for.”
Even with the advent of sophisticated AI models, humans “will determine what should transpire in the world… The OpenAI model excels at certain things, but not life or death situations.
“The more complex question, beyond the technical one, is who gets to decide what those values are — what the defaults are, what the boundaries are — and how it operates in one country versus another?
“What am I permitted to do with it and what am I not? That’s a colossal societal question, one of the largest.
“I believe it’s a promising indicator that, despite its relatively limited current capabilities and significant flaws, people are finding ways to use it for considerable productivity — or other — gains and understand its limitations.
“AI has been somewhat demystified as people are now using it, and I believe that’s always the best method to advance the world with a new technology.”
In response to worries about the degree to which AI could supplant human tasks, Altman remarked, “It does feel different this time. General purpose cognition feels so close to what we all value about humanity.”
However, “humans genuinely care about what other humans think. That seems deeply ingrained in us.”
Marc Benioff, CEO of Salesforce, who was also on the Davos panel, stated that current AI technology is not at a stage where it can replace humans, but rather, it is at a stage where it can enhance them. However, he cautioned about the technology’s future trajectory:
“We just want to ensure that people don’t get hurt. We don’t want something to go terribly awry… We’ve witnessed technology go terribly wrong and we saw Hiroshima—we don’t want to see an AI Hiroshima, we want to ensure that we’ve got a handle on this now.
“That’s why I believe these discussions, this governance, and gaining clarity about our core values are so crucial. Yes, our customers will see increased margins — those CEOs…”
The Trust Factor in Large Language Models
Benioff pointed out that the swift advancement of AI capabilities is prompting questions about trust.
“Trust rapidly climbs the hierarchy — we’re going to have digital doctors, digital people, and these digital entities are going to emerge, necessitating a level of trust.
“We are at a critical juncture because we’re all using Sam’s products and other products and going ‘Wow!’. We’re having this extraordinary experience with AI, an interactivity we’ve not quite encountered before — but we’re not entirely trusting it yet.
“We also need to appeal to regulators and say that if you consider the last decade of social media, it’s been pretty bad — we don’t want that in our AI industry, we want a healthy partnership with these regulators.”
In a CNBC interview at Davos, Intel CEO Pat Gelsinger said, “We’ve now reached the limit of AI’s current utility. This next phase of AI, in my opinion, will be about incorporating formal correctness into the underlying models.
“There are certain problems that AI solves well today, but there are many more that it doesn’t.
“How do you verify that a large language model (LLM) is actually correct? There are a lot of mistakes being made today.
“You still need to know, essentially, ‘I’m enhancing the productivity of a knowledge worker’. But at the end of the day, I need the knowledge worker to confirm whether it’s correct.”
Salesforce AI’s CEO, Clara Shih, told CNBC that the best way to improve the accuracy of LLMs is through experimentation and co-piloting tests. As users grow confident that the technology can be trusted in high-stakes situations, AI systems can adapt.
Shih outlined three phases through which AI adoption will progress:
1. Actively utilizing the technology as a work assistant
2. Observing the technology in autopilot mode to ensure its accuracy
3. Finally, trusting the technology to function properly.
“You can instruct the AI to be cautious for higher stakes until a human co-pilot essentially graduates it to autopilot,” said Shih.
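To make the graduation from co-pilot to autopilot concrete, here is a minimal sketch of how such a review gate might look in code. It is illustrative only: the function names, the 0.8 confidence threshold, and the idea of a model-reported confidence score are assumptions made for this example, not Salesforce’s or OpenAI’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # hypothetical model-reported confidence, 0.0 to 1.0

def review_gate(draft: Draft, high_stakes: bool, autopilot_enabled: bool) -> str:
    """Decide whether an AI draft ships directly or goes to a human reviewer.

    High-stakes tasks require human sign-off until the workflow has been
    'graduated' to autopilot on the strength of its track record.
    """
    if high_stakes and not autopilot_enabled:
        return "route_to_human_review"
    if draft.confidence < 0.8:  # low confidence: escalate regardless of stakes
        return "route_to_human_review"
    return "publish_automatically"

# Example: a customer-facing refund email is high stakes, so a person reviews
# it even though the model is confident in its draft.
draft = Draft(text="We have processed your refund of $120.", confidence=0.93)
print(review_gate(draft, high_stakes=True, autopilot_enabled=False))
# -> route_to_human_review
```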
The Potential for Things to Go Very Wrong
Returning to Altman’s conversation at Davos: even as an optimist about AI, he acknowledges that those who warn about the harm the technology could do to humanity aren’t necessarily “guaranteed to be wrong.”
“There’s a truth to it, in the sense that this is clearly a very powerful technology, and we can’t say with certainty exactly what will happen.
“This is true for all major technological revolutions, but it’s easy to envision with this one that it’s going to have tremendous effects on the world and that things could go very wrong.
“Not exercising caution, not understanding the weight of the potential stakes would be very bad, so I appreciate that people are nervous about it.
Speaking about the OpenAI team, he said, “We have our own apprehensions, but we believe we can navigate through them.
“The only way to do this is to give the technology to the people — let society and technology co-evolve and, step by step, with a very tight feedback loop and course correction, build systems that deliver immense value while meeting safety requirements.
“The technological path we’ve been striving to follow is one we believe we can make safe, and that encompasses a lot of things.
“We believe in iterative deployment, so we release this technology to the world along the way, allowing people to adapt to it, giving society and our institutions time to have these discussions, figure out how to regulate it, and put some safeguards in place.
“It’s good that people are apprehensive about this technology’s downsides — it’s good that we’re discussing it — it’s good that we and others are being held to high standards.
“We can learn a lot from past experiences on how technology has been made safe and how different stakeholders in society have negotiated what safety means and what is safe enough.
“But I have a lot of empathy for the general nervousness and unease the world feels towards companies like ours and others doing similar things.
“It is our responsibility to figure out how to get input from society about how we’re going to make these decisions — not just about what the system’s values are, but what the safety thresholds are, and what kind of global coordination we need to ensure that actions in one country don’t negatively affect another.”
OpenAI vs New York Times: The Ethical Challenges in Managing Training Content
One of the aspects of generative AI that will necessitate novel approaches is remunerating content owners for using their content in training data, Altman shared with the Davos panel.
OpenAI and Microsoft are being sued by The New York Times, which alleges that they duplicated millions of Times articles to train the large language models (LLMs) that power ChatGPT and Microsoft Copilot. The lawsuit argues that these models “threaten high-quality journalism” by impacting news outlets’ ability to protect and monetize their content.
“There’s a great need for new economic models,” said Altman. “I think the current conversation is a bit misplaced, and I believe the meaning of training these models will change significantly in the next few years.”
Altman explained that OpenAI’s goal with data from The New York Times and other publishers is to use it as a source of real-time information in response to user queries, rather than to train the model.
“We could also use it to train the model, but… we’re okay not doing that with any specific [provider]. However, if you don’t train on any data, you lack any facts [to train the model on],” Altman added.
OpenAI had hoped to train on Times data, “but it’s not our priority; we actually don’t need to train on their data,” Altman said. “This is something people don’t understand — any one particular training source doesn’t significantly affect us.”
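As a rough illustration of the distinction Altman draws (citing publisher content at query time rather than folding it into a training corpus), here is a minimal, hypothetical sketch. The index, function name, and sample article are invented for this example and do not describe OpenAI’s implementation.

```python
# Hypothetical sketch: publisher content used as a real-time, cited source
# at query time, instead of being included in the model's training data.

ARTICLE_INDEX = {
    # Stand-in for a licensed, searchable index of publisher content.
    "chip shortage": {
        "source": "Example Daily",
        "snippet": "Analysts expect the chip shortage to ease later this year.",
    },
}

def answer_with_retrieval(query: str) -> str:
    """Fetch relevant publisher content when the user asks, and attribute it.

    The article text never enters a training set; it is looked up, quoted,
    and cited only in response to the query.
    """
    for topic, doc in ARTICLE_INDEX.items():
        if topic in query.lower():
            return f"{doc['snippet']} (source: {doc['source']})"
    return "No licensed source found for this query."

print(answer_with_retrieval("What is happening with the chip shortage?"))
```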
Altman suggests the next stage of LLM development will involve the ability to reason based on smaller, high-quality datasets.
“The next thing I expect to start changing is these models will be able to take smaller amounts of higher-quality data during their training process and think more deeply about it and learn more… As our models begin to work more in this way, we won’t require the same massive amounts of training data.
“But what we want in any case is to find new economic models that work for the whole world, including content owners.
“It’s evident that if you read a physics textbook, you get to practice physics later with what you learned — and that’s generally considered acceptable.
“If we’re going to teach someone else physics using your textbook and your lesson plans, we’d like to find a way for you to get paid for that.
“If you teach our models, if you help provide human feedback, I’d love to find new models for you to get paid based on the success of that.”
In Summation
History is replete with instances of humans making errors — whether innocent or deliberate, we often take a few steps forward and then a few steps back in our quest for safety, freedom, opportunity, and curiosity.
Whether or not AI achieves a form of consciousness, its simulated version of intelligence is already drastically changing the world — and it has accomplished that in less than 18 months since the public was able to use it on a large scale.
In certain situations, we are already quite content to hand over control to AI, or at least seek and follow its advice.
However, we have a great deal to contemplate and a limited timeframe in which to do so.
While world leaders at Davos discuss and debate a genie that is already out of the bottle, one thing is clear: when they reconvene next year, it will feel as though a decade has passed in the AI landscape.