In terms of how they’re going to use these systems, that’s all going to be… difficult, to put it lightly, for society to agree on.”
Like, people don’t just want handouts of money from an AGI.
Even if that means people are going to use it for things that we might not always feel are the best things to do with it.”
Far-Ranging Interview With Sam Altman
And, figuring out how to do that while addressing all of this sort of, let’s call them disruptive challenges, I think that’s going to be very important but very difficult.”
Sam Altman: How Many Years Until Artificial General Intelligence?
Alignment is the scientific field concerned with ensuring that AI does what it is supposed to do and does not go off the rails with unintended consequences.
AGI describes the concept of an AI that thinks and learns in a manner similar to humans.
Altman expanded his thoughts to the global stage, reflecting on how it’s not yet clear what “global governance” will look like.
There is growing concern that nothing is being done right now to develop policies that will help workers who will no longer have jobs.
Early in the interview, Altman was asked what the future of OpenAI was.
It’s a thorny issue because ethics and values are subjective, differing not just from country to country but from person to person.
While Sam Altman declined to guess at how many years it will be until AGI arrives, he did say that “powerful” artificial intelligence systems are coming that will transform how the world works.
“…how do we decide whose values we align to, who gets to set the rules for this…
We’re very excited about… we can imagine both the science and technology but also the product a few years out.
OpenAI’s Sam Altman sat for an interview and shared what’s next for ChatGPT, AGI and his thoughts about the enormity of changes AI will bring on a global level.
OpenAI Has A Roadmap For Future Of AI
They want increased agency, they want to be able to be architects of the future, they want to be able to do more than they could before.
AGI is basically the powerful AI that many had in mind from science fiction novels and movies, where an artificial intelligence has the agency to learn independently.
The next interesting question asked Altman what the biggest challenge is for ensuring that artificial intelligence benefits humanity.
Altman also said that he was unable to put a specific number of years on when AGI will be available.
Altman said that there’s still disagreement on a definition of AGI but that we’re getting closer.
Altman expressed that he was “reasonably optimistic” about solving AI alignment.
But he also acknowledged that the science will have progressed a great deal within two years.
“The problem with AI is not the technology. The problem is not even the technology’s potential effect on the labor market.
Altman was later asked how many years it’ll be until Artificial General Intelligence (AGI) is developed.
…Say hello to the universal basic income, a 500-year-old policy idea whose time has perhaps finally come.”
Sam Altman:
Altman later expressed the importance of making GPT-4 widely available, what he called “globally democratizing” it.
“There’s a lot of people who are excited about things like UBI (and I’m one of them) but I have no delusion that UBI is a full solution or even the most important part of the solution.
An article in The Atlantic suggests that universal basic income is a solution:
Sam Altman responded:
“…we’ve got to decide what… global governance over these systems as they get super powerful is going to look like, and everybody’s got to play a role in that.”
What To Do About Workers Displaced By AI?
“I’m reasonably optimistic about solving the technical alignment problem. We still have a lot of work to do but you know I feel …better and better over time, not worse and worse.”
Whose Values & Morality Define The Ethics Of AI?
That goes back to his ambivalence about whose morality and ethics will define the boundaries of artificial intelligence safety.
One conversation happening right now is about Universal Basic Income (UBI) as a way to help the thousands of workers who may be displaced by AI.
There are difficult questions to be answered about the future of AI. OpenAI CEO Sam Altman offers his opinions on the problems and solutions, admitting that not everything is settled or easy to answer. The interview gives an idea of the staggering impact AI will have on people globally and how Altman makes sense of it all.
The conversation then turned to the near future and what to do about workers whose jobs disappear because of AI.
Altman commented:
He reflected on this issue:
Altman continued discussing the safety of AI from the angle of ethics, asking whose ethics and values will guide what AI safety means.
He said that OpenAI has its own definition and that he has his own personal definition: AGI is an intelligence that can make scientific discoveries that scientists could never have accomplished on their own.
He responded that OpenAI has a clear roadmap of the models to be developed over the next few years.
The problem is that we do not have any policies in place to support workers in the event that AI causes mass job loss.
“We kind of know where these models are going to go. We have a roadmap.
And beyond that, you know, we’re going to learn a lot, we’ll be a lot smarter in two years than we are today.”
Challenge Of Making Sure AI Benefits Humanity
He said that it’s important to make it widely available globally, even if it ends up being used in ways that he personally disagrees with.
“…I would say that I expect by the end of this decade for us to have extremely powerful systems that change the way we currently think about the world.”
Make AI Available
It doesn’t need to be trained to solve specific problems but can lean on what it already knows to figure out solutions, just like a human.
“One of the things that we think is important is that we make GPT-4 extremely widely available.
Sam Altman said:
Watch the interview; there’s so much more in it that reveals who Sam Altman is and what the future of AI will be.
Altman commented: