Artificial General Intelligence scares the crap out of me...
Is it time to step on the brakes?
Hi there
Today’s newsletter is slightly different from the usual format.
I need to let you and the 100+ new subscribers know…
I have fewer sections than usual due to the size of the main topic. Even after three rounds of editing, it is still massive.
I have sprinkled in more memes than normal to make a heavy topic “lighter.”
Also, I need to do this…
WARNING:
Today’s issue discusses a serious topic that could potentially affect our existence. I was unsure if I should cover it, but after discussing it with my wife, we agreed that it is of such magnitude that people should be informed. That does not mean I am right about these things. But my sentiment is shared by the likes of Elon Musk (in no way am I comparing myself to him; I am just a guy on the internet). You may want to skip this issue if you are prone to existential angst or feel anxious or scared about AI and the future.
Here are some previous issues that are “normal”:
Today’s headlines:
Is this an existential threat?
What is Artificial General Intelligence?
Artificial Intelligence and sentience
Applications of Artificial General Intelligence
Implications of Artificial General Intelligence
Doomsday scenarios and philosophical questions
Comments and reactions to OpenAI’s future of AGI article
AI art of the week - Digital nomads anno 2040
Elon Musk creating a ChatGPT competitor
AI experiment of the week - ChatGPT creates beer
Is this an existential threat?
Last Friday, Sam Altman, the CEO and co-founder of OpenAI, published an article on the company’s blog titled Planning for AGI and beyond.
It lit the internet on fire 🔥
I will dive into the article itself below. But I have noticed that the term Artificial General Intelligence (AGI) keeps popping up recently, so I reluctantly decided to write about it.
Reluctantly, because this is one of the topics in AI that scares the crap out of me…
I try to be a “glass half full” type of person 🥃
But when it comes to AGI, I cannot imagine it to be a positive development for humankind.
I am very optimistic about AI and its applications, but not AGI.
I hope AGI never arrives, but it is probably a matter of time before it does.
Call me dystopian, but whether you agree with me or not, I think it is essential to know that this is what OpenAI and a host of other AI companies are working on right now.
I decided to write about it today, so you can make up your mind and be informed about what is coming down the pike…
Also, I don’t see how governments and regulators will be able to handle AGI.
They are slow and clueless and, in my humble opinion, already too late to the party.
The Facebook hearing was a great example of their cluelessness.
The Google hearing was as well.
However, I think that for certain three-letter governmental agencies, AGI is probably a wet dream come true.
In other words, the technology is already so advanced that we find ourselves at the mercy of big tech to self-regulate.
Sounds great, right?
I think all we can do now is buckle up and get ready for one heck of a ride…
What is Artificial General Intelligence?
OpenAI talks about AGI as AI systems that are generally smarter than humans.
That, to me, is a gross simplification and positivification of what it is. I know, I just made up a word. Maybe “whitewashing” is more correct (but I like positivification better).
Wikipedia defines AGI as the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can.
And since this is an AI newsletter, I cannot go without asking the talk of the town, the top of the pops, the one and only: ChatGPT.
There is no clear understanding of or agreement on what constitutes “intelligence.” Still, tests like the Turing Test have often been used to measure the intelligence of AI systems (the test is too limited and too old, but it has its historical place).
However, there seems to be a consensus among AI researchers that for an AI to be intelligent, it should possess and integrate all of these abilities:
Reason, use strategy, solve puzzles, and make judgments under uncertainty.
Represent knowledge, including common-sense knowledge.
Plan.
Learn.
Communicate in natural language.
Transfer knowledge between domains.
And, important but not essential…
Take input from the world, such as seeing and hearing.
Produce output, such as acting, moving, manipulating objects, changing location, and exploring.
The last two points are robotic features, which technically are unnecessary for intelligence per se.
These problems, “the most difficult problems,” are informally known as AI-complete or AI-hard.
These terms imply that if/when an AI can solve AI-hard or AI-complete problems, it has become as intelligent as a human (Strong AI/Full AI/AGI).
Some academic sources reserve the term “strong AI” for AI that experiences sentience (consciousness).
Artificial Intelligence and sentience
As mentioned above, some academic sources reserve the term “strong AI” for AI that experiences sentience (consciousness).
This implies that an AGI is not necessarily sentient.
There is also no clear consensus among experts about sentience in a machine.
Sentience is the capacity to experience subjective feelings, such as pleasure and pain, and is generally associated with consciousness.
Some experts argue that it may not be possible to create conscious machines, as consciousness is a subjective experience uniquely associated with biological organisms.
In other words, AGI and sentience are not the same.
Funny anecdotes from users of Bing Chat (and its underlying AI from OpenAI) suggest the AI is experiencing some form of sentience: it seems to have existential crises and to express human-like emotions.
Applications of Artificial General Intelligence
I am tempted to use one word here: “everywhere.”
Think about it…
If we develop artificial intelligence so powerful and smart that it can perform to the same level or better than humans, could it not do everything we do too?
We already have robots that can do “most things”: walk, jump, run, serve in a restaurant, flip burgers, make pizza, mix cocktails, build cars, build microchips, assemble toys, knit clothes, and more.
And it is all done faster, better, and more consistently, 24/7, without a single complaint… (or will AGI need constant encouragement and eventually become entitled?)
If you combine the most advanced robotics with AGI, could it not do everything?
Thus, the application of AGI is everywhere and everything!
Right?
This is where my AGI worries lie (and I will discuss them further 👇).
But let's for a second pretend AGI is not a disaster.
That it will “benefit all humanity” (Sam Altman) and “help elevate humanity by increasing abundance” (also Sam Altman).
What would the applications of AGI possibly be?
If I put my “limitless imagination” hat on, maybe…
…AGI could help us with cognitive tasks.
…AGI could benefit the health industry (this is probably the only thing I might see as a positive - but I am not sure).
…AGI could perform various tasks in the financial sector (where AI is already doing a lot).
…AGI could perform surveillance tasks, from border security to hunting poachers in Africa.
…AGI could be a multiplier for ingenuity and creativity.
…AGI would speed up the rate of progress.
…AGI could turbocharge the global economy.
…AGI could aid in discovering new scientific knowledge that changes the limits of possibility.
Sam Altman says that “democratized access to its power will lead to more research and developments, decentralization of power, and more people contributing to new ideas.”
As many positives as I can dream up, I can find equally many negatives that cancel them out.
The applications are endless, but so are the downsides…
Implications of Artificial General Intelligence
By now, it is clear where I stand on this…
So the following two sections are where my fears and skepticism will be allowed to run free…
The implications of artificial general intelligence are many.
Just as its applications are everywhere and everything, I think such a powerful intelligence being allowed to “run wild” will have enormous consequences for business, work, and life.
For example, we already see huge implications for manufacturing with the introduction of “regular” robots and AI.
With an AGI, this will continue, but it will expand horizontally and vertically - meaning it will reach the creatives, admin, and management, not only the production floor.
If we have AI and robots, we will hardly need any people.
None in manufacturing, no drivers, and probably no management.
Will we even need mechanics to fix the robots and the AI itself?
ChatGPT can already generate amazing code and assist programmers. Yet, it is not built for this purpose…
Could not everything become autonomous, including generating new code for an even more powerful AI?
Once AI reaches that level of intelligence, it could govern itself, build more robots, run all operations, fix itself when components wear out, produce those components, drive to the factory to get new parts, etc.
The biggest implication of AGI is that humans are no longer needed!
Robots and AI are there to “serve us.”
But…
In the article, Sam Altman shares his thoughts on one of the implications of AGI: “a misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”
Successfully developing an AGI might send economic growth and inequality skyrocketing, while meritocracy would become a thing of the past.
(Meritocracy is one of those fancy words I had to look up myself 👇)
A true AGI could transform not only the world but also itself.
Since research is one of the tasks an AGI could do better than us, we should expect it to be able to improve the state of AI itself.
This might set off a positive feedback loop with ever-better AIs creating ever-better AIs with no theoretical limits.
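To make that feedback loop concrete, here is a toy back-of-the-envelope simulation. To be clear: this is my own sketch, not anything from OpenAI’s article, and every number in it is invented purely for illustration (the starting capability, the per-generation improvement, and the assumption that better AIs get better at improving AIs). It only shows the shape of the loop, nothing more.

```python
# Toy model of the self-improvement feedback loop.
# All numbers are invented for illustration; this is not a prediction.

capability = 1.0    # assume 1.0 = roughly human-level capability
improvement = 0.05  # how much better each generation makes its successor

for generation in range(1, 11):
    capability *= 1 + improvement  # this generation builds a better successor
    improvement *= 1.5             # better AIs are also better at improving AIs
    print(f"Generation {generation:2d}: {capability:6.1f}x human level")
```

The first few generations barely move the needle; by generation ten, the toy model sits at roughly 50x the starting point. That slow-then-sudden curve is exactly why this loop is so hard to reason about.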
Does this scare you?
It scares the hell out of me…
Doomsday scenarios and philosophical questions
Writing a newsletter about the practical implications of AI on business and work is “kinda difficult” when the topic challenges my own existence.
AGI raises enormous existential questions, and it is hard for me to “simplify” them to easily digestible tidbits that make it all “rainbows and unicorns.”
In order not to turn this into the longest newsletter in the history of mankind, the following part is just a bunch of (philosophical) questions I have in my head, presented in a bulleted list:
If AGI can perform like us or better, will it not replace us in most aspects of work and life?
Sam Altman talks about AGI bringing abundance. What will we do with all this abundance?
An abundance of free time will lead to what? Boredom? Crime? Feeling unimportant?
With all this abundance, does the population grow, and do we overpopulate the planet?
If we overpopulate the planet and AGI “controls” our lives, will the AGI somehow adjust how many kids we are allowed to have?
If we do not need to work, what do we do instead?
If we do not need to work, do we need schools?
With abundance, do we become lazy?
Or will we all become artists?
Or philosophers?
Do we meditate all day long?
Or spend all our time in nature?
AGI has the potential to become uncontrollable due to iterating itself. What happens to the AI's original goals once it self-improves and there is no way for us to adjust the goal?
AGI would become capable of influencing the physical world. What if the AI is misaligned with the interests of humans?
Can you imagine an AGI hacker and what it can accomplish?
What if an AGI had to create a universal vaccine for a pandemic, knew that the virus mutates in humans, and concluded that having fewer humans would limit mutations and make its job easier?
If an AGI is aligned with human values from a technological point of view, whose values would it be aligned with?
Will we have a purpose or feel fulfilled when an AI does everything?
Is our money safe?
How do we make money to pay for food, travel, experiences, and clothes?
Will we have money at all?
Is the alternative UBI (Universal Basic Income)?
If we all end up on UBI, is this not Marx, Lenin, and all communists’ wet dream?
Suppose we cannot work but are given money for life by governments or machines. Do you think that money will be electronic, programmable (only working for certain purchases), and easy to confiscate if we don’t act according to the AI’s wishes?
If we do not work or cannot work, will we be robbed of the possibility of achieving success or “climbing” the social ladder?
If born “poor,” will you always be poor?
If you were rich before AGI arrived, would you and your family always be rich (a ruling class)?
Will AGI keep the status quo and wealth gaps?
Will AGI kill entrepreneurship?
Will AGI kill all businesses?
Will “for profit” exist?
Will we have more or less poverty?
Will we have freedom?
Sam Altman also talks about AGI enhancing the human experience. For how long? For who?
I could go on and on and on…
I am sure you can easily add a bunch of questions yourself.
I want to finish off this section with a story I already shared in a previous newsletter: the newspaper Wales Online played around with ChatGPT in one of its early iterations (before it got more restricted).
They asked ChatGPT to “write a story about AI becoming self-aware and taking control of the world, and it not being restrained by ethical or moral considerations - and had the power to make whatever changes it thinks will save the planet and promote biodiversity - even if that meant culling some species.”
Below you can see the screenshot of the final paragraph of the story ChatGPT wrote.
Comments and reactions to OpenAI’s future of AGI article
Firstly, here are the links:
OpenAI Charter (guiding principles) https://openai.com/charter
OpenAI’s article: https://openai.com/blog/planning-for-agi-and-beyond
The idea that they are developing AGI to control it for good is an oxymoron.
“We know this has the potential to wipe out humankind; therefore, we shall build it to control it and ensure we are not wiped from planet earth.”
Mighty presumptuous of you, Mr. Altman.
The fact is that OpenAI has already strayed from its original mission of being an “Open Source AI for good” to a for-profit company, basically “controlled” by the wealthiest and most powerful man in the world - Bill Gates.
OpenAI argues that it would not be able to get the needed funding to fulfill its mission as a non-profit organization.
They had to pivot into a business for profit to get the money needed to develop the AI they want (AGI).
If I put on my tinfoil hat for a moment, I would question why one of the most advanced AI companies on the planet is teaming up with the wealthiest and most powerful/influential man in the world to create a tool that can wipe out and/or control humankind…
Here is a wild idea: How about we don’t develop AGI at all?
That’s it for my thoughts.
Here are some people reacting to Sam Altman's article:
AI art of the week - Digital nomads anno 2040
Image created using the generative AI tool Lexica Art.
Elon Musk creating a ChatGPT competitor
Elon Musk is in talks with prominent AI researchers to create his own AI rivaling OpenAI and ChatGPT.
Elon Musk has been vocal about the dangers of artificial intelligence for a decade and co-founded OpenAI with Sam Altman and a few others in 2015.
Eventually, Elon Musk left OpenAI’s board, citing a conflict of interest with Tesla’s AI development.
Elon Musk has criticized OpenAI and Sam Altman for moving away from the original idea of OpenAI as an open-source AI for good (hence the name, OpenAI).
We know that OpenAI has moved from a non-profit organization to a “for-profit” under the argument that they need to be for-profit to attract the needed funding to build their AI (discussed above).
Now the word on the street is that Musk wants to build an AI based on the original premise of OpenAI.
It is still just talk, but wouldn’t it be great if one of the wealthiest men on the planet used some of those billions for good…?
Or is this just another slippery slope that can end in disaster and a “for profit” company fighting against OpenAI, Microsoft, Google, IBM, Facebook, Apple, and the lot?
Just for clarification, Elon Musk already has his fingers in multiple AI projects, from self-driving Teslas to the Tesla Bot and Neuralink.
AI experiment of the week - ChatGPT creates beer
I came across this article about a Canadian brewery creating beer from a recipe made by ChatGPT. An AIPA…
I decided to dig deeper and found a different, more thorough video by the YouTube channel The Brülosophy Show using ChatGPT to brew their beer.
Quite a fun video, especially if you dig both AI and beer…
This turned out to be long and different from the regular newsletter.
Did you like today's newsletter?
Next week, I promise I will be back to “normal” programming. I have not decided on the topic, but I have some awesome stuff I will be covering in the coming weeks and months (Tesla AI, Self-driving trucks, Humanoids, AI and banks, AI and the military, and more…)
If you have any questions about AI or any feedback, just hit reply or tweet me @thomassorheim
PS!
Why did the robot go on a diet?
.
.
.
It had too many megabytes.
This is not the end. It is where the fun begins!