
How a YouTube cat finder tool changed AI forever

One company is powering 80% of all AIs

Hi there!

It's Thursday and time for another issue of the newsletter that pulls back the curtain on AI to give you a better look at what's really going on.

Today’s headlines:

  • The world's largest AI company you've never heard of

  • AI art of the week: Fear

  • AI in schools: should we fight them, or adapt to them?

  • AI experiment of the week

  • Another Deep Fake

  • The robots are coming…

  • AI definition of the week: Supercomputer

  • Quick bites: interesting AI news

  • An ask

Let's jump in!

The world's largest AI company you've never heard of

Did you know that there is one company more dominant than any other in the AI industry? 

I am not talking about OpenAI or ChatGPT. 

I am talking about NVIDIA. 

NVIDIA who? 

Some of you will recognize the company as the market leader in graphics cards for the gaming industry. 

What you probably did not know is that NVIDIA is also the global leader in AI - specifically in the hardware that powers it: the processors and the supercomputers.

NVIDIA holds about 80% of the AI processor market (estimates range from 70% to 95%, depending on how you measure it).

Rumor has it that NVIDIA's foray into AI was the result of a researcher at Google trying to create an AI to identify cats in YouTube videos. He used thousands of CPUs (Central Processing Units) to train the AI on 10 million videos.

The process was slow, required an enormous amount of energy, and was very expensive.

A friend of the researcher was sure that NVIDIA could power the AI and train it faster and more accurately with only a few GPUs (Graphics Processing Units), versus the thousands of CPUs Google was using.

It turns out he was right.

NVIDIA needed only 12 of its best GPUs to train and run the same AI better and faster.

This is the urban legend behind how NVIDIA pivoted from graphics cards and became the leading hardware manufacturer powering 80% of the AI world.

The truth is that NVIDIA had worked strategically towards building GPUs for AI long before the “cat project” at Google in 2012.

NVIDIA realized that a GPU is very well suited to big, complex computations made up of many independent, concurrent tasks - like AI - whereas traditional CPUs are great at doing one or a few things at a time, like running a desktop computer.
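To make the contrast concrete, here is a toy sketch in plain Python (an analogy, not real GPU code). The bulk of AI math consists of multiply-add operations where each output is independent of the others - and it is exactly that independence that lets a GPU's thousands of cores work on them all at once, while a CPU largely churns through them one by one.

```python
# Toy illustration: the kind of arithmetic a neural network layer does.
# Each output value depends only on the inputs, not on the other outputs,
# so every one of them could be computed simultaneously on a GPU core.

def neuron_output(weights, inputs):
    # One multiply-add per input: independent units of work.
    return sum(w * x for w, x in zip(weights, inputs))

def layer_output(weight_rows, inputs):
    # Every neuron here could run on its own GPU core at the same time;
    # a CPU would largely work through them sequentially.
    return [neuron_output(w, inputs) for w in weight_rows]

weights = [[0.5, -1.0], [2.0, 0.25]]
inputs = [4.0, 2.0]
print(layer_output(weights, inputs))  # [0.0, 8.5]
```

Real AI models do this with billions of weights instead of four, which is why hardware that can do thousands of these operations in parallel wins.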

So NVIDIA made a strategic decision to work towards becoming the leading AI chip manufacturer. 

In addition, they developed CUDA, a parallel computing platform and programming interface for writing software that runs on NVIDIA GPUs and supercomputers.

Today, most of the AI tools we know, like ChatGPT and Midjourney, run on software stacks built on CUDA to interact with NVIDIA hardware.

NVIDIA GPUs are deployed by almost all the biggest cloud providers: AWS (Amazon), Google, Alibaba, and Azure (Microsoft). 

Azure is the one that is powering ChatGPT and all the OpenAI AIs. 

NVIDIA, in other words, managed to conquer both the hardware and the software side of AI - which many say is the reason behind the company's strong position. 

But competitors are coming in hot left and right.

OpenAI has developed an open-source GPU programming language called Triton, which is gaining popularity and is said by some to be “better.”

AMD is working hard towards building new and advanced GPUs for data centers and supercomputers after acquiring Xilinx. 

Intel has bought two AI chip startups, Nervana and Habana Labs.

Google has started making its own chips.

Amazon bought Annapurna Labs in 2015 and now develops its own chips, called Inferentia.

Baidu has Kunlun.

Qualcomm has Cloud AI 100. 

And then there are startups like Graphcore, SambaNova, Cerebras, Mythic AI, and Blai. 

We are in for one heck of a dogfight over the AI chip market, estimated to reach US$1.6 billion by 2030.

But NVIDIA is not standing still and taking punches. 

Back in 2016, NVIDIA realized that it was losing the fight for the best AI researchers and developers to Google and Facebook because it did not have a supercomputer.

So NVIDIA decided to build supercomputers like nobody else. 

And boy did they go to town…

Today NVIDIA owns many supercomputers, including Selene, which until recently was the largest privately owned supercomputer in the world. 

If there is one thing the top AI talent wants more than a fat salary, it is to work on the largest and best supercomputers available.

Until recently, this meant that if you were the best of the best in the world of AI, your choice was basically between NVIDIA, the US Government (two supercomputers), Japan (one), and China (two).

I just checked the recent numbers, and we now have to add both the EU (two supercomputers) and IBM (two NVIDIA Supercomputers) to that list. 

I asked BlueWillow AI to create an image of a futuristic supercomputer.

Applications 

In every newsletter, I talk about the applications of the technology or the tool of the main story. 

Today I am talking about hardware (primarily), so it is a bit different this time. 

The applications of GPUs in AI are the obvious ones talked about above. 

But the development of these GPUs also matters for EVs, self-driving cars, the Metaverse/VR/AR, and humanoid robots - and I am sure it has loads of military applications as well.


Implications

Normally I have more to talk about in the applications section of the story. 

But when it comes to AI chips, it is the implications that take center stage today.

A growing concern with the current wave of AI is the cost.

Not only in terms of money but also energy, environment, and even war. 

Let’s unpack.

It is estimated that the cost to train GPT-3 is around $5-8 million. 

And that is for a single training run.

GPT-3 is the AI running most of the “regular” AI tools you know and use, including ChatGPT (technically, ChatGPT’s AI is GPT-3.5). 

GPT-3 has 175 billion parameters.

Google’s newest AI - Pathways Language Model (PaLM) - has 540 billion parameters.

China supposedly has the largest AI in the world - WuDao 2.0 - with 1.75 trillion parameters.

Feel free to get out the calculator and do some cost estimations… 
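Or let a few lines of Python do it. This is a naive back-of-the-envelope that assumes cost scales linearly with parameter count - my simplification, since real training costs also depend on data, hardware, and run time:

```python
# Naive back-of-the-envelope: scale GPT-3's estimated training cost
# linearly with parameter count. (A big simplification - actual costs
# depend on training data, hardware, and duration, not just model size.)
GPT3_PARAMS = 175e9          # 175 billion parameters
GPT3_COST_USD = (5e6, 8e6)   # estimated $5-8 million per training run

def naive_cost(params):
    """Scale GPT-3's cost range by the ratio of parameter counts."""
    lo, hi = GPT3_COST_USD
    scale = params / GPT3_PARAMS
    return lo * scale, hi * scale

for name, params in [("PaLM", 540e9), ("WuDao 2.0", 1.75e12)]:
    lo, hi = naive_cost(params)
    print(f"{name}: ${lo / 1e6:.0f}-{hi / 1e6:.0f} million per training run")
```

Even with this crude math, a model ten times the size of GPT-3 lands in the $50-80 million range per run - and that is before you repeat the training.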

There are thousands of AIs around the world, and many are being trained daily.

Energy. 

Danish researchers have calculated that the energy required to train GPT-3 a single time has roughly the carbon footprint of driving 700,000 km (435,000 mi).
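To put 700,000 km into tonnes of CO2, here is a rough conversion. The ~120 g of CO2 per km is my assumed average for a typical passenger car, not a figure from the Danish study:

```python
# Rough CO2 estimate for the "driving 700,000 km" comparison.
# The 0.120 kg/km emission figure is an assumed passenger-car average
# (my assumption), not a number taken from the study itself.
KM_DRIVEN = 700_000
CO2_PER_KM_KG = 0.120  # assumed average passenger-car emissions

total_tonnes = KM_DRIVEN * CO2_PER_KM_KG / 1000  # kg -> tonnes
print(f"~{total_tonnes:.0f} tonnes of CO2 per GPT-3 training run")
```

Under that assumption, one training run works out to roughly 84 tonnes of CO2 - per run, for one model.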

Researchers at the University of Massachusetts Amherst have estimated that training a single AI model can emit as much carbon as five cars in their lifetimes. 

Meta (Facebook) is one of the many companies exploring AI's environmental impact. If you like scientific reports, you can read a report from Meta here.

But Thomas, you mentioned war… 

I read somewhere that the next war may be over AI resources. 

It sort of makes sense (to the extent war makes any sense). 

I don’t really want to discuss politics here. 

But think about it. 

Most wars in modern times are/were about resources. 

TSMC is the largest chip manufacturer in the world. 

TSMC is in Taiwan. 

Most American tech companies are producing their chips in Taiwan.

Given the recent years of geopolitical tension with China getting more aggressive about “taking back” Taiwan, what happens if China invades Taiwan? 

Taiwan is a close ally of the USA, and Taiwan's chips are the “heart and blood” of the American tech industry - the largest in the world... 

A final thought experiment

Moore's law states that the number of transistors on a microchip doubles roughly every two years - which in practice means we can expect the speed and capability of our computers to keep increasing at that pace.

With all the supply chain issues and factory shutdowns we have seen around the world over the past 2-3 years, would AI be twice as big and twice as advanced as it already is if we had not had the pandemic?
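The doubling in that thought experiment compounds quickly. A quick sketch, assuming a clean two-year doubling (which real hardware only approximates):

```python
# Moore's law as arithmetic: transistor counts doubling every two years.
# Assumes a perfectly clean doubling, which real chips only approximate.
def moores_law(start_count, years):
    """Project a transistor count forward under a 2-year doubling."""
    return start_count * 2 ** (years / 2)

# Starting from a hypothetical chip with 1 billion transistors:
for years in (2, 4, 10):
    projected = moores_law(1e9, years) / 1e9
    print(f"after {years} years: {projected:.0f} billion transistors")
```

One doubling lost to a disrupted two-year cycle means every later generation starts from half the base - which is why a couple of bad years could matter so much.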

-

I wrote a longer article on this topic over at my blog, The Future Handbook, if you want to read even more… 

AI art of the week - Fear

Image created using generative AI BlueWillow.

It’s not perfect, but I created this image in less than a minute without making additional adjustments. 

I am not sure if this is just a kid screaming or - judging by his hair - maybe he is on a rollercoaster and it is very scary.

AI in schools: should we fight them or adapt to them? 

After the explosive growth of ChatGPT, which is barely two months old now, schools around the globe are “panicking.”

The media is flooded with stories of teachers and professors catching students using AI to write their papers. 

Now schools are expressing concerns about plagiarism - which I think is just a creative angle to try to shut down the use of generative AI text tools like ChatGPT. 

The schools are afraid of the new tech, and I think they are trying to “contain” its usage while it is still “small.” 

Last week I wrote about how ChatGPT passed the United States Medical Licensing Exam (USMLE) without being specifically trained for it.

I guess they are right to be concerned about AI.

In a recent interview, CEO Sam Altman of OpenAI said that they are working on ways to identify ChatGPT content but pointed out that creating tools that perfectly detect AI plagiarism is fundamentally impossible. 

Sam Altman, screenshot StrictlyVC interview (source)

A determined student (or writer) will always find ways around such tools, since they are all based on algorithms that operate in a predictable way.

Just as you can use algorithms to detect AI text, you can use algorithms to learn how the detector works and change the text just enough to slip past it.

Sam Altman warns schools and policymakers against relying on plagiarism detection tools and says that schools need to adapt to the new reality.

I think the only way forward is to adapt. 

AI is here to stay, and there is nothing schools, universities, or policymakers can do about it. 

AI is in our lives now. 

It is not a fad. 

In a matter of years, I think we all will have AI assistants helping us in work and life. 

As I have said in previous newsletters and articles, I think “AI’ing” or “ChatGPT’ing” will soon turn into verbs, just like "to google" did in 2006. 

AI will be an integral part of how we live our lives. 

The sooner schools and policymakers accept this fact, the sooner they can develop meaningful measures and include them correctly in learning.

In the interview, Sam Altman said, “We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.” 

I agree 100% with Sam on this. 

What do you think about AI entering the classroom? 

Do you think we should fight it, or should we adapt to it? 

Let me know by hitting reply to this email. 

The full interview is here (1 hour):

Part 1

Part 2

AI experiment of the week

This week's experiment is super simple. 

I came across many tweets saying that you can manipulate ChatGPT into making simple false claims. 

One example I came across was people telling ChatGPT that its answer to a math problem was wrong, even though it was mathematically correct.

When told it is wrong, ChatGPT apologizes and corrects itself to the new “truth” - which is, in fact, wrong.

This worked when I tried it on Wednesday, January 25th, 2023.

It may not work for long. 

However, it shows that not everything is 100% reliable when it comes to AI.

Another Deep Fake

I have spoken about Deep Fake AI technology in two issues already, so I will not go deep into this. 

But while preparing for an AI experiment (for next week), I came across another Deep Fake. 

This one is by previously featured Deep Fake company Synthesia.

Have a look:

How they made it:

The robots are coming…

I found this on Twitter… It’s going viral!

What do you think? 

Cool, or scary? 

AI definition of the week: Supercomputer

A supercomputer is a type of computer that is designed for performing complex calculations and simulations quickly. They are typically used for tasks that require a lot of processing power and memory, such as weather forecasting, scientific research, and national security. Supercomputers are made up of many processors, which work together to perform calculations in parallel. They also have a large amount of memory, which allows them to store and access large amounts of data quickly. Additionally, supercomputers are connected to high-speed networks, which allows them to share data and communicate with other computers. This allows them to perform even more complex calculations and simulations.

Explained and simplified by ChatGPT

Quick bites - interesting AI news 

MSG Entertainment uses facial recognition AI to bar lawyers involved in litigation against the company from setting foot on its properties. This includes Madison Square Garden, home to the New York Knicks (NBA) and the New York Rangers (NHL). New York Attorney General Letitia James suggests the practice is against the law.

Google is reportedly scrambling to release AI tools as OpenAI and Microsoft take center stage in AI development and news. Supposedly, Google is working on no fewer than 20 new AI tools that may be introduced this year.

A Google researcher - the creator of the deep learning framework Keras - is skeptical of the current AI craze and says it has many parallels to the 2021 web3 madness (crypto madness). Read the full story here.

DeepMind, an AI company owned by Google, developed AlphaCode, an AI coder, and entered it into a highly competitive coding challenge. The result? It beat around 50% of the human coders in the competition. AlphaCode is far from perfect, but it gives us an indication of where the future of computer programming is heading.

An ask

I really want to get this newsletter out to more people. 

I think that knowledge about the commercial use of AI beyond ChatGPT is important for businesses and humans in general. 

I would appreciate it a lot if you would share this newsletter with someone.

Forward it, or share it on Twitter, Facebook, LinkedIn, or whichever social media you prefer.

Thank you so much in advance 🤖

If you have any questions about AI or any feedback, just hit reply or tweet me @thomassorheim 

That's all for this time! 

Until next week, may the force be with you! 

Thomas

PS! What do you get when you cross a computer and a lifeguard?

.

.

.

A screensaver.

This is not the end. It is where the fun begins!
