4 Factors That Could Stop AI

13-03-2023
#tech

AI is undoubtedly the hottest technology development right now, with giants like Microsoft, Google and Meta piling enough money into this exciting technological frontier to solve world hunger and still have change for a meal deal.

Right now, a large part of that investment is speculative at best, with commercial AI still in the early stages of development. Tools like ChatGPT are amazing, but their true business cases are only beginning to be explored. It's like the early days of the internet: a lot of hype, and only vague speculation about exactly where the chips are going to fall.

I, for one, am super excited at the prospects of commercial AI, and, giving in to my scientific tendencies, decided to look into ways to disprove that commercial AI is going to completely revolutionise the way we live and work. Much like with the internet, there are a tonne of ways that companies who wade into the depths of AI can sink; just ask pets.com. I've broken down the four biggest banana peels that commercial AI may slip on, which also nicely highlights some adjacent business cases: wherever there is a problem, you can sell a solution. Let's start with one of the biggest:

Computational costs making AI development unprofitable

To create a functional AI, you first need to "train" it. Training an AI involves feeding an absolutely gargantuan amount of data into the model so that it can learn appropriate responses. We aren't talking one or two essays here: ChatGPT was trained on around 570GB of text, which is more than you can get through on your lunch break.

To process and learn from all this data, an AI model relies on GPUs (graphics processing units). If you tried to use just one GPU, however, it would take an estimated 355 years to train ChatGPT, which is probably too long to stretch a business plan for commercial use. So multiple GPUs are run in parallel, which ramps up costs, because GPUs are expensive, and the fewer you use to save money, the longer the training takes. For something as complex as ChatGPT, researchers estimate you would need 1,024 GPUs, and with GPUs going for around $15,000 each, you are looking at roughly $15 million just to train the model in the first place.

So development and training take time and money, but once it's built, it's probably fine, right? Wrong. LLMs like ChatGPT are so large that they can't be neatly hosted on a single server like my site is. They require a cluster of servers just to store the model, and servers are expensive to run because they consume electricity like nobody's business. OpenAI have actually outsourced this hosting, so they don't have to build and maintain a physical server room, but it isn't exactly saving them beaucoup bucks: they're reportedly paying $100k a day to keep ChatGPT up and running.
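
To put those numbers together, here's a rough back-of-envelope calculation. The figures (355 GPU-years, 1,024 GPUs, $15,000 per GPU, $100k a day in hosting) are the public estimates quoted above; the perfect-parallel-scaling assumption is mine, and real clusters never achieve it.

```python
# Back-of-envelope costs for training and running a ChatGPT-scale model,
# using the public estimates quoted in the paragraphs above.

SINGLE_GPU_YEARS = 355          # estimated training time on one GPU
NUM_GPUS = 1_024                # GPUs run in parallel
GPU_UNIT_COST = 15_000          # rough price per high-end GPU, in USD
HOSTING_COST_PER_DAY = 100_000  # reported daily server bill, in USD

# Assuming (optimistically) perfect scaling across all GPUs:
training_days = SINGLE_GPU_YEARS * 365 / NUM_GPUS
hardware_cost = NUM_GPUS * GPU_UNIT_COST
annual_hosting = HOSTING_COST_PER_DAY * 365

print(f"Training time with {NUM_GPUS:,} GPUs: ~{training_days:.0f} days")
print(f"Up-front hardware cost: ~${hardware_cost:,}")
print(f"Hosting cost per year: ~${annual_hosting:,}")
```

Even under that generous assumption, you get over four months of training, a $15 million hardware bill, and an annual hosting bill north of $36 million, all before a single customer has paid you anything.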

The software and the science are simply outpacing hardware development. From 2018 to 2020, Nvidia's top GPU memory increased from 32GB to 40GB, far too slowly to keep up with the rapidly growing memory demands of ever more complex AI. Getting computation costs down to a manageable level, so that companies can, you know, turn an actual profit, is one of the largest roadblocks for commercial AI development.

Adjacent businesses to look out for that may solve the computation cost problem:

Check out more info here!

Lack of accuracy impacting business applications

So say, in the interest of turning a profit, you keep costs low and don't feed too much data into training your AI, so you don't have to splurge on too many GPUs and servers. Well, then you are going to run into a problem with accuracy, one that is already being widely seen with ChatGPT and other models.

A core problem with LLMs is how widely misunderstood they are. They're not thinking, living things, nor are they massive web crawlers that return indexed information. What they actually do is use the data they were trained on to statistically work out the best string of words to respond to an input. The less data the model is trained on, the worse it is at statistically "guessing" the next word in any sentence it spits out.
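
If you want an intuition for what "statistically working out the best string of words" means, here's a toy sketch. Real LLMs use neural networks over tokens rather than a lookup table of word pairs, but the underlying idea, picking a likely continuation given what came before, is the same.

```python
import random

# A toy "language model": counts of which word follows which in some
# tiny, made-up training data. Real LLMs learn billions of parameters
# instead of a lookup table, but the principle is the same.
next_word_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"on": 2},
    "on": {"the": 2},
}

def predict_next(word):
    """Sample the next word in proportion to how often it followed
    `word` in the training data."""
    counts = next_word_counts.get(word)
    if counts is None:
        return None  # never seen this word: the model has nothing to go on
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short sentence by repeatedly predicting the next word.
sentence = ["the"]
for _ in range(4):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))  # e.g. "the cat sat on the"
```

The less training data behind those counts, the worse the guesses, which is exactly the accuracy trade-off we're talking about.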

Little blips in the AI's ability to correctly respond to enquiries are so common now that there's actually a word for them: "hallucinations". This explains why ChatGPT thinks that two pounds of bricks weigh the same as one pound of feathers, and why it sometimes gets basic mathematical questions wrong. The more complex the question, the less comparable training data the model has to draw on, and the further accuracy falls. That's why ChatGPT currently has a less than 10% accuracy rate on basic arithmetic when the numbers involved have more than five digits.
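
That arithmetic claim is also easy to test for yourself. The sketch below assumes a hypothetical ask_model() function wired up to whichever chatbot you want to evaluate; Python computes the exact answers, so every disagreement is a model error.

```python
import random
import re

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the chatbot being tested."""
    raise NotImplementedError("wire this up to your model's API")

def arithmetic_accuracy(digits: int, trials: int = 100) -> float:
    """Fraction of random additions the model gets exactly right."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask_model(f"What is {a} + {b}? Answer with just the number.")
        # Pull the first number out of the reply and compare exactly.
        numbers = re.findall(r"-?\d[\d,]*", reply)
        if numbers and int(numbers[0].replace(",", "")) == a + b:
            correct += 1
    return correct / trials

# e.g. arithmetic_accuracy(6) to probe the 5+ digit claim above
```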

This is all well and good for a standard, fun chatbot, and Reinforcement Learning from Human Feedback (RLHF) goes some way to mitigating it, but for commercial use, a lot of cases will need to be accurate straight away, rather than letting the system learn from its own mistakes a bunch of times. This means AI developers are going to want to feed more and more data into their models during the training cycle, which feeds back into my first point and creates a vicious cycle.
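
For context, RLHF works by training a separate "reward model" on human preference comparisons and then nudging the language model towards responses that score well. Here's a heavily simplified sketch of the reward-model half; the two-number feature vectors and the preference data are invented purely for illustration, as a real reward model works on the LLM's own internal representations.

```python
import numpy as np

# Humans compared pairs of responses and picked a winner. Each response
# is represented here by a made-up two-number feature vector.
preferred = np.array([[0.9, 0.2], [0.8, 0.1], [0.7, 0.3]])  # chosen
rejected  = np.array([[0.1, 0.8], [0.2, 0.9], [0.3, 0.7]])  # not chosen

w = np.zeros(2)  # reward model weights
lr = 0.1

# Bradley-Terry style training: push the reward of the preferred
# response above the reward of the rejected one in each pair.
for _ in range(500):
    diff = (preferred - rejected) @ w
    prob_correct = 1 / (1 + np.exp(-diff))  # P(human prefers "preferred")
    grad = (preferred - rejected).T @ (1 - prob_correct)
    w += lr * grad  # gradient ascent on the preference log-likelihood

def reward(features):
    return features @ w

print(reward(np.array([0.9, 0.1])))  # high: looks like a preferred answer
print(reward(np.array([0.1, 0.9])))  # low: looks like a rejected answer
```

The second half, reinforcement learning against that reward, is where models improve from feedback over time, which is exactly the luxury many commercial use cases can't afford.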

Adjacent businesses to look out for that could solve the accuracy problem for AI:

Get more info here!

Public backlash causing regulatory crackdowns on AI companies

Despite the many adulatory LinkedIn posts and Twitter threads, the world is not all-in on AI just yet, not least because plenty of people are worried it's going to reduce the world to a smouldering heap of ash.

One of the main concerns around current AI products is that they are being heavily used to disrupt creative industries like copywriting and art production. Famed sci-fi magazine Clarkesworld recently had to stop accepting submissions after being inundated with thousands of AI-generated stories, as "hustlers" tried to score a quick, lazy buck by publishing automated fiction. This in turn hurts actual human creatives trying to put food on the table in an already hard economic climate.

Now, you could argue that the relentless onward march of technology comes for us all at some point, but one of the strongest anti-AI positions is that these AIs are trained on millions of human-created works, and the creators are given no credit whatsoever, nor reimbursed for their (usually unwilling) contribution.

OpenAI co-founder Sam Altman has even admitted as much when discussing his company's DALL·E model, saying that there were no plans to monitor or credit the works being used to train it.

Legislation is laughably slow to catch up with the lightning-quick movements of private innovation, but class action lawsuits are now getting underway, with copyright owners of art, writing and even code suing AI companies for using their works without consent or credit.

As AI becomes more and more commercially implemented, these issues are only going to grow, as courts and legislators inevitably become more involved when work is taken for commercial gain rather than to build a free research chatbot (which is what ChatGPT originally was).

Adjacent businesses to look out for in the regulatory space:

Check out more here!

Data monopolies affecting supply of data to AI companies

Data is to AI what oil is to cars. Massive quantities of data are needed to train AIs for commercial use, and any attempt to skimp on that data can result in a staggering lack of accuracy. But you already knew this, mostly because I just told you. What's often overlooked, however, is exactly where this data comes from.

Access to large, labelled data sets is actually harder to come by than you might think, and this skews the field massively in favour of already established data giants like Google, Meta, Microsoft and Amazon. And it's not just the sheer size of their data sets that gives these monoliths an edge. For data to be worth anything, it needs to be refined, just like oil. Well, not just like oil, but you get what I mean; don't get snarky with me on my own website. Refinement here means correct labelling, so the AIs understand what is being fed to them. Large workforces, endless cash reserves and access to data on pretty much every person alive mean that tech giants can build a monopoly on exactly the kind of data that commercial AI needs to be trained on.
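
To make "refinement means correct labelling" concrete: the difference between raw and refined data is simply whether each example carries the answer the model is supposed to learn. A minimal illustration, with invented examples:

```python
# Raw, "unrefined" data: the model has no idea what any of it means.
raw_data = [
    "traffic_cam_0041.jpg",
    "traffic_cam_0042.jpg",
    "I'd like a refund please.",
]

# Refined, labelled data: each example is paired with the answer the
# model should learn to reproduce. Producing labels like these at
# scale is exactly where big workforces and big user bases become a moat.
labelled_data = [
    ("traffic_cam_0041.jpg", "contains_traffic_light"),
    ("traffic_cam_0042.jpg", "no_traffic_light"),
    ("I'd like a refund please.", "intent:refund_request"),
]
```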

Case in point: the reason Google constantly asks you for road- and vehicle-themed captchas is that it uses that labelled data to train its self-driving cars to recognise obstacles.

Imagine you want to create an AI application that helps users predict their future needs, and you have to compete with Amazon, who have access to billions of users' worth of data, shipping routes, the floorplans of millions of homes, billions of seconds of recorded conversations, footage from doorbell cameras, and the hosting of roughly a third of the world's cloud infrastructure. And you have an Excel spreadsheet of how many times your buddy Dave ordered Deliveroo over a six-month period. You're going to lose, and lose hard. This innate lack of access to quality data will naturally limit competition, and if you're a free-market capitalist, that means a lack of quality.

Until we level the playing field on access to data, the sector as a whole is going to suffer, which means less public benefit from commercial AI.

Adjacent businesses to look out for that could solve data shortages:
