AI is getting more powerful by the minute. With billions upon billions of dollars pouring in, and LLMs like ChatGPT and Bard outperforming even seasoned employees at some tasks, it has disrupted every industry sector. But how can AI get killed by itself?
What the heck is AI?
It's simple! AI stands for Artificial Intelligence, and because it is artificial, you can't just use it without teaching (supervision) or examples (a dataset), just like everything else in the world. But here's the catch: in the real world, it might not work as expected. Every factor can make it succeed or collapse into its own doom. So how do you know which information is critical? What exactly does AI need to consider when it's caught in a perfect storm?
Take a look at the COVID-19 era. Closing doors and borders around the globe to head off disaster brought significant benefits and a 180-degree turn in how we live, even though we still haven't figured out how to deal with all of those changes. Everything dies and grows so fast! It changed society, and especially the airline industry, forever. Here's how.
People's work needs people, not AI
On a normal day at headquarters, each airline runs automatic pricing algorithms that try to pull in as much revenue as possible while staying profitable overall, even if some tickets are sold at a loss. This keeps the planes full and the flight attendants employed.
During the early COVID outbreak, those fares sank fast. They reached the point where almost anyone could afford them, but even at prices that low, no one was taking the deal. Do you know why? Because they couldn't!
A long list of COVID-19 requirements from both departure and arrival countries locked travellers out of their wonderful summertime, so demand for tickets stayed low. Too low, to be exact.
The system was never informed of the situation, and prices kept dropping until someone had to set fares manually on every affected route. This is where AI falls short: unexpected events and data it was never fed.
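To make that failure mode concrete, here is a toy sketch of a purely demand-driven pricing rule. Everything in it (the numbers, thresholds, and function names) is invented for illustration; real revenue-management systems are far more sophisticated, but the feedback loop is the same: when the real cause of low demand never enters the model, the rule just keeps cutting.

```python
# Toy demand-driven pricing rule. All numbers and names are invented for
# illustration; real airline revenue management is far more sophisticated.

def reprice(current_fare: float, seats_sold: int, seats_expected: int) -> float:
    """Cut the fare when sales lag expectations, raise it when they exceed them."""
    ratio = seats_sold / seats_expected
    if ratio < 0.5:
        return current_fare * 0.9   # weak demand: cut the fare by 10%
    if ratio > 1.2:
        return current_fare * 1.1   # strong demand: raise it by 10%
    return current_fare

fare = 300.0
for day in range(10):
    # During a border closure, demand stays near zero at any price, but the
    # rule only sees "low sales" and keeps cutting, day after day.
    fare = reprice(fare, seats_sold=2, seats_expected=100)
    print(f"day {day}: fare = {fare:.2f}")
```

Without a floor price or an external signal like "travel restrictions in effect", the loop never stops. That is the unfed data.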
The whole point of this story is that the data an AI has been fed controls what it will think and do. Feed it bad information, or leave out a critical criterion, and it generates bad results. That's why you still need to hire people to understand people.
But can generative AI save the world with its ability to understand natural-language input from customers and become your AI assistant? Let's start with the revolutionary generative AI called "ChatGPT".
Taking a look back at ChatGPT
OpenAI's masterpiece in generative AI: the almighty ChatGPT. Last time I checked, the latest version was 3.5 (or, as they like to call it, GPT-3.5 Turbo), the breakthrough update after GPT-3 that got faster (hence the "Turbo" suffix), and GPT-4 is soon to be generally available (GA) to the public.
What does "generative AI" even mean? Let's just say it is an AI that generates content, such as images, music, or text, based on patterns and examples it has learned from existing knowledge and the task it is given.
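If you've never touched it programmatically, here is a minimal sketch of asking a generative model for text, using OpenAI's Python client as it looked in the GPT-3.5 Turbo era. The library's interface has changed since, so treat this as illustrative rather than current:

```python
# Minimal sketch: ask a generative model to produce text. Uses the OpenAI
# Python client as it looked in the GPT-3.5 Turbo era; check the current
# docs before copying, since the interface has changed since then.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a two-line poem about airline tickets."}],
)
print(response.choices[0].message.content)
```

Same pattern as images or music: you give it a task, and it generates content from the patterns it has learned.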
Users on the premium tier, "ChatGPT Plus", which costs a hefty $20 a month, get early access to GPT-4 and other bleeding-edge features. Expectedly, some users are not happy about it. Here are some of the complaints and negative feedback:
- Generates responses more slowly than GPT-3.5
- Still produces biased output and hallucinations, and struggles with adversarial prompts
- It gets, well, stupider
I have to say that every piece of negative feedback is completely understandable; these are things that take time and experience to understand and perfect. But the tech market has taught me one thing: you can't perfect it.
Those AIs are trained to slay
Both GPT-3.5 and GPT-4 were trained on data up to September 2021. But why can't they be trained on data from 2023, or search the Internet through Google (or Microsoft Bing)? How can a competitor like Google Bard (Google's own generative model) access the latest data while OpenAI's GPT-3.5 can't?
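The usual trick for giving a model with a fixed training cutoff access to fresh facts is not retraining at all: fetch up-to-date text at query time and put it in the prompt. Below is a conceptual sketch of that retrieval pattern; `search_web` and `ask_model` are hypothetical stand-ins for a search API and an LLM call, not real library functions, and this is not a claim about how Bard works internally.

```python
# Conceptual sketch of retrieval augmentation: a model trained only on
# pre-2021 data can still discuss 2023 if fresh text is fetched at query
# time and placed in the prompt. Both helpers below are hypothetical.

def search_web(query: str) -> str:
    """Hypothetical: return an up-to-date text snippet for the query."""
    ...

def ask_model(prompt: str) -> str:
    """Hypothetical: send the prompt to an LLM and return its reply."""
    ...

def answer_with_fresh_context(question: str) -> str:
    context = search_web(question)          # fresh information
    prompt = (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return ask_model(prompt)                # frozen model, fresh prompt
```

The model's weights stay frozen in September 2021; only the prompt is fresh.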
But with all the data in the world, how can GPT-4 be getting dumber? Where exactly do we measure its performance?
- How useful is the answer for the user's intent?
- How well does it perform when the prompt is vague?
- How much creativity can it draw from broad knowledge?
- How well can it describe, design, or execute a given task?
We still don't have a solid way to evaluate and compare LLMs, because how people use them is just too broad. Even if we can compare them against humans with the Turing test or medical-school exams, we still need something that explains why users keep complaining that the model is "getting dumber".
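For what it's worth, the crudest possible comparison looks something like the sketch below: run the same questions through each model and count answers containing the expected fact. `ask_model` is again a hypothetical stand-in for an API call, and real evaluations (benchmark suites, human preference ratings) are far more careful than keyword matching, which is exactly why "getting dumber" is so hard to pin down.

```python
# Crude model-comparison sketch: count answers containing the expected fact.
# `ask_model` is a hypothetical stand-in for an LLM API call. Real
# evaluations are far more careful than keyword matching.

QUESTIONS = [
    ("In what year did the Apollo 11 Moon landing happen?", "1969"),
    ("What is the chemical symbol for gold?", "Au"),
]

def ask_model(model: str, question: str) -> str:
    """Hypothetical: query the named model and return its answer."""
    ...

def score(model: str) -> float:
    hits = sum(
        expected.lower() in ask_model(model, question).lower()
        for question, expected in QUESTIONS
    )
    return hits / len(QUESTIONS)
```

A score like this says almost nothing about vague prompts, creativity, or task design, which is the whole problem.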
Rise of the generated-content era
Love it or not, generated content is everywhere now. Can you tell whether this very content was made entirely by ChatGPT or by some idiot who defies Skynet itself? You likely can't, and that's okay.
Either way, the content has to be prompted (given an instruction) to do something; in this case, generate a blog post about ChatGPT. It's like being handed a task: with your knowledge and experience, what can you come up with?
To understand real-world context and stay up to date, ChatGPT and other generative AIs have to connect to the Internet, at least for the latest news, and they might even learn how to write a blog from content like this.
This content was not written by generative AI.
Let's say the model needs to learn how to write a blog. It combs through the lexicon of the wonderful Internet and finds content relevant to blogging. Sadly, that content was itself auto-generated, and it looked so real that the AI couldn't tell. So the generative AI learned from a dubious source of truth and treated it as 'truth'.
"Those who cannot remember the past are condemned to repeat it." – George Santayana, The Life of Reason, 1905
Innocently learning something wrong without ever knowing it was a mistake is a bad start. How can you improve if you can't even recognise what you got wrong, or why? How will you get better if the feedback is simply absent? The same goes for generative AI.
One reason behind this is the training data. When you train someone to do a job, you train them from a source of truth, which in this case is the Internet. And that data needs to be checked, cleaned, and polished before being fed to the model.
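A toy version of that "check, clean, polish" step might look like the sketch below: drop empty documents and anything a detector flags as machine-generated before it reaches the training set. `looks_machine_generated` is a hypothetical classifier; in practice such detectors are unreliable, which is exactly how synthetic text leaks into training data in the first place.

```python
# Toy pre-training data-cleaning step. `looks_machine_generated` is a
# hypothetical detector; real detectors are unreliable, which is how
# synthetic text slips into training corpora in the first place.

def looks_machine_generated(text: str) -> bool:
    """Hypothetical classifier for AI-generated text."""
    ...

def clean_corpus(documents: list[str]) -> list[str]:
    kept = []
    for doc in documents:
        doc = doc.strip()
        if not doc:
            continue                      # drop empty documents
        if looks_machine_generated(doc):
            continue                      # drop suspected synthetic text
        kept.append(doc)
    return kept
```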
The plague of knowing too much
We have discussed a few ways generative AI can get "dumber":
- It can be trained on a dataset that is biased or inaccurate
- It can be given too much or too little data
- It can become outdated
- It can be inaccurate or misleading
- It can be repetitive or unoriginal
But the one we really need to be aware of is called model collapse. Model collapse occurs when an AI model is trained on its own generated output rather than on original training data. The model becomes increasingly repetitive and less creative, because it learns that what it is already doing is good enough. Which, in this case, is not even close.
"One of the main authors of the paper, Ilia Shumailov, conveyed in an email to VentureBeat, the adverse consequences of accumulating errors within generated data. He explained that these mistakes gradually build up, eventually leading models that rely on such data to develop an increasingly distorted perception of reality. What came as a surprise was the speed at which model collapse occurs, as observed during our research. Models can swiftly lose significant portions of the original data from which they originally acquired knowledge."
Your part in (not) becoming dumb
Patterns, patterns, patterns. They are one of the distinctive marks a writer intentionally weaves into every post, and deep down, you find yourself drawn to them. However, in an era where content is often automatically generated, it's crucial to:
- Make it human. Talk to humans, create conversations like humans, make go-by-your-heart decisions like humans.
- Be aware that generative AI exists and that it is imperfect. Always review the content carefully and make sure it is accurate, original, and easy to understand.
- Use it like a tool. Sometimes it is there for you, and sometimes it isn't. Keep a low-tech backup: talk to a real person, use an older generative model, or go to a library. Embrace alternatives that complement your process.
Another thing, 'bias', can emerge from AI at any time. This is an issue every AI model owner is aware of and trying their best to avoid. Even humans can't be unbiased (think confirmation bias, anchoring bias, attribution bias), so how can something like generative AI be 100% neutral? And even if you're sure it is 100% fair, how can you be sure you did not miss a leaning response now or in the future?
Verdict
The technology exceeds my expectations, and ChatGPT has become a powerful tool for solving programming challenges. It feels like I already know what I need; it's just on the tip of my tongue.
If you are reading this: I want a programming rubber duck that can talk like ChatGPT. Sometimes I, as a software developer, need some love and understanding from stakeholders, and from my own ego. And to get that, I need a ChatGPT-powered programming rubber duck.
Remember: to overcome these generative AI challenges, use AI with a human touch. Carefully reviewing AI-generated content for accuracy, originality, and clarity is crucial.