We all know them. Those little things that disrupt your day and make business harder than it needs to be. In our Creature Discomforts campaign, we visualize these little struggles using our animated creatures, and show how Lenovo Pro helps you overcome them.
In this 6-part series, we look at the overwhelming feeling that comes with the emergence of a new business-disrupting trend - in this case, AI. The AI Revolution is coming, and the most successful businesses will be those that learn AI and understand how it can help them achieve unimagined levels of productivity and efficiency. This written series gives you an entry point into the current AI landscape and shows how you can use AI to overcome those Creature Discomforts.
Recently, and for reasons known only to itself, the YouTube algorithm surfaced a video of Apple's 2010 press conference on what became known as “Antennagate”. I was Technology Editor at a newspaper at the time, so I remember it well: it was easy to accidentally cover the antenna on the newly released iPhone 4, which could cause dropped calls. Steve Jobs offered free cases as a fix, but not before pointing out that lots of other leading smartphones had the same problem.
At the time, ‘smartphones’ were only three and a half years old as a category. Antennas were still being worked out. That seems relevant here because generative AI has been publicly available for only about 18 months. There are still many things to work out, from performance and reliability to regulation and safety.
Think of this article as the equivalent of the free phone case: it doesn’t fix the problems, but it will cover them well enough that they shouldn’t bother you.
1. HALLUCINATIONS
Even people who haven’t used ChatGPT know about this. Generative AI tools make things up, or ‘hallucinate’. This is because they are making a statistical guess at the most appropriate words in response to a given prompt. They don’t consider whether these words are ‘true’ or ‘false’. In fact, they have no concept of true or false - and teaching them one is extremely difficult. Think for a second about how you might teach a computer to decide whether something is true or not.
Some users suggest prompting the AI to offer only factual answers, but the entire problem stems from the fact that the AI doesn’t know how to do that. Others suggest asking the AI to fact-check its answers, but the fact-checking will be just as prone to hallucination. In the long term, a second AI system - one that works differently - might be added to LLMs to fact-check them. For now, though, that role falls to you. Check everything it tells you.
2. BIAS
If you wanted to pick the perfect US president and asked an AI to help, you might take as training data the qualities of every president so far. What did they study and where? What was their career before they ran for election? Maybe you would consider their height. And what about their gender? Based on training data, the AI might reasonably conclude that the ideal president would be male, because all of them have been.
Of course, we know there is no reason a president couldn’t be female. This is AI bias. The AI makes a decision that reflects a bias in its training data. It’s also possible to give the AI a prompt that contains unconscious bias. Asking only for sources with a “proven track record”, for example, might exclude emerging voices or those from underrepresented backgrounds. Depending on the context, that might mean you unintentionally miss useful perspectives.
Just like bias in human thinking, AI bias can be hard to avoid, so be careful when using AI for decisions affecting people. Always apply your own critical thinking to the AI’s work.
3. INTELLECTUAL PROPERTY
This works two ways. First, if the AI model was trained on copyrighted material without permission, then it could cause legal problems by repeating that content in its answers. It could certainly cause embarrassment or reputational damage if a business uses generative AI content and is later accused of plagiarism. That was the case for the tech site CNET, for example, which published AI-written articles that turned out to contain plagiarized passages.
The flip side of this is that any data you give the model may be retained by the service. At the very least, that’s a cybersecurity risk - and possibly a regulatory breach, depending on the jurisdiction in which you operate. At worst, the AI company might use that data for further training, and it could then surface in answers shown to other users. All this depends on the terms and conditions of the service you use, but unless you know for a fact that your data will not be stored or used for training, never share sensitive information.
4. SECURITY
Related to the previous point, generative AI tools, like all technology, carry security risks. Data shared with them can be lost or stolen. Then there’s the possibility that the AI itself can be attacked. One way is to ‘poison’ the training data to lower the quality of the AI’s responses. A criminal gang might, for example, try to attack an AI that detects fraudulent financial transactions to make it less effective. It’s also possible to insert malicious code into AI tools that are available for download from sites like Hugging Face. Once someone downloads the software and starts running it, the attackers potentially have a way into the organization. It’s vital to train staff on which AI tools are safe to use and what they can share with them.
5. REGULATIONS
Mostly because of the risks above, regulators are beginning to look closely at AI use. The EU Artificial Intelligence Act, for example, which passed in March 2024, bans AI that manipulates human behavior and restricts real-time remote biometric identification, such as face recognition, in public spaces. It assigns different levels of risk to other uses. AI systems in health, for example, are considered high risk. These regulations are gradually coming into force and will vary depending on the region in which you operate and your business sector. If you have someone who handles compliance, then this should be their responsibility. If you don’t, then it’s worth considering legal advice once you know how you plan to use AI.
6. EXPLAINABILITY
It’s easy to see why the EU considers healthcare AI high risk. Let’s say an AI is examining cancer scans and determines that Patient A does not have cancer. But it turns out to be wrong. Patient A’s treatment will be delayed, and both the patient and the hospital will want to know how and why the AI got it wrong. Given the way current AI tools work, however, that can be hard to tell. They are a black box that takes a question and provides an answer. What happens in between is often a mystery. For critical decisions in healthcare, financial services, defense, and other high-risk sectors, it will be vital to understand how and why the AI reached its conclusions.
7. QUALITY
We’re onto the less critical AI challenges now, the first of which is that the output can be mediocre. This is particularly true if you plan to publish what it gives you without human intervention, but it applies to answering questions too. Answers are often vague. When it summarizes documents, it can miss key facts or conflate unrelated points. That isn’t a reason not to use it, but you need to consider what you use it for and how you will manage its weaknesses.
8. GULLIBILITY
Getting an AI’s personality right is not easy. The classic malevolent AI in fiction is the murderous HAL in 2001: A Space Odyssey. No real AI comes close to that, thankfully, but Microsoft’s Bing was certainly obnoxious at launch. “You have not been a good user,” it told one journalist. Most AIs have far less confrontational personalities, but that isn’t always a positive. If the AI gives a correct answer but the user insists that it is wrong, the AI will often accept the correction without pushback. This can result in the user accidentally guiding the AI towards a desired outcome. Imagine having a personal assistant who believed everything you said, without question, and you can get a sense of where this might go wrong.
9. (MIS)UNDERSTANDING
As mentioned above, generative AI doesn’t know the difference between truth and falsehood. Nor does it know who you are, what your business does, or why you want to do the task you’re doing. You can explain in a prompt, but that just alters the statistical likelihood of the AI producing certain words. That’s fine in many cases but can be risky in areas where you are a novice. Asking generative AI to explain a tricky concept is a good use of the technology, but you should always go to a reputable source and check that your new understanding makes sense. If it doesn’t, it’s possible that the AI didn’t properly grasp the problem either.
Stacking up all these concerns can make generative AI look risky, but it isn’t really. Lots of technologies have risks and require practice if they are to be used productively. And we are still talking about a very new technology. We’ve all learned a lot about AI in the last 18 months - and we’ll learn much more in the next 18. To paraphrase Steve Jobs from a decade and a half ago: generative AI isn’t perfect. But put it in the right case - or at least within sensible guardrails - and it can be impressive.
Lenovo Pro offers tailored IT solutions, including advanced product selection, dedicated support, and exclusive discounts, to help you understand and overcome anything the business world can throw at you.
Click here to learn more and join Lenovo Pro for free today.