Is ChatGPT AI's iPhone Moment or just a new toy for nerds?
You can make AI your robust ally, or it will become your competitors'.
Happy Brief Wednesday/Thursday!
The era of humans working with AI is approaching, despite ongoing resistance in the form of legal actions, such as lawsuits accusing AI content generators of copyright infringement. If GPT-3's 2020 article "A robot wrote this entire article. Are you scared yet, human?" scared you, you would probably be petrified now by its successor, ChatGPT, an AI assistant trained by OpenAI.
Although addressing legal and ethical concerns related to AI is crucial, I want to share that I am having a good time with ChatGPT. I believe the advancement of AI and related technology will make our lives easier and happier in general, outweighing the risks that may arise from it.
At a glance:
Featured: Is ChatGPT AI's iPhone Moment or just a new toy for nerds?
One Question this Week
Is ChatGPT AI's iPhone Moment or just a new toy for nerds?
The topic has drawn wide public discussion in outlets such as the Wall Street Journal, Harvard Business Review, and Bloomberg, most of which called this AI's iPhone moment or tipping point. I was hesitant to take it up; I didn't want to write about it simply because it's popular. But ChatGPT has gradually become a critical part of my learning toolkit, which makes it worth this week's Brief.
The Impact of AI on Jobs: A Real-Life Example
Whenever AI has a breakthrough, it reminds me of a quote from Dr. Warner Slack, who was once asked whether doctors could be replaced by a computer.
He responded, "Any doctor who can be replaced by a computer deserves to be replaced by a computer."
His quote highlights the potential for technology to replace human jobs, and this is exactly what we see happening with ChatGPT:
On an ordinary working day, Annie came to Jackson for proofreading and translation support regarding her public presentation slides.
"Have you heard of ChatGPT?" Jackson asked. "No, what is it?" she replied. "OMG! You haven't heard of it? I'm not only going to give you a fish today, I'm going to teach you how to fish," he said. "What do you mean by fishing? I don't get it." She seemed confused.
"Wait a sec... you're going to love it." Then Jackson showed her the ChatGPT website, which he had been visiting rather too much. "Type in your text for proofreading, and of course remove all sensitive information first. And enter! See, you don't need me anymore. Now you can fish for yourself and get your fish for today and beyond!"
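Jackson's caveat about removing sensitive information is worth taking seriously. As a minimal sketch (the `redact_sensitive` helper and its patterns are my own illustration, not anything Jackson or OpenAI provides), a couple of regular expressions can catch the most obvious leaks before text is pasted into ChatGPT:

```python
import re

def redact_sensitive(text: str) -> str:
    """Mask common sensitive patterns before pasting text into ChatGPT.

    This only covers email addresses and phone-like numbers; a real
    policy would need a longer list (names, IDs, internal code names).
    """
    # Replace anything that looks like an email address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Replace runs of digits with separators that look like phone numbers.
    text = re.sub(r"\+?\d[\d\s()-]{7,}\d", "[PHONE]", text)
    return text

print(redact_sensitive("Contact Annie at annie@example.com or +1 555 123 4567."))
# → Contact Annie at [EMAIL] or [PHONE].
```

A checklist like this is crude, but it turns "remove all sensitive information" from a vague warning into a habit you can actually automate.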
The Capabilities of ChatGPT: A Personal Experience
The story is adapted from a real case, and I suspect thousands like it are playing out worldwide right now. Like many tech enthusiasts, I've been fascinated by ChatGPT's capabilities. For example, one of my plans for 2023 is to explore the history of the Roman Empire, and I learned that the Romans defended against the Germanic tribes by digging trenches. I wondered: why did ancient China build the Great Wall to defend against the Xiongnu, rather than trenches? If you google this question, the old-school way of finding answers online, you are unlikely to find one, because Google just sends back pages about "why China built the Great Wall." ChatGPT, on the other hand, gave me a plausible explanation within seconds: the Roman Empire faced Germanic tribes who fought mainly as infantry, whereas ancient China fought the Xiongnu, nomadic peoples with superb horsemanship on the battlefield, against whom trenches were far less effective than the Great Wall.
The Limitations of ChatGPT: Misinformation
ChatGPT is not perfect by any means. I still don't know for sure whether the Germanic tribes of the Roman era really lacked cavalry. Misinformation is clearly a serious issue for ChatGPT: from time to time it makes up responses instead of admitting that it lacks the information to answer my inquiries. For example, it once recommended a Solidity online course taught by Vitalik Buterin on Coursera. No such course exists. Examples of ChatGPT misinformation go on and on.
However, Wharton School professor Ethan Mollick argues in his HBR article that errors made by AI artists are more tolerable because, unlike occasional car accidents caused by AI failures, they pose little risk to society. Although AI artists' errors are far more harmless than misinformation, the comparison still applies to my case: the fictitious Vitalik Buterin course cost me nothing but the time I wasted looking for it.
But don't get me wrong: misinformation from ChatGPT and other AI can be dangerous, especially when people blindly rely on its responses to high-stakes questions. Even in lower-stakes situations, like learning a new language, learners may internalize unnoticed errors and pay a high cost in time and effort to correct them later.
ChatGPT as a Learning Tool: The Future of AI and Learning
I believe that AI, including ChatGPT, won't replace you, but people skilled at working with AI might. So let's make ChatGPT and AI technology our excellent learning partners: they can review your essays, provide detailed line-by-line explanations of sample code when you are learning to program, and give timely feedback on your questions or summary notes, an invaluable part of any learning process. I have no doubt that ChatGPT and similar AI are (or will become) essential supplements to my learning, and I hope to yours too.
A Side Note: Ethical and Legal Issues Surrounding AI
Finally, on a side note, the many ethical and legal issues surrounding AI deserve a Brief of their own. If interested, you can read the White House Office of Science and Technology Policy's "Blueprint for an AI Bill of Rights" or "Artificial Intelligence for a Better Future" by Professor Bernd Carsten Stahl.
See you (probably) next Wednesday!
One Question this Week
"As always, [the] machine's triumph was a human triumph," World Chess Champion Garry Kimovich Kasparov once said in his TED Talk. Do you agree or disagree?
Stay safe & sharp,
Jason J. Lai