Move over, humans
Everyone is talking about artificial intelligence (AI) chatbots and how they are going to change our lives.
AI chatbots can do almost everything. And understandably, they have become a source of both excitement and fear.
CNN earlier reported that ChatGPT, a powerful new AI chatbot tool, recently passed law exams in four courses at the University of Minnesota and another exam at the University of Pennsylvania’s Wharton School of Business.
Although the chatbot passed these tests, it did not always do so with flying colors.
The test results, CNN said, come as a growing number of schools and teachers have expressed concerns about the immediate impact of ChatGPT on students and their ability to cheat on assignments.
Since it was made available in late November last year, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that have fooled some scientists, while some CEOs have used it to write emails or do accounting work, according to the news report.
And more recently, AI company OpenAI announced that thanks to a new update, its ChatGPT chatbot is now smart enough not only to pass the bar exam but to score in the top 10 percent.
ChatGPT is an artificial intelligence trained on enormous troves of data, from webpages to scholarly texts, which it then uses with a complex predictive mechanism to generate human-sounding text, as explained by cnet.com. Its underlying model was trained with around 175 billion parameters, allowing it to respond to almost any prompt nearly instantly.
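At its core, that "predictive mechanism" amounts to repeatedly choosing a likely next word given the words so far. A minimal sketch of the idea in Python, using an invented toy probability table (not real model data) and simple greedy decoding:

```python
# Toy illustration of next-token prediction: given the last words,
# look up the probabilities of possible next words and pick the
# most likely one. The probabilities here are made up for illustration.
NEXT_WORD = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("cat", "sat"): {"down": 0.9, "up": 0.1},
}

def generate(prompt, steps=3):
    words = list(prompt)
    for _ in range(steps):
        # Use up to the last two words as context.
        context = tuple(words[-2:]) if len(words) >= 2 else tuple(words)
        choices = NEXT_WORD.get(context)
        if choices is None:
            break
        # Greedy decoding: append the highest-probability next word.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate(["the"]))  # → the cat sat down
```

A real model replaces the hand-written table with probabilities computed by a neural network over a vocabulary of tens of thousands of tokens, and usually samples from the distribution rather than always taking the top choice.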
Academics and test administrators have cautioned against the risks of students utilizing ChatGPT to submit plagiarized essays or cheat on exams, while businesses are developing a variety of methods to identify the usage of AI-based technologies in submitted works.
There is also a great deal of speculation about how it will impact a huge number of human job roles, from customer service to computer programming.
An article on forbes.com also cited some of the possibilities that a malicious party may have at their disposal using ChatGPT.
These include writing more official or proper-sounding scam and phishing emails; automating the creation of many such emails, each personalized to target different groups or even individuals; automating communication with scam victims; creating malware; and building language capabilities into the malware itself so that it could potentially read and understand the entire contents of a target’s computer system or email account to determine what is valuable and what should be stolen, to name a few.
On the other hand, AI also has potential for defense, the article pointed out. By analyzing the content of emails and text messages, ChatGPT can predict whether they are likely to be attempts to trick the user into providing personal or exploitable information. And because it can write computer code in a number of popular languages, it could potentially assist in creating software used to detect and eradicate viruses and other malware.
The article likewise mentioned other potential benefits. ChatGPT can spot vulnerabilities in existing code, could potentially be used to authenticate users by analyzing the way they speak, write and type, and can automatically create plain-language summaries of attacks and threats that have been detected or countered, or those an organization is most likely to fall victim to, according to forbes.com contributor Bernard Marr.
Marr emphasized that thanks to AI, we are entering a world where machines will replace some of the more routine thinking work that has to be done.
According to OpenAI CEO Sam Altman, as of last December, ChatGPT had already reached the one-million-user mark less than a week after its launch. OpenAI is a San Francisco-headquartered AI research lab co-founded by Elon Musk. Musk resigned from the board in 2018 but has remained a donor. In 2019, OpenAI LP received a $1 billion investment from Microsoft.
In an article, Musk was quoted as saying that “there is always some risk that in actually trying to advance friendly AI, we may create the thing we are concerned about”… but the best defense is “to empower as many people as possible to have AI.”
He said that AI was one of the biggest risks to the future of civilization, noting that “it’s both positive or negative and has great, great promise, great capability, but with that comes great danger.”
Even Altman himself has warned that the technology comes with some real dangers and that society must be very cautious.
He said that one thing he is particularly worried about is that these models could be used for large-scale disinformation and now that they’re getting better at writing computer code, they could be used for offensive cyberattacks. But he quickly added that the technology could also be the greatest humanity has yet developed.
Meanwhile, Musk has pointed out that there is no regulatory oversight of AI, which he considers a major problem, adding that he has been calling for AI safety regulation for over a decade.
There are now templates being developed for a regulatory framework of AI. But the speed of AI roll-out is far outpacing the preparation of a legal framework to regulate its investment, oversight and implementation, Michael Peregrine said in an article for forbes.com.
The New York Times has noted that by failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI.
In the meantime, all we can do is watch and wait.
For comments, email at [email protected]