Defending against deceptive advances

If people living in the year 2000 could see the technology that exists today, they would likely be astonished by the advancements of the past two decades: outsourcing math to calculators and various applications, spelling to spell-check, memory to the internet and the cloud, manual labor to automation and, to a certain extent, writing to ChatGPT. While some of these advancements may have been difficult to imagine at the time, they have since become integral parts of many people's daily lives.

What is ChatGPT?

As many of you may have heard or seen on social media, ChatGPT has become increasingly popular as an easily accessible, user-friendly application capable of engaging in conversations and generating contextually appropriate responses, albeit with some limitations. ChatGPT is an Artificial Intelligence (AI) language model developed by OpenAI, a company that has attracted billions of dollars in funding. Its ability to mimic human conversation and writing style is nothing short of impressive.

How much buzz is ChatGPT generating in the tech world?

It’s a big deal because ChatGPT is a potentially disruptive technology, carrying both immense promise and inherent risks.

“(The) recent advances in AI will surely usher in a period of hardship and economic pain for some whose jobs are directly impacted and who find it hard to adapt,” said Ajay Agrawal et al. of Harvard Business Review Press.

e-Commerce was disruptive in many ways, too. It provided a convenient online marketplace where goods could be bought and sold in new and efficient ways, and its extensive distribution network contributed to the decline of traditional retail stores and shopping malls.

“ChatGPT is scary good. We are not far from dangerously strong AI,” Elon Musk said in a tweet, to which Sam Altman, OpenAI’s chief, responded: “I agree on being close to dangerously strong AI in the sense of an AI that poses, e.g., a huge cybersecurity risk. And I think we could get to real AGI (Artificial General Intelligence) in the next decade, so we have to take the risk of that extremely seriously too.”

While ChatGPT can bring benefits as a potentially disruptive technology, it has also been associated with security threats, including increases in phishing, spear-phishing, spam, malware attacks, scams and fraud.

ChatGPT to revolutionize the playing field for phishing

According to Statista, a research company based in Germany, the Philippines experienced a significant increase in phishing attacks during the first half of 2022, exceeding the total recorded in all of 2021: over 1.8 million attacks were detected in that period, compared with 1.34 million in 2021. Moreover, during the fourth quarter of 2022, 51 percent of Filipino respondents who had experienced digital fraud attempts were targeted with phishing attacks. Notably, e-commerce shops, payment systems and local banks were among the most targeted entities in the country.

In the past, social engineering and phishing attacks were often characterized by known red flags, such as confusing requests or offers, urgent or high-priority messaging, misspelled names and poor grammar. These were considered typical signs of a phishing email, and most users could quickly identify them and delete or ignore the message. Recent advances in AI, however, have removed these clear markers. It is now increasingly difficult for regular users and technical experts alike to distinguish a legitimate email from a phishing campaign, leaving everyone susceptible to deceptive schemes aimed at collecting sensitive information.
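The legacy red flags described above can be sketched as a toy filter. The keyword lists below are illustrative assumptions, not a real detection ruleset, and AI-written phishing defeats exactly these surface-level checks, which is the point of the paragraph above:

```python
# Toy illustration of the "old" red flags: urgency cues and misspellings.
# Real mail filters are far more sophisticated; AI-generated phishing
# contains neither of these giveaways, so such heuristics no longer help.
URGENCY_WORDS = {"urgent", "immediately", "act now", "verify your account"}
COMMON_MISSPELLINGS = {"recieve", "acount", "pasword", "securty"}

def legacy_red_flags(email_text: str) -> list[str]:
    """Return the old-style warning signs found in an email body."""
    text = email_text.lower()
    flags = [f"urgency cue: '{w}'" for w in URGENCY_WORDS if w in text]
    flags += [f"misspelling: '{w}'" for w in COMMON_MISSPELLINGS if w in text]
    return flags

sample = "URGENT: verify your acount immediately or it will be closed."
print(legacy_red_flags(sample))
```

A well-written, AI-generated version of the same message would return an empty list, illustrating why keyword-style intuition is no longer enough.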

At the core of successful phishing campaigns lies the art of persuasive communication. In today’s world, cybercriminals need not rely solely on technical proficiency to be effective. They also manipulate human psychology, leverage persuasive language and use manipulative tactics to create a sense of urgency, fear and legitimacy.

Consequently, ChatGPT is becoming a one-stop shop for cybercriminals seeking the communication and persuasion skills they lack. The AI-powered chatbot enables even inexperienced threat actors to elevate their game by producing convincing emails that are indistinguishable from legitimate ones. Where misspellings and poor grammar once raised immediate doubts, ChatGPT eliminates such giveaways. Although the chatbot has safeguards to prevent misuse, threat actors can easily work around them.

Navigating other security threats of ChatGPT

Recently, cybersecurity professionals have observed instances where users managed to exploit ChatGPT, bypassing its ethical filters to generate code for malicious software. This practice, known as jailbreaking, enables users to manipulate ChatGPT’s responses for potentially unethical purposes. To our slight relief, however, the current threat posed by AI-written malware remains minimal, as the malicious code generated thus far has been rudimentary and riddled with flaws.

Have you ever received an email seemingly from someone you knew well, such as your boss, a colleague or even an estranged friend, with contents that sound urgent and insist on immediate action? You checked all the indicators and believed it was legitimate, only to discover that you had unwittingly opened suspicious links, potentially exposing yourself or your company’s network to significant harm. Congratulations, you fell into a spear-phishing trap!

Shockingly, ChatGPT can be exploited to assume various roles and personas in scams personalized to target a specific person, a practice referred to as spear-phishing. Whether it’s the notorious Nigerian Prince scheme or a romance scam that plays on emotions, ChatGPT can be prompted to adopt convincing personas, further enhancing the effectiveness of these fraudulent activities.

Bolster your digital fortifications

ChatGPT undoubtedly ushers in a new era in the digital world, particularly in social engineering. We may have to say goodbye to much of what we learned about spotting phishing emails, as this chatbot enables a new level of sophistication in deceptive communications. More than ever, we should be poised to evolve alongside these new capabilities and adapt to the security threats they pose. Stay ahead of the curve as we navigate this ever-changing digital and cybersecurity landscape.

• Phishing Campaigns: Be more vigilant than ever. Despite the absence of traditional red flags, be more cautious and skeptical of all incoming emails, especially those requesting sensitive information or urging immediate action.

• AI-written Malware: Keep your devices and software up to date with the latest security patches. Install reputable anti-malware software and conduct regular scans to detect and eliminate potential threats. Remember, prevention is always better than cure.

• Spear-Phishing Traps: Be more cautious than ever because AI can impersonate anyone. Treat unexpected emails or messages with skepticism, even if they appear to come from familiar sources and seem tailored to your interests or address you by name.

• Fraud: Keep an eye out for unsolicited calls, texts, emails or direct messages asking for personal information, financial investment or assistance. Always verify the legitimacy of any request through official channels, and never share sensitive data without confirming the recipient’s identity.

• Romance Scams: Always keep in mind that scammers prey on people’s emotions.

Additionally, strengthen your digital hygiene by maintaining robust passwords for each online account, enabling two-factor authentication whenever possible and utilizing a virtual private network (VPN) when connecting to public Wi-Fi networks.
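As a small illustration of the "robust password" advice above, a cryptographically secure generator can be written in a few lines of Python. The length and character set here are illustrative choices, not an official standard; a reputable password manager does the same job with less effort:

```python
import secrets
import string

# Characters drawn from letters, digits and punctuation; `secrets` uses a
# cryptographically secure random source, unlike the `random` module.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a randomly generated password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

Because each character is chosen independently from a large alphabet, longer passwords become exponentially harder to guess, which is why length matters as much as complexity.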

Timothy John C. Paz is an advisory manager from KPMG in the Philippines (R.G. Manabat & Co.), a Philippine partnership and a member firm of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. The firm has been recognized as a Tier 1 in transfer pricing practice and in general corporate tax practice by the International Tax Review. For more information, you may reach out to advisory manager Timothy John C. Paz or technology consulting partner Jallain S. Manrique through ph-kpmgmla@kpmg.com, social media or visit www.home.kpmg/ph.

This article is for general information purposes only and should not be considered as professional advice to a specific issue or entity. The views and opinions expressed herein are those of the author and do not necessarily represent KPMG International or KPMG in the Philippines.
