Staying ahead of AI-powered threats enables organizations to harness digital innovations

By Ramprakash Ramamoorthy, director of research at ManageEngine

Businesses across the world, including in the Philippines, are increasingly turning to artificial intelligence (AI) to improve their processes. These technologies unlock new capabilities, such as conversational assistants, forecasting engines, and analytics, that allow both executives and employees to deliver faster, more efficient services.

These advantages are why generative AI alone has the potential to contribute up to US$79.3 billion, the equivalent of 20% of its 2022 GDP, to the Philippine economy. However, the same technology can be exploited by cyber-attackers to create new forms of threat designed to deceive their targets or inflict maximum damage on systems. To address this worrisome development, organizations first need to understand how AI-powered threats work.

Phishing messages crafted by generative AI

In the past, users could spot phishing attempts due to their obvious grammatical errors, bizarre sentence structure, and misspellings. Now, with the language generation capabilities of generative AI platforms, attackers can eliminate these mistakes and make phishing messages more believable. As a result, users have a much harder time distinguishing real messages from fake ones, making the risk of falling victim to scams greater than ever.

AI-generated malware

Even though ChatGPT is equipped with safeguards designed to block the creation of malware, skilled cyber-attackers are finding ways around them. For example, attackers can ask ChatGPT to generate separate snippets of code for specific malware functions, then refine those snippets to boost their efficacy and assemble them into a single file.

For amateur hackers, ChatGPT is also a convenient tool for building or improving simple attack programs. Even though administrators are constantly scrambling to strengthen their safeguards, these risks can never be eliminated entirely. Therefore, organizations need to employ solutions that can mitigate these types of attacks as quickly as possible.

Malicious deepfakes

When deepfake tools first entered the tech landscape, only knowledgeable users could use them effectively. However, as AI tools have become easier to access, people of all skill levels can now create their own deepfake videos using programs such as WOMBO.ai and Avatarify. For now, the results are not as intricate as those of professional-grade software, and they contain telltale flaws that users can easily spot.

As deepfake technology continues to evolve, it will soon be possible for cyber-attackers to generate photorealistic results that are nearly indistinguishable from genuine footage. This, in turn, will allow scammers to trick their victims into doing their bidding, whether that means transferring funds on the false promise of a lucrative tech investment or granting unrestricted access to sensitive data.

Voice cloning scams

Users may have a harder time identifying deepfake voices than deepfake videos. With a deepfake voice, attackers can pose as a company director in a voice phishing (vishing) scam to trick employees into providing their login credentials and executing tasks that open vulnerabilities within the company’s systems.

More worryingly, a viral deepfake recording gives attackers the power to shape public opinion about certain public figures or provoke an immediate public reaction, thereby affecting the political and social landscape as a whole.

AI has the power to change the way we live, work, and teach, but in the wrong hands, it can also be used to deceive users and create malicious programs. To counter these threats before they become commonplace, organizations need to adopt defensive tools such as threat detection systems and multi-factor authentication. Organizations also need to equip employees with cybersecurity knowledge so that they know how to detect and respond to potential attacks. By taking these actions, organizations will be better prepared to withstand the ever-evolving threat landscape.
