
Generative AI is making headlines as it reshapes the tech landscape, fueling new products and tools that streamline processes, enhance creativity and help solve complex problems. As with most new technology, however, it’s only a matter of time before both the good and the bad sides of generative AI emerge.

Generative AI is not new, but rapid recent advances in the technology are leading to big changes. Some of the companies leading innovation in the generative AI space include Microsoft and Meta.

The world of work is also being transformed by generative AI, which will touch everything from customer service to research and development. Our daily lives will change too as creative uses of generative AI spread through culture and society, whether in the arts, entertainment or sports.

The rapidly evolving generative AI landscape also means more opportunity for cybercriminals. According to a 2023 survey from Abnormal Security, 98% of security leaders worry about the risks of generative AI, yet many are unprepared to fight them.

“From phishing emails so real that even the experts will have trouble telling fact from fiction to deepfakes that could impact everything including the future of our children, and beyond, it has never been more important for people to be educated about the threat landscape,” said Steve Grobman, chief technology officer of McAfee, in the company’s 2024 cybersecurity predictions.

One thing is certain: the rapid development of generative AI will fuel a global rise in ransomware and phishing attacks. Since the fourth quarter of 2022, there has been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular, according to a new report by cybersecurity firm SlashNext. On average, 31,000 phishing attacks were sent every day, according to the research.

Here are a couple of examples of how generative AI is being used to commit cybercrimes.

General Phishing Attacks

Generative AI is being used to create increasingly convincing phishing lures. Traditional phishing attacks were riddled with spelling errors and grammatical mistakes, but sophisticated large language models (LLMs) can glean real-time information from news outlets and official corporate websites and incorporate it into phishing emails, making them more believable and authentic. AI chatbots can also create and spread phishing campaigns at an accelerated rate.

According to a report from the U.K.’s National Cyber Security Centre, while ordinary threat actors can use generative AI to gain access to passwords during a phishing attack, it will take advanced threat actors to use generative AI for malware. To create malware that can evade today’s security filters, a generative AI model would need to be trained on large amounts of high-quality exploit data. The only groups likely to have access to that data today are nation-state actors, a scenario the report categorized as a “realistic possibility.”

Vishing Attacks

Vishing, short for voice phishing, uses convincing audio such as a phone call, voice message or voicemail to lure victims. Targets are duped by a false sense of urgency and are more likely to share sensitive information that they would otherwise never share with a stranger. Sometimes hackers use generative AI to clone the voice of a trusted contact and create deepfake audio. Imagine, for example, an employee receiving a voice message from someone who sounds exactly like their boss, requesting an urgent bank transfer.

On the flip side, defenders can use generative AI as well. It can help find patterns that speed up the detection and triage of attacks, and quickly flag malicious emails or phishing campaigns.
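To make the defensive idea concrete, here is a minimal sketch of the kind of signal-based email triage such tools automate. Everything in it is hypothetical for illustration: the keyword lists, weights and threshold are invented, and production systems use trained models over far richer features rather than hand-coded rules.

```python
# Hypothetical heuristic phishing scorer, for illustration only.
# The terms, weights and threshold below are invented examples of the
# signals an AI-assisted triage pipeline might weigh.
URGENCY_TERMS = {"urgent", "immediately", "verify", "suspended", "act now"}
CREDENTIAL_TERMS = {"password", "login", "bank transfer", "wire", "ssn"}

def phishing_score(subject: str, body: str, sender_domain: str,
                   link_domains: list[str]) -> float:
    """Return a score in [0, 1]; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Urgency language is a classic social-engineering cue.
    if any(term in text for term in URGENCY_TERMS):
        score += 0.35
    # Requests for credentials or payments raise the risk further.
    if any(term in text for term in CREDENTIAL_TERMS):
        score += 0.35
    # Links pointing anywhere other than the sender's domain are suspicious.
    if any(domain != sender_domain for domain in link_domains):
        score += 0.3
    return min(score, 1.0)

def triage(subject: str, body: str, sender_domain: str,
           link_domains: list[str], threshold: float = 0.6) -> str:
    """Quarantine messages whose score meets the (assumed) threshold."""
    score = phishing_score(subject, body, sender_domain, link_domains)
    return "quarantine" if score >= threshold else "deliver"
```

For example, a message with the subject "Urgent: verify your password" containing a link to a domain that does not match the sender's would score high and be quarantined, while a routine internal note linking only to the sender's own domain would be delivered.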

While no solution can catch every AI-generated attack, companies need to bolster their existing security infrastructure and look to adopt a zero-trust strategy.
