Cybersecurity guru Mikko Hyppönen’s 5 biggest AI threats for 2024

Mikko Hyppönen has spent decades on the frontlines of the fight against malware. The 54-year-old has vanquished some of the world’s most destructive computer worms, tracked down the creators of the first-ever PC virus, and sold his own software since he was a teenager in Helsinki.

In the intervening years, he’s earned Vanity Fair profiles, spots on Foreign Policy’s Top 100 Global Thinkers list, and the role of Chief Research Officer at WithSecure — the largest cybersecurity company in Finland.

The ponytailed Finn is also the curator of the online Malware Museum. Yet all the history in his archives could be overshadowed by the new era in tech: the age of artificial intelligence.

“AI changes everything,” Hyppönen tells TNW on a video call. “The AI revolution is going to be bigger than the internet revolution.”

As a self-described optimist, the hacker hunter expects the revolution to leave a positive impact. But he’s also worried about the cyber threats it will unleash. At the dawn of 2024, Hyppönen revealed his five most pressing concerns for the year to come. They come in no particular order — although there is one that’s causing the most sleepless nights.

Researchers have long described deepfakes as the most alarming use of AI for crime, but the synthetic media still hasn’t fulfilled their predictions. Not yet, anyway.

In recent months, however, their fears have started to materialise. Deepfake fraud attempts were up 3,000% in 2023, according to research from Onfido, an ID verification unicorn based in London.

In the world of information warfare, fabricated videos are also advancing. The crude deepfakes of Ukrainian President Volodymyr Zelenskyy from the early days of Russia’s full-scale invasion have lately been superseded by sophisticated media manipulations.

Deepfakes are also now emerging in simple cons. The most notable example was discovered in October, when a video appeared on TikTok that claimed to show MrBeast offering new iPhones for just $2.

 

Still, financial scams that harness convincing deepfakes remain rare. Hyppönen has only seen three so far — but as the technology becomes more refined, accessible, and affordable, he expects that number to grow quickly.

“It’s not happening in massive scale just yet, but it’s going to be a problem in a very short time,” Hyppönen says.

To reduce the risk, he suggests an old-fashioned defence: safe words. 

Picture a video call with colleagues or family members. If someone makes a sensitive request, such as a cash transfer or a confidential document, you would ask for the safe word before complying.

“Right now, it sounds a little bit ridiculous, but we should be doing it nevertheless,” Hyppönen advises. “Setting up a safe word right now is a very cheap insurance against when this starts happening in large scale. That’s what we should be taking away right now for 2024.”

Despite resembling deepfakes in name, deep scams don’t necessarily involve manipulated media. In their case, the “deep” refers to the massive scale of the scam. This is reached through automation, which can expand the pool of targets from a handful to a practically endless supply.

The techniques can turbocharge all manner of scams. Investment scams, phishing scams, property scams, ticket scams, romance scams…  wherever there’s manual work, there’s room for automation.

Remember the Tinder Swindler? The conman stole an estimated $10 million from women he met online. Just imagine if he had been equipped with large language models (LLMs) to disseminate his lies, image generators to add apparent photographic evidence, and language converters to translate his messages. The pool of potential victims would be enormous.

“You could be scamming 10,000 victims at the same time instead of three or four,” Hyppönen says.

Airbnb scammers could also reap the benefits. Currently, they typically use stolen images from real listings to convince holidaymakers to make a booking. It’s a laborious process that can be foiled with a reverse image search. With GenAI, those barriers no longer exist.

“With Stable Diffusion, DALL-E, and Midjourney you can just generate unlimited amounts of completely plausible Airbnbs which no one will be able to find.”

AI is already writing malware. Hyppönen’s team has discovered three worms that use LLMs to rewrite their code every time the malware replicates. None have been found in real networks yet, but they’ve been published on GitHub — and they work.

Using an OpenAI API, the worms harness GPT to generate different code for every target they infect. That makes them difficult to detect. OpenAI can, however, blacklist the behaviour of the malware.

“This is doable with the most powerful code-writing generative AI systems because they are closed source,” Hyppönen says.

“If you could download the whole large language model, then you could run it locally or on your own server. They couldn’t blacklist you anymore. This is the benefit of closed-source generative AI systems.”

The benefit also applies to image generator algorithms. Offer open access to the code and watch your restrictions on violence, porn, and deception get dismantled.

With that in mind, it’s unsurprising that OpenAI is more closed than its name suggests. Well, that and all the income they would lose to copycat developers, of course.
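To see why the per-infection rewriting described above is so hard to catch, consider signature-based detection, which flags a file only when its hash matches a known sample. The snippet below is a minimal, invented illustration of that weakness, not code from the worms Hyppönen describes; the two fragments and their names are made up, but they compute the same thing while hashing differently.

```python
import hashlib

# Two functionally identical snippets, the kind of trivial rewrite an LLM can
# produce on every replication. Both sum a list of numbers.
variant_a = "def total(xs):\n    return sum(xs)\n"
variant_b = (
    "def total(values):\n"
    "    result = 0\n"
    "    for v in values:\n"
    "        result += v\n"
    "    return result\n"
)

# A classic signature database: hashes of known-bad files.
signature_db = {hashlib.sha256(variant_a.encode()).hexdigest()}

def matches_known_signature(code: str) -> bool:
    """Hash-based detection: flag code only if its exact bytes are already known."""
    return hashlib.sha256(code.encode()).hexdigest() in signature_db

print(matches_known_signature(variant_a))  # True: the original sample is caught
print(matches_known_signature(variant_b))  # False: the rewritten variant slips through
```

Behavioural detection, or blacklisting at the API provider as Hyppönen notes, sidesteps the problem by looking at what the code does rather than what its bytes happen to be.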

Another emerging concern involves zero-day exploits, which are discovered by attackers before developers have created a solution to the problem. AI can detect these threats — but it can also create them.

“It’s great when you can use an AI assistant to find zero-days in your code so you can fix them,” Hyppönen says. “And it’s awful when someone else is using AI to find zero-days in your code so they can exploit you. We’re not exactly there yet, but I believe that this will be a reality — and probably a reality in the shorter term.”
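On the defensive side, the workflow Hyppönen praises is easy to sketch: feed your own source files to an LLM and ask it to point out exploitable bugs. The example below is a minimal illustration under assumptions of mine, not a tool from WithSecure or F-Secure; the file path, prompt, and model name are placeholders, and it uses the OpenAI Python SDK simply because the article mentions the OpenAI API.

```python
# Minimal sketch: ask an LLM to review one of your own source files for
# vulnerabilities. Illustrative only; prompt, model name, and path are assumptions.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_file_for_vulnerabilities(path: str) -> str:
    """Send a source file to the model and return its security review."""
    source = Path(path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security reviewer. List any injection, memory-safety, "
                    "or privilege-escalation issues in the code, with line references "
                    "and suggested fixes."
                ),
            },
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_file_for_vulnerabilities("app/handlers.py"))  # hypothetical file
```

The uncomfortable part, as Hyppönen points out, is that an attacker can point the very same loop at someone else’s code.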

A student working at F-Secure has already demonstrated the threat. In a thesis assignment, they were given regular user rights to access the command line on a Windows 11 computer. The student then fully automated the process of scanning for vulnerabilities to become the local admin. F-Secure decided to classify the thesis.

“We didn’t think it was responsible to publish the research,” Hyppönen says. “It was too good.”

F-Secure has baked automation into its defences for decades. That gives the company an edge over attackers, who still largely rely on manual operations. For criminals, there’s a clear way to close the gap: fully automated malware campaigns.

“That would turn the game into good AI versus bad AI,” Hyppönen says.

That game is set to start soon. When it does, the results could be alarming. So alarming that Hyppönen ranks fully automated malware as the number one security threat for 2024. Yet lurking around the corner is an even bigger threat.

Hyppönen has a noted hypothesis about IoT security. Known as Hyppönen Law, the theory states that whenever an appliance is described as “smart,” it’s vulnerable. If that law applies to superintelligent machines, we could get into some serious trouble.

Hyppönen expects to witness the impact.

“I think we will become the second most intelligent being on the planet during my lifetime,” he says. “I don’t think it’s going to happen in 2024. But I think it’s going to happen during my lifetime.”

That would add urgency to fears about artificial general intelligence. To maintain human control of AGI, Hyppönen advocates for strong alignment with our goals and needs.

“The things we are building must have an understanding of humanity and share its long-term interests with humans…  The upside is huge — bigger than anything ever — but the downside is also bigger than anything ever.”
