Last year I saw a video on cybersecurity trends, they talked about artificial intelligence and phishing. Most of us know the concept of phishing (bad) but how can we easily explain it to others to give them awareness?
Phishing is a type of cyber attack where attackers trick individuals into revealing sensitive information, such as passwords and login credentials, usually by impersonating a trustworthy entity.
AI, ChatGPT and LLM – Artificial Intelligence and deepfake images, videos and MP3
Ignited by the release of ChatGPT in late 2022, artificial intelligence (AI) has captured the world’s interest and has the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed in a safe and responsible way, especially when the pace of development is high, and the potential risks are still unknown.
As with any emerging technology, there’s always concern around what this means for security. The NCSC want everyone to benefit from the full potential of AI. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems.
Prompt injection attacks are one of the most widely reported weaknesses in LLMs.
This is when an attacker creates an input designed to make the model behave in an unintended way. This could involve causing it to generate offensive content, or reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input.
AI deepfake for phishing refers to the use of advanced artificial intelligence technologies to create convincing fake audio, video, or text content for the purpose of phishing. AI can easily product realistic videos or audio recordings that mimic real people, including someone you know personally, or a celebrity. These are used to create a sense of trust or urgency, like asking for money, or that your online account has been accessed and you need to verify your identity.
This is going to be very more important in the upcoming years. Thinking of the future, I believe that the future will look something like the past. One of the things we’ll see in the future are more AI based threats. What we’re going to see, though, also, is that change is the only constant.
So that means things will be similar to the past, but there will also be new things that we’re going to take a look at. One of the new things that I think is on the positive side is that we’re going to see a move away from passwords toward passkeys.
There’s a new standard called Fido that allows you to not have to send a password, but in fact, you do something that is simpler, easier to use and more secure. We don’t normally get to do both of those at the same time, and we’re going to need it. And what’s the reason for that? Well, because AI, as I mentioned, is going to be an increasing threat factor for us.
AI-based phishing emails are going to become more and more common, I expect, because they can generate what is very convincing emails to get people to try to log in or share their credentials in ways that they shouldn’t. And this is a very efficient way of doing it.
However, if you don’t have a password in the first place to send, if you only have something that is a secret that stays on your system, then there’s no way for someone to fish that out of you. So this is going to be a good thing to try to help against that.
Now, there are other things that we can take a look at that also in the AI space, generative AI, I think we’re going to see an increased use of deepfakes. These are things where we simulate the voice, the image, the likeness of an individual. And in fact, deepfake technology has become so good and it is so prevalent.
Mobile phone and generative AI in 2024
In fact, if you have a mobile phone, it’s probably already built into your operating system. In most cases. You may not know about it, but it’s there. So you could use this kind of technology to fake someone out, have them believe something that’s not true.
For instance, have someone call a relative and say, I need money. It sounds like it’s your voice. So they send the money.
So we’re going to need to do more in terms of educating people about deepfakes and the threat in that space, because I think we’re going to see more of it.
In early 2024 I saw on X (formerly Twitter) that a British social media user had given away hundreds of pounds to a fake company that was recommended by a deepfake video of Martin Lewis from Money Saving Expert.
And by the way, if you think deepfake detection is going to be a good way to go, I’m going to ask you to think again about that. Deepfake technology will always keep getting better and it will eventually be to the point where I don’t think detection is going to work. In many cases, we’ve already seen this happen.
So the focus needs to be not on detecting the deepfake with some sort of technology, but building security mechanisms around it so that we’re not reliant on the information that’s in the deepfake itself.
Other things that we’re going to take a look at would be a threat that comes to us from generative AI and that’s hallucinations. And by the way, you didn’t think I was going to actually write that word out when I have a magic board that can autocomplete. That’s what generative AI does, right? So I’m leveraging that. Hallucinations.
We’re going to be more and more dependent upon generative AI, large language models and chatbots to give us information. The problem is, some of the information they give us isn’t always right. And we call those hallucinations, and we’re going to make decisions based upon that that could cause security threats to us.
AI Cybersecurity – What is RAG Technology? Retrieval, Augmented Generation
So my hope is that there will be other technologies, things like retrieval, augmented generation, or what we call rag technology that will help reinforce and make this system better and more accurate. Other things that we can do to tune the models and train them better so that they don’t hallucinate nearly as much going forward.
And then finally, I’m going to say something. I want to leave you with a positive in terms of a look at the future. And that is there’s this symbiotic relationship between AI and cybersecurity, and that is we’re going to use AI to do a better job of cybersecurity.
In fact, there’s a lot of things that we can do in this space to leverage generative AI in order to better think about the way someone would attack us. Also summarise cases and things of that sort. So I think we’re going to be able to do a better job with cybersecurity by leveraging AI.
By the same token, we’re going to need to use our cybersecurity skills in order to secure this AI so that it can be trustworthy, so that we can, in fact, believe that the information it gives us is true. Okay, that’s the future. And it’s not a big surprise that the future is very heavy.
However, there’s a lot of existing threats that have continued to persist and will continue to persist as we move into the future.
Let’s take a quick look at the scorecard from last year’s predictions and see which ones of those actually came true and which ones carry forward. I mentioned data breach last year when I did the video, and in fact, it turns out that the cost of a data breach has continued to increase. In fact, now we’re on the order of four and a half million pounds on average worldwide. And in the USA that number is almost twice as high.
So that one I’m going to say yeah, came true ransomware. In fact, we’ve continued to see ransomware persist. The the overall numbers are a little bit down but the amount of time it takes to run one of these attacks has changed dramatically.
This according to the X-Force Threat Intelligence Index, which says that back in 2019, we were looking at 60 days on average to deploy one of these. Now we’re down to about four days. So this is kind of a mixed bag. You know, this is it’s sort of true. Sort of not true. But ransomware is going to continue to be a threat and it’s a faster threat than it used to be.
Multifactor authentication. I don’t know about you, but I’m definitely seeing more websites that are offering this as an alternative and I’m taking advantage and you should as well. I think we’ll continue to see a lot more of that as we go forward.
Internet of Things (IoT), threats. Yes. In fact, we’ve seen there was one study that came out that said there was in fact, a fourth percent increase in attacks over time. I already talked about AI That one’s only going to get bigger, as we would guess. And then quantum computing. I talked about that one last year, and in particular that quantum systems are going to one day be able to crack our cryptography.
They haven’t effectively done that yet, but we’re one year closer to it. So this is one of those you can say, well, it’s it’s sort of true. We’re definitely closer to the point when that’s going to become a real threat to us. Not here exactly yet.
One bit of good news. I can report, though, that I was partly right and partly wrong on, and that is the skills gap. So the skills gap actually moved from what was 770,000 open positions in the cybersecurity space to now, according to Cyber Secure. We’re down to about 570,000, so that’s an improvement. I predicted that we would still have a skills gap, and we do, but it actually has has gone down a little bit.