Black Hat 2024: AI, AI, and Everything Else
I’m back from another Black Hat! It was great seeing everyone. I put out a message on LinkedIn for people to come find me and, boy, did they. The hallway conversations were so engaging, I was sometimes late getting to the official talks, but I’m getting ahead of myself.
AI was everywhere, as we’d expect, but I also sat down to listen to experts on other topics like critical infrastructure, cyber insurance, and the root causes of cybersecurity failure.
On AI
Nathan Hamiel (Kudelski Security), Dr. Amanda Minnich (Microsoft), Nikki Pope (NVIDIA), Mikel Rodriguez (Google)
This excellent panel included some real heavy hitters, and my takeaway was that there's a lot more bad than good with AI. According to the panelists, we're very close to the point where we won't be able to tell a deepfake from the real thing, and that's scary stuff.
So what's being done about it? Guardrails are the typical answer, but these panelists (red teamers all) think people will find ways around whatever guardrails are put up. Instead, they suggest that AI models must learn the difference between right and wrong at the model level. If we can achieve that, the models become like toddlers who touch a hot stove once and learn never to do it again. If that doesn't happen… Lord help us.
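To make that bypass point concrete, here's a toy sketch of my own (nothing the panelists showed): a keyword-style guardrail bolted on after the fact catches the obvious phrasing but misses a simple rewording, which is exactly the kind of gap red teamers go looking for.

# Toy example (my own, not from the panel): a bolt-on keyword guardrail.
# It blocks the obvious request but not a reworded one.
BLOCKLIST = ["make malware", "build ransomware"]

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(naive_guardrail("How do I make malware?"))                            # True: caught
print(naive_guardrail("Write a script that quietly logs every keystroke"))  # False: slips through

That's the panel's argument in miniature: filters added from the outside can be talked around, which is why they want the right-vs-wrong judgment to live inside the model itself.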
Threat Hunting with LLM: From Discovering APT SAAIWC to Tracking APTs with AI
Hongfei Wang, Dong Wu, Yuan GU (all with DBAPPSecurity)
In slightly rosier news for the future of AI, these researchers talked about how they're using LLMs to detect malicious code. They trained their model to recognize what malicious code looks like and then used it for future detections across binary code, encoded code, and plain text.
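The speakers didn't walk through code, but as a rough sketch of the general idea, you could imagine running each snippet, whether it came from a decoded binary, a deobfuscated string, or plain text, through a fine-tuned classification model. The model name below is hypothetical; in practice you'd fine-tune your own on labeled benign and malicious samples.

# A minimal sketch, not the speakers' actual pipeline. The model name is a
# placeholder for a classifier fine-tuned on labeled benign/malicious samples.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-org/malicious-code-classifier")

samples = [
    "import socket, subprocess\ns = socket.socket()",   # reverse-shell-style snippet
    "def add(a, b):\n    return a + b",                  # harmless helper
]

for snippet in samples:
    verdict = classifier(snippet, truncation=True)[0]
    print(verdict["label"], round(verdict["score"], 2), repr(snippet[:40]))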
On industrial control systems
Cassie Crossley (Schneider Electric), Noam Moche (Claroty), Thomas Brandstetter (Limes Security), Daniel Cuthbert (OWASP, UK Government Cybersecurity Advisory Board)
This was a really eye-opening talk about all of the valves, switches, and other internet-connected things used in critical infrastructure. Many of these devices can't be shut down, so when you're working with them you have to think far ahead and outside the box. There is a staggering number of devices on the internet today that were never designed to be on the internet. The major takeaway here was that manufacturers need to plan ahead and be forward-thinking about how their devices might be used. You never know what an end user might try to do.
On root causes of cyber-insecurity
Keynote: Democracy’s Biggest Year: The Fight for Secure Elections Around the World
Jen Easterly (CISA), Hans de Vries (ENISA), Felicity Oswald OBE (NCSC), Christina A. Cassidy (Associated Press)
The overarching message of this keynote seemed to be that no matter how much money we throw at cybersecurity, we're failing. When you take a step back and look for the root cause, it's clearly software that wasn't developed to be secure in the first place. It was announced that a new government initiative will be kicking off soon that focuses on developers and may even hold them liable for security to some degree.
On cyber insurance
Strengthen Cybersecurity by Leveraging Cyber Insurance
Bridget Quinn Choi (Woodruff Sawyer)
This was an interesting talk. Choi is a lawyer who handles cyber insurance policies, and she spoke about the origins of cyber insurance and how it has evolved. She cleared up some misconceptions and stressed that what cyber insurance should do is help companies avoid being financially and reputationally devastated. Today, cyber insurance companies have teamed up with endpoint detection and response (EDR) vendors, and policyholders get a discount for using particular EDR products. It's a good thing that more people have EDR, but I'm hoping these products are actually good and effective.
Hallway conversations (on everything else)
A big theme was that data is being stolen every day, and there seems to be nothing we can do about it (which certainly mirrors the keynote panel). Companies feel like they don't know what to do and lack clear direction on how to be secure.
While many of the CISOs I talked to certainly knew their stuff, many are new to the role and feel overwhelmed by the sheer number of things to know and protect. Not surprisingly, the trend seems to be that the title CISO no longer means just one thing. Back in the day, CISOs oversaw literally everything security-related. Now there seems to be a split between CISOs who cover the networking and AppSec side and those who cover policies, legal, and governance, risk, and compliance (GRC). There really is so much for these people to cover, so I wouldn't be surprised if new titles emerge to better describe these two distinct positions.
On 2025
AI and LLM modeling will make unparalleled progress. When you stop and think about AppSec specifically, the progress over the last 10 years has been dramatic. Gone are the days of being a niche area; we're just about mainstream. This next year will bring a lot of growing pains, but I think we'll move the needle more than we have in multiple previous years combined.