In this three-part series, Blumira VP of Operations Patrick Garrity interviews Dr. Chase Cunningham, information security analyst, advisor and author of Cyber Warfare: Truth, Tactics and Strategies – strategic concepts and truths to help you and your organization survive on the battleground of cyber warfare.

His book provides insights into the true history of cyber warfare, and the strategies, tactics, and cybersecurity tools that can be used to better defend yourself and your organization against cyber threats.

Some of the key features include:

  • Define and determine a cyber-defence strategy based on current and past real-life examples
  • Understand how future technologies will impact cyber warfare campaigns and society
  • Future-ready yourself and your business against any cyber threat

Listen to the full interview here:

In part 2, they discuss:

  • The impact of using AI (artificial intelligence) and ML (machine learning) systems to create deep fakes for criminal activity
  • How Facebook or Twitter account compromise and deep fakes could lead to negative social influence
  • The impact of spreading disinformation, and what approach to take to stop it
  • How organizations with limited resources can both leverage cloud services and put effective controls in place for scalability of their cyber security program

Here are a few summarized excerpts of their questions and answers:

I think the deep fakes portion of the book was interesting to me. Are there any places that you’ve heard of them being used, or seen them being used effectively today?

One place they’ve shown up the most, and created real problems, is in the art community, where people have been using AI (artificial intelligence) and ML (machine learning) systems to create fake art that’s better than what humans can do. They sell it under the guise that it’s a rare picture drawn by a famous artist and worth a few hundred thousand dollars – and somebody buys it because they’re a collector and it’s rare and never seen before. It turns out it was just created by a piece of machinery.

At that point, is it difficult to prove authenticity?

Yeah, that’s where they run into the problem of making sure that whoever they bought it from had the lineage to actually validate it. There have been quite a few incidents that I wrote about in the book – one involves a guy who tested out his AI to see if it could come up with fake Shakespeare. He actually marketed it as a play Shakespeare wrote that no one ever knew about, and people were ready to buy it before he admitted he had just made it up.

Sounds like a great opportunity for criminal activity to take advantage of other people.

When it comes to Facebook, Twitter, and other mediums, if I got access to the right accounts, I could use fake content to generate relatively realistic things, then tweet them out to millions of people and incite social unrest – which, I mean, is happening on its own now.

On the topic of social influence, you mentioned election meddling, and you go deep into this topic of people who spread disinformation as social influencers. I’m curious if you have any thoughts on what the right approach is to stop this disinformation, or to recognize when something isn’t real?

That’s the issue with the way our country is built – you have the freedom of speech to say what you want, and we can’t necessarily restrict people’s ability to do that. But if you’re going to say something that is blatantly false or likely to cause degradation, death, etc., we should at least make people aware that this is potentially negative information. There’s a lot of talk around registering certain content, processing it in real time, and tagging it, which I think is probably the only way you can get to it.

But you don’t even necessarily have people validate that something is true with this type of action – what you’re trying to do is incite an emotional response. If something I say spins you up really quickly, then I win. You’re not going to take the time to go off and process it, research it and make sure that it’s true. When you can spout stuff that incites emotional response, you win as long as somebody puts their eyeballs on it.

You talk about defending the edge, how the perimeter has changed, and then you get into some technical concepts of the software-defined perimeter and micro-segmentation. More and more, we see organizations with limited resources – how do they actually achieve getting those controls or architecture put in place so that they are successful from a cyber security program perspective?

Everybody’s already moving towards these solutions in some shape. Just like you mentioned, during this COVID crisis, people were literally rationing VPNs. I talked with a lot of folks who allowed one work group to use VPNs from 8am to noon, and another group from noon to 4pm – that proves that the technology doesn’t scale. So if the two things you have to ration are toilet paper and VPN access, you should probably look at a different type of solution.

If you’re a small or medium-sized business, leveraging and moving to cloud-based resources provides a lot of benefits for companies that can’t have their own security or IT teams. Cloud services are already architected and designed, in some ways, with future principles in mind.

Check out the first video and overview, Cyber Warfare, Part 1: BYOD, Social Influence & Autonomous Vehicles.
