Deepfakes And Even The Tools Used To Detect Them Can Be Dangerous

The Critical Intersection Newsletter

You have a lot going on, so join thousands of other leaders and let me do the work of providing you with curated cybersecurity content. It would be my honor to do so.

NOTES: To ensure you receive this newsletter every week, please add my "from" address to your contact list. If you would like to unsubscribe, scroll to the bottom and select "unsubscribe". Thank you.

In this week's edition:

  • Cyber Stats - Ransomware

  • Featured Article - Deepfakes And Even The Tools Used To Detect Them Can Be Dangerous

  • Free Cybersecurity Resources - eBooks, tools, apps & services

  • Trending Story - Cutting Through the Noise: What is Zero Trust Security?

  • Cybersecurity News Highlights

  • Cyber Scam of the Week - Smishing for Bank Information

  • Social Posts of the Week

Cyber Stats

Here are some interesting ransomware statistics:

  • Ransomware attacks against healthcare organizations reached one of their highest levels in years in April 2023, with 31 organizations falling victim, surpassing the previous record high of 29 attacks in December 2021.

  • At least one healthcare provider had to close temporarily after being hit by a ransomware attack in April 2023.

  • The overall number of ransomware attacks was down in April 2023 compared to March 2023, with 339 attacks in April versus 437 in March.

  • In 2022, ransomware accounted for around 20% of all cybercrimes. Some 93% of observed ransomware was Windows-based executables, and the most common entry point for ransomware was phishing.

  • The volume of ransomware attacks dropped by 23% in 2022 compared to the previous year.

  • About 20% of the costs of ransomware attacks are attributed to brand reputation damage.

  • Between May 2021 and June 2022, there were an estimated 3,640 successful ransomware attacks globally.

  • A survey of senior non-technical employees found that just 42% of companies would report a ransomware attack to both law enforcement and a cybersecurity incident response service.

  • Of the companies previously attacked by ransomware, 88% said they would pay the ransom again if attacked.

  • Phishing is the primary delivery method for ransomware, with one report finding that 75% of 1,400 organizations surveyed had suffered a ransomware attack.

Featured Original Article

Deepfakes And Even The Tools Used To Detect Them Can Be Dangerous

Image Source and Credit: FreeImages

As technology evolves rapidly in today's digital world, so have concerns around the use of deepfakes. These AI-generated videos, images, and text can convincingly misrepresent people and the events taking place around them. However worrying this may seem, many individuals see legitimate potential in the technology, yet significant risks of misuse and exploitation remain.

Throughout this article, we'll explore the various hazards deepfakes present: misinformation campaigns aimed at spreading lies or propaganda; identity theft, where an individual's likeness is used without permission through manipulated footage; and non-consensual pornography created to humiliate victims. All of these feed into the erosion of public trust, which is where the true danger lies.

With new machine learning techniques such as Generative Adversarial Networks (GANs), deepfakes have become more sophisticated than ever. While they have legitimate uses in areas like art and education, the dangers to national security remain significant if the technology is misused.

Cybersecurity Threats Posed by Deepfakes

Business Identity Compromise (BIC)

Deepfakes have created a fresh cybercrime challenge for businesses and organizations. Cybercriminals manipulate audio, video, or textual content to impersonate higher-ups such as CEOs and colleagues in order to obtain classified information and money without authorization, an attack dubbed Business Identity Compromise (BIC). These attacks not only carry financial repercussions but also harm the company's image.

Attackers can use deepfake technology to create realistic phone calls or messages, duping employees into revealing sensitive information or transferring funds. In some cases, these scams have resulted in significant financial losses for targeted companies.

Attacks on Medical Infrastructure

Deepfakes can also target critical medical infrastructure, such as hospitals and radiology departments. By manipulating medical images, such as MRI or CT scans, attackers can alter the diagnosis and treatment of patients. This malicious tampering of medical data can result in severe consequences, from insurance fraud to life-threatening misdiagnoses.

Escalating Disinformation Campaigns

Deepfakes can also be used to spread disinformation and undermine public trust. By creating convincing yet false text, audio, or video content, bad actors can sway public opinion, manipulate political discourse, and distort facts. The widespread use of deepfakes in disinformation campaigns poses a significant challenge to democracies, national security, and the integrity of information online.

Evidentiary Impact in Litigation

Deepfakes also have the potential to compromise the integrity of evidence in legal proceedings. As deepfake technology becomes more advanced, it may become increasingly difficult for courts to determine the authenticity of digital evidence. This challenge could undermine the truth-seeking function of the legal system and erode public trust in the judicial process.

AI Detectors and Tools to Combat Deepfakes

While deepfakes present significant dangers, various AI detectors and tools have been developed to identify and combat manipulated content. However, it is essential to acknowledge that AI detectors can be inaccurate and may occasionally misidentify legitimate content as deepfakes.

Pioneering AI Detectors

Several AI detector tools have been developed to identify deepfakes with varying degrees of accuracy. For example, Pindrop, a company specializing in AI security, has analyzed over five billion voice interactions and identified millions of fraud attempts using AI-generated voices. Microsoft's Video Authenticator is another tool designed to provide a confidence level indicating if media content has been artificially manipulated.

Detection Through Metadata Analysis

By analyzing metadata, such as timestamps, file formats, and compression rates, it is possible to identify inconsistencies in digital evidence that may suggest manipulation. This method can be particularly useful when combined with other detection tools and techniques.
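The metadata checks described above can be illustrated with a minimal sketch. This toy example only compares a file's claimed extension against its leading "magic bytes"; the magic-byte table and the `check_consistency` helper are illustrative assumptions, and real forensic tools inspect far more fields (EXIF data, compression parameters, container structure).

```python
from typing import Optional

# File signatures (magic bytes) for a few common image formats.
MAGIC_BYTES = {
    b"\xff\xd8\xff": "jpg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def detect_format(data: bytes) -> Optional[str]:
    """Return the format implied by the file's leading magic bytes."""
    for magic, fmt in MAGIC_BYTES.items():
        if data.startswith(magic):
            return fmt
    return None

def check_consistency(filename: str, data: bytes) -> list:
    """Flag simple metadata inconsistencies that may suggest manipulation."""
    warnings = []
    claimed = filename.rsplit(".", 1)[-1].lower().replace("jpeg", "jpg")
    actual = detect_format(data)
    if actual is None:
        warnings.append("unrecognized file signature")
    elif claimed != actual:
        warnings.append(f"extension says {claimed!r} but signature says {actual!r}")
    return warnings
```

A mismatch between what a file claims to be and what its bytes say it is does not prove a deepfake, but, as the article notes, such inconsistencies are most useful when combined with other detection tools.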

Reverse Engineering

Researchers from Facebook and Michigan State University have developed a reverse engineering approach for detecting deepfakes. This method involves analyzing the unique characteristics of deepfake images and comparing them to known genuine images to identify manipulation.

Deepfake Detection Competitions and Research

To encourage innovation in deepfake detection, various competitions and research initiatives have been established. The National Defense Authorization Act (NDAA) for Fiscal Year 2020 established a "Deepfakes Prize Competition" to promote deepfake detection research, development, and commercialization. Additionally, Facebook, Microsoft, and other tech giants have invested in deepfake detection research and tools.

Protecting Yourself and Your Business from Deepfakes

While no foolproof method exists to completely eliminate the risk of deepfakes, there are several steps individuals and organizations can take to mitigate the dangers:

  • Educate personnel on the scope and risks of deepfakes and how to identify them.

  • Be vigilant when consuming content online and do not assume the authenticity of digital media based on appearance alone.

  • Never disclose personal or sensitive information without verifying the identity of the recipient through reliable, independent sources.

  • Implement multi-factor authentication and encryption on all devices, accounts, and systems.

  • Tighten payment permission processes and require multi-person authorization for larger transactions.

  • Invest in deepfake detection tools to screen communications for potential manipulation.

  • Ensure that insurance policies cover damages resulting from deepfake fraud.

  • Be cautious when sharing personal images and videos on social media and accepting new contacts.
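The payment-permission tip above amounts to a simple policy rule: large transactions need more than one approver. Here is a minimal sketch of such a check; the threshold value and function name are hypothetical examples, not part of any specific product.

```python
def payment_approved(amount: float, approvers: set,
                     threshold: float = 10_000.0) -> bool:
    """Two-person rule: transactions at or above the threshold
    require at least two distinct approvers; smaller ones need one."""
    needed = 1 if amount < threshold else 2
    return len(approvers) >= needed
```

Enforcing this kind of rule in payment workflows means a single deepfaked "CEO" phone call is no longer enough to move a large sum of money.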

Conclusion

Advances in artificial intelligence have given rise to a new kind of danger, one that could impact every aspect of society. Deepfakes are here, and they pose real threats to individuals, businesses, and society as a whole. Given these concerns, organizations across all sectors must not take chances when protecting themselves against fraudsters peddling false narratives online.

Organizations should therefore cultivate knowledge around safeguarding themselves from such manipulation, keeping abreast of the latest developments in AI technology (for early warnings) as well as cutting-edge detection tools from industry-leading tech companies like Facebook, Microsoft, and Google, to name just a few. Strict digital media policies within an organization will also help ensure that the content it publishes is genuine and trustworthy.

Finally, it is important to keep employees informed about the risks associated with deepfakes and how they can identify manipulated media. By taking these steps, companies can better protect themselves from the threat of deepfakes.

Free Resources

Trending Story

Other Bytes

Cyber Scam of the Week

Smishing for Bank Information

Recently, a ring of cybercriminals used a smishing attack to steal credit card information. A smishing attack is a type of phishing attack delivered through SMS messaging. The cybercriminals used the stolen bank information to purchase cryptocurrency, which they then exchanged for cash. Read on to learn more about this scam and how you can protect yourself from cybercriminals.

In this scam, cybercriminals send SMS messages pretending to be from a bank. The message claims that a security issue needs to be resolved and prompts you to click a link to your bank's login page. The page looks legitimate, but it's actually a spoofed page that records your keystrokes. If you enter your bank login information, cybercriminals will use it to hack into your bank account.

Follow the tips below to stay safe from similar scams:

  • Think before you open a link. Cyberattacks are designed to catch you off guard and trick you into clicking impulsively.

  • Never enter your bank login information from a link in a text message. Instead, navigate to your bank’s official website to log in.

  • Remember that this type of attack isn’t exclusive to banks. Cybercriminals could use this technique to impersonate any organization. 
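The tips above boil down to one habit: trust the destination domain, not the link text. Here is a minimal sketch of that check, where `mybank.com` stands in for your bank's real domain (a hypothetical example); note how a look-alike URL that merely *starts* with the bank's name fails the test.

```python
from urllib.parse import urlparse

# Hypothetical official domain used for illustration only.
OFFICIAL_DOMAINS = {"mybank.com"}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's host is the official domain
    or a subdomain of it (e.g. www.mybank.com)."""
    host = urlparse(url).hostname or ""
    return host in OFFICIAL_DOMAINS or any(
        host.endswith("." + d) for d in OFFICIAL_DOMAINS
    )
```

A spoofed link like `https://mybank.com.secure-login.example/` contains the bank's name but its actual host is `secure-login.example`, which is exactly the trick smishing messages rely on.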

This Cyber Scam is provided by our sponsors: Netsync & KnowBe4

Cybersecurity Social

Just a couple of interesting social posts