
CanSecWest 2025 _newtype
Presentations
Harnessing Language Models for Detection of Evasive Malicious Email Attachments
Our presentation will show how LLMs can effectively detect evasive malicious attachments without depending on analysis of the malicious payload, which typically occurs in the later stages of attachment analysis. Our approach is exemplified by our success in defending against real-world threats in actual production traffic, including HTML smuggling campaigns, obfuscated SVG, phishing links behind CDNs, CAPTCHAs, downloaders, and redirectors.
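As a rough illustration of payload-independent triage (not the presenters' production pipeline), the Kotlin sketch below submits raw attachment markup to a hypothetical chat-completion endpoint and asks the model to flag smuggling or phishing indicators. The endpoint URL, model name, and prompt wording are all placeholder assumptions.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Naive JSON string escaping, sufficient for this sketch only;
// a real pipeline would use a proper JSON library.
fun jsonEscape(s: String): String =
    "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\""

// Ask an LLM to judge an attachment from its raw markup alone, without
// detonating or unpacking any payload. Endpoint and model name are placeholders.
fun classifyAttachment(attachmentMarkup: String, apiKey: String): String {
    val prompt = """
        You are an email-security analyst. Without executing anything, decide whether
        this attachment shows signs of HTML smuggling, obfuscated SVG, CAPTCHA-gated
        phishing, or downloader/redirector behaviour. Answer MALICIOUS or BENIGN with
        a one-line reason.
    """.trimIndent() + "\n\nAttachment content:\n" + attachmentMarkup

    val body = """{"model":"example-model","messages":[{"role":"user","content":${jsonEscape(prompt)}}]}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://llm.example.invalid/v1/chat/completions"))
        .header("Authorization", "Bearer $apiKey")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    // The raw JSON response is returned as-is; callers would parse the verdict out of it.
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```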
BadResolve: Bypassing Android's Intent Checks to Resurrect LaunchAnyWhere Privilege Escalations
The LaunchAnyWhere vulnerability in Android has been a significant security concern, enabling unprivileged applications to escalate privileges and invoke arbitrary protected or privileged activities. Despite extensive mitigation efforts by Google, such as introducing destination component checks via the resolveActivity API, these defenses have proven insufficient. In this talk, we introduce BadResolve, a novel exploitation technique that bypasses these checks using TOCTOU (time-of-check to time-of-use) race conditions. By controlling previously unforeseen parameters, BadResolve allows attackers to exploit Android's Intent resolution process, reintroducing LaunchAnyWhere vulnerabilities.
We demonstrate how BadResolve works in practice, providing instructions for exploiting race conditions with 100% reliability, allowing unprivileged apps to invoke privileged activities. Our research also uncovers new CVEs that affect all Android versions, highlighting ongoing risks such as silent app installations, unauthorized phone calls, and modifications to critical system settings.
Additionally, we present a novel approach combining Large Language Models (LLMs) with traditional static analysis techniques to efficiently identify this class of vulnerability in Android and OEMs' open-source and closed-source codebases.
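For readers unfamiliar with this bug class, the Kotlin sketch below shows the generic check-then-use gap around Intent resolution that a TOCTOU race targets. It is a deliberately simplified, hypothetical forwarding component, not BadResolve itself nor any actual Android or OEM code path; the class name, extra key, and allowlist are invented for illustration.

```kotlin
import android.app.Activity
import android.content.ComponentName
import android.content.Intent
import android.os.Bundle

// Hypothetical privileged forwarder that launches a caller-supplied Intent
// only after validating its destination. All names here are illustrative.
class ForwardingActivity : Activity() {

    private val allowedTargets = setOf(
        ComponentName("com.example.trusted", "com.example.trusted.MainActivity")
    )

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Intent supplied by an unprivileged caller.
        val forwarded = intent.getParcelableExtra<Intent>("forwarded_intent")
            ?: run { finish(); return }

        // Time of check: resolve the destination and validate it against an allowlist.
        val resolved = forwarded.resolveActivity(packageManager)
        if (resolved == null || resolved !in allowedTargets) {
            finish()
            return
        }

        // ...any work done here widens the race window...

        // Time of use: if the caller can change what `forwarded` resolves to
        // between the check above and this call, the validation no longer holds
        // and a protected activity may be launched with this component's
        // privileges. This is the general shape of a TOCTOU gap, not the
        // specific primitive presented in the talk.
        startActivity(forwarded)
        finish()
    }
}
```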
Biometrics System Hacking
Biometric systems, such as facial recognition and voiceprint identification, are widely used for personal identification. In recent years, many manufacturers have integrated facial recognition technology into their products. But how secure are these systems?
In this talk, we will demonstrate simple yet highly effective attack methods to bypass facial recognition systems. Additionally, we will explore techniques for spoofing voiceprint-based authentication, exposing how smart speaker security mechanisms can be manipulated.
Threat Modeling AI Systems – Understanding the Risks
AI is everywhere. From help bots to logistics systems to your car, it seems like every software company wants every new feature they release to include AI. But how do we keep it secure? In this session, we will discuss the threat landscape for AI systems.
SOAR Implementation Pain Points and How to Avoid Them
As cybersecurity threats continue to escalate in complexity and frequency, organizations increasingly rely on automation to enhance their defenses. Security Orchestration, Automation, and Response (SOAR) platforms have emerged as powerful tools for streamlining operations and reducing the burden of repetitive tasks on security teams. However, implementing SOAR is not without its challenges. This presentation will explore the common challenges organizations encounter when deploying SOAR and provide actionable strategies to overcome them.
Deepfake Deception: Weaponizing AI-Generated Voice Clones in Social Engineering Attacks
As deepfake technology rapidly evolves, its application in social engineering has reached a new level of sophistication. This talk will explore a real-world red team engagement where AI-driven deepfake voice cloning was leveraged to test an organization's security controls. Through extensive research, we examined multiple deepfake methods, from video-based impersonation for video calls to voice cloning for phishing scenarios. Our findings revealed that audio deepfakes were the most effective and the hardest for human targets to detect.
Counter-Incident Response: Anticipating Attacker Moves
Traditional incident response focuses on detecting, containing, and remediating threats; counter-incident response adds the layer of preparing for adversarial interference in those very processes. In this talk, we will look at various scenarios we encountered in our incident response cases. Drawing on the lessons learned from these cases, we have developed strategies and processes that make it unlikely that an attacker who is still in the network can manipulate our incident response, or at least ensure that we are prepared for such manipulation and can counter it with compensating measures.
Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections
The presentation "Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections" explores the intricate workings of the Arab Ransom Locker malware, focusing on its impact on mobile devices. This session delves into a comprehensive analysis of the ransomware's attack vector, encryption mechanisms, and behavioral patterns. It will also provide a step-by-step guide to unlocking infected devices, including proven recovery techniques, decryption tools, and preventive strategies. Targeted at cybersecurity professionals and mobile device users, the presentation aims to equip attendees with actionable insights to understand, mitigate, and neutralize the threat posed by this malicious ransomware.
AI Security Landscape: Tales and Techniques from the Frontlines
The once theoretical AI bogeyman has arrived, and it brought friends. Over the past 12 months, adversaries have shifted from exploratory probing to weaponized exploitation across the entire AI stack, requiring a fundamental reassessment of defense postures. This presentation dissects the evolution of AI-specific TTPs, including advancements in model poisoning, LLM jailbreaking techniques, and the abuse of vulnerabilities in ML tooling and infrastructure.