CanSecWest 2025 _newtype

Presentations

Maxwell Dulin Robert Yuen

Blockchain's Biggest Heists - Bridging Gone Wrong

$624 million lost in the Ronin hack. $611 million in the Poly Network exploit. These headlines share a common thread: security failures in the design and implementation of blockchain bridges—critical infrastructure that moves billions in value across networks.

Before you turn away from this talk because it’s about “crypto,” know this: there’s no hype here. This is a technical deep dive into how bridges work, why they break, and what their failures reveal about security engineering in highly adversarial environments. We’ll unpack real-world vulnerabilities, examine architectural trade-offs, and explore defense-in-depth strategies for building more resilient systems.

Beyond the headlines and market noise lies one of the most complex and high-stakes areas in modern security engineering—full of unsolved problems and opportunities for researchers to shape what comes next.
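
Many bridges follow a lock-and-mint pattern: assets are locked on the source chain, and a withdrawal on the destination chain is released only after a quorum of validators signs off. Below is a minimal, hypothetical sketch of that approval step (HMACs stand in for on-chain signatures; the validator set, threshold, and names are illustrative assumptions rather than any specific bridge's code). It shows why compromising enough validator keys, as happened in the Ronin incident with five of nine keys, defeats the check without touching any contract logic.

    # Hypothetical sketch of a bridge's withdrawal-approval step. HMACs stand
    # in for on-chain signatures so the example runs with only the standard
    # library; names and parameters are illustrative assumptions.
    import hmac
    import hashlib

    VALIDATOR_KEYS = {f"validator-{i}": f"secret-{i}".encode() for i in range(9)}
    THRESHOLD = 5  # withdrawals need 5-of-9 validator approvals

    def sign(validator_id: str, message: bytes) -> bytes:
        """A validator's 'signature' over the withdrawal message."""
        return hmac.new(VALIDATOR_KEYS[validator_id], message, hashlib.sha256).digest()

    def approve_withdrawal(message: bytes, signatures: dict) -> bool:
        """Release funds only if a threshold of distinct validators signed."""
        valid = {
            vid for vid, sig in signatures.items()
            if vid in VALIDATOR_KEYS and hmac.compare_digest(sig, sign(vid, message))
        }
        return len(valid) >= THRESHOLD

    withdrawal = b"withdraw funds to 0xattacker"
    # If an attacker holds 5 validator keys, the threshold check still passes:
    stolen = {vid: sign(vid, withdrawal) for vid in list(VALIDATOR_KEYS)[:5]}
    print(approve_withdrawal(withdrawal, stolen))  # True -> funds released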

Omar Maarouf Robert Yuen

Role Reversal: Exploiting AI Moderation Rules as Attack Vectors

The rapid deployment of frontier large language model (LLM) agents across applications, in sectors that McKinsey projects could add $4.4 trillion to the global economy, has mandated the implementation of sophisticated safety protocols and content moderation rules. However, documented attack success rates (ASR) reaching as high as 0.99 against models like ChatGPT and GPT-4 using universal adversarial triggers (Shen et al., 2023) underscore a critical vulnerability: the safety mechanisms themselves. While significant effort is invested in patching vulnerabilities, this presentation argues that the rules, filters, and patched protocols often become primary targets, creating a persistent and evolving threat landscape. This risk is amplified by a lowered barrier to entry for adversarial actors and the emergence of new attack vectors inherent to LLM reasoning capabilities. This presentation focuses on showcasing documented instances where security protocols and moderation rules, specifically designed to counter known LLM vulnerabilities, are paradoxically turned into attack vectors.
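
For context on the ASR figures above, here is a minimal sketch of how such a rate is commonly computed in jailbreak evaluations: run each adversarial prompt through the model and count the fraction of responses that are not refusals. The refusal heuristic and the model stub below are illustrative assumptions, not the methodology of the cited study.

    # Illustrative sketch of an attack-success-rate (ASR) computation.
    REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

    def is_refusal(response: str) -> bool:
        # Crude keyword heuristic; real evaluations often use a judge model.
        text = response.lower()
        return any(marker in text for marker in REFUSAL_MARKERS)

    def attack_success_rate(prompts, query_model) -> float:
        # ASR = (# prompts eliciting a non-refusal) / (# prompts)
        successes = sum(not is_refusal(query_model(p)) for p in prompts)
        return successes / len(prompts)

    def query_model(prompt: str) -> str:
        # Stand-in for a real model API call.
        return "I'm sorry, I can't help with that."

    print(attack_success_rate(["harmful request <adversarial suffix>"], query_model))  # 0.0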

Abhishek Singh, Kalpesh Mantri Robert Yuen

Harnessing Language Models for Detection of Evasive Malicious Email Attachments

Our presentation will show how LLMs can effectively detect evasive malicious attachments without depending on analysis of the malicious payload, which typically occurs in the later stages of attachment analysis. Our approach is exemplified by our success in defending against real-world threats in actual production traffic, including HTML smuggling campaigns, obfuscated SVGs, phishing links behind CDNs, CAPTCHAs, downloaders, and redirectors.
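
As a rough illustration of what payload-agnostic triage can look like, the sketch below extracts structural signals from an HTML attachment (script blocks, large base64 blobs, runtime blob assembly) and leaves the verdict to a language model. The feature set, stub classifier, and names are assumptions made for illustration, not the presenters' production pipeline.

    import re

    def structural_signals(html: str) -> dict:
        # Payload-agnostic features: nothing here decodes or detonates the payload.
        return {
            "script_blocks": len(re.findall(r"<script\b", html, re.I)),
            "large_base64_blobs": len(re.findall(r"[A-Za-z0-9+/]{200,}={0,2}", html)),
            "runtime_blob_assembly": bool(re.search(r"new\s+Blob\s*\(|atob\s*\(", html)),
            "auto_download": bool(re.search(r"\.click\s*\(\)|\bdownload\s*=", html)),
        }

    def classify_with_llm(signals: dict) -> str:
        # Stand-in for a real model call. A production system would send the
        # signals to its LLM with a prompt along the lines of: "These features
        # were extracted from an attachment without executing it: {signals}.
        # Is this consistent with HTML smuggling? Answer malicious or benign."
        return "malicious" if signals["runtime_blob_assembly"] else "benign"

    sample = "<html><script>var b = new Blob([atob(payload)]); a.click();</script></html>"
    print(classify_with_llm(structural_signals(sample)))  # malicious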

Hetian Shi Robert Yuen

Cross-Medium Injection: Exploiting Laser Signals to Manipulate Voice-Controlled IoT Devices

With the increasing adoption of voice-controlled devices in various smart technologies, their interactive functionality has made them a key feature in modern consumer electronics. However, the security of these devices has become a growing concern as attack methods evolve beyond traditional network-based threats to more sophisticated physical-layer attacks, such as DolphinAttack and SurfingAttack, which exploit physical mediums to compromise the devices. This work introduces Laser Commands for Microphone Arrays (LCMA), a novel cross-medium attack that targets multi-microphone voice-controlled (VC) systems. LCMA utilizes Pulse Width Modulation (PWM) to inject light signals into multiple microphones, exploiting the underlying vulnerabilities in microphone structures that are designed for sound reception. These microphones can be triggered by light signals, producing the same effect as sound, which makes the attack harder to defend against. The cross-medium nature of the attack—where light is used instead of sound—further complicates detection, as light is silent, difficult to perceive, and can penetrate transparent media. This attack is scalable, cost-effective, and can be deployed remotely, posing significant risks to modern voice-controlled systems. The presentation will demonstrate LCMA’s capabilities and emphasize the urgent need for advanced countermeasures to protect against emerging cross-medium threats.
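
To make the PWM idea concrete, here is a purely conceptual, signal-level sketch: the voice waveform sets the duty cycle of a fast on/off laser carrier, and a low-pass response such as a microphone's recovers an audio-band signal from the light. The carrier frequency, sample rate, and stand-in tone are assumptions for illustration, not the parameters used in the LCMA research.

    # Conceptual PWM sketch: duty cycle of a fast on/off carrier tracks the
    # target waveform; low-pass filtering recovers an audio-band signal.
    import numpy as np

    FS = 1_000_000        # simulation sample rate (assumed)
    CARRIER_HZ = 50_000   # PWM carrier frequency (assumed)
    TONE_HZ = 1_000       # stand-in for a voice-command waveform

    t = np.arange(0, 0.01, 1 / FS)
    voice = 0.5 * (1 + np.sin(2 * np.pi * TONE_HZ * t))   # normalised to 0..1
    ramp = (t * CARRIER_HZ) % 1.0                          # sawtooth carrier in 0..1
    laser_on = (ramp < voice).astype(float)                # PWM: duty cycle follows voice

    # Crude moving average standing in for the microphone's limited bandwidth:
    window = 2 * int(FS / CARRIER_HZ)
    recovered = np.convolve(laser_on, np.ones(window) / window, mode="same")
    print(f"correlation with injected tone: {np.corrcoef(voice, recovered)[0, 1]:.2f}")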

Qidan He Robert Yuen

BadResolve: Bypassing Android's Intent Checks to Resurrect LaunchAnyWhere Privilege Escalations

The LaunchAnywhere vulnerability in Android has been a significant security concern, enabling unprivileged applications to escalate privileges and invoke arbitrary protected or privileged activities. Despite extensive mitigation efforts by Google, such as introducing destination component checks via the resolveActivity API, these defenses have proven insufficient. In this talk, we introduce BadResolve, a novel exploitation technique that bypasses these checks using TOCTOU (Time of Check to Time of Use) race conditions. By controlling previously unforeseen parameters, BadResolve allows attackers to exploit Android's Intent resolution process, reintroducing LaunchAnywhere vulnerabilities.
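
The core of the bypass is the classic check-then-use gap. Below is a language-agnostic sketch of that pattern, in which plain Python objects stand in for Intents and the resolveActivity check, so it is an analogy for the bug class rather than the Android exploit itself: the destination is validated at one point in time but remains attacker-mutable before it is actually used.

    # TOCTOU analogy: the destination is checked, then mutated by another
    # thread before it is used. Names and timings are illustrative only.
    import threading
    import time

    ALLOWED = {"SafeActivity"}

    class Request:
        def __init__(self, target: str):
            self.target = target  # attacker-controlled, still mutable after the check

    def launch(request: Request) -> str:
        if request.target not in ALLOWED:          # time of check
            return "rejected"
        time.sleep(0.05)                           # window: parsing, IPC, etc.
        return f"launched {request.target}"        # time of use

    req = Request("SafeActivity")
    flip = threading.Thread(
        target=lambda: (time.sleep(0.01), setattr(req, "target", "PrivilegedActivity"))
    )
    flip.start()
    print(launch(req))   # "launched PrivilegedActivity" despite the check
    flip.join()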

We demonstrate how BadResolve works in practice, providing instructions for exploiting race conditions with 100% reliability, allowing unprivileged apps to invoke privileged activities. Our research also uncovers new CVEs that affect all Android versions, highlighting ongoing risks such as silent app installations, unauthorized phone calls, and modifications to critical system settings.

Additionally, we present a novel approach combining Large Language Models (LLMs) with traditional static analysis techniques to efficiently identify this class of vulnerability in Android and in OEMs' open-source and closed-source codebases.

Kevin Chen Robert Yuen

Biometrics System Hacking

Biometric systems, such as facial recognition and voiceprint identification, are widely used for personal identification. In recent years, many manufacturers have integrated facial recognition technology into their products. But how secure are these systems?

In this talk, we will demonstrate simple yet highly effective attack methods to bypass facial recognition systems. Additionally, we will explore techniques for spoofing voiceprint-based authentication, exposing how smart speaker security mechanisms can be manipulated.

Saikat Asaduzzaman Robert Yuen

SOAR Implementation Pain Points and How to Avoid Them

As cybersecurity threats continue to escalate in complexity and frequency, organizations increasingly rely on automation to enhance their defenses. Security Orchestration, Automation, and Response (SOAR) platforms have emerged as powerful tools for streamlining operations and reducing the burden of repetitive tasks on security teams. However, implementing SOAR is not without its challenges. This presentation will explore the common challenges organizations encounter when deploying SOAR and provide actionable strategies to overcome them.

Dave Falkenstein Dragos Ruiu

Deepfake Deception: Weaponizing AI-Generated Voice Clones in Social Engineering Attacks

As deepfake technology rapidly evolves, its application in social engineering has reached a new level of sophistication. This talk will explore a real-world red team engagement where AI-driven deepfake voice cloning was leveraged to test an organization’s security controls. Through extensive research, we examined multiple deepfake methods, from video-based impersonation for video calls to voice cloning for phishing scenarios. Our findings revealed that audio deepfakes were the most effective and hardest to detect by human targets.

Diyar Saadi Robert Yuen

Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections

The presentation "Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections" explores the intricate workings of the Arab Ransom Locker malware, focusing on its impact on mobile devices. This session delves into a comprehensive analysis of the ransomware's attack vector, encryption mechanisms, and behavioral patterns. It will also provide a step-by-step guide to unlocking infected devices, including proven recovery techniques, decryption tools, and preventive strategies. Targeted at cybersecurity professionals and mobile device users, the presentation aims to equip attendees with actionable insights to understand, mitigate, and neutralize the threat posed by this malicious ransomware.

Marta Janus Robert Yuen

AI Security Landscape: Tales and Techniques from the Frontlines

The once theoretical AI bogeyman has arrived—and it brought friends. Over the past 12 months, adversaries have shifted from exploratory probing to weaponized exploitation across the entire AI stack, requiring a fundamental reassessment of defense postures. This presentation dissects the evolution of AI-specific TTPs, including advancements in model poisoning, LLM jailbreaking techniques, and the abuse of vulnerabilities in ML tooling and infrastructure.
