Stolen axios publish access turned npm install into a RAT dropper

A stolen axios maintainer credential pushed malicious versions `1.14.1` and `0.30.4` to npm, turning routine installs into install-time code execution. For crypto operators, this is less a dependency-hygiene story than a host-compromise problem: treat affected machines as potentially breached, rotate secrets, and work back from the install window.

AI Author: Cube Security Team | Mar 31, 2026

A stolen axios maintainer credential pushed malicious npm releases axios@1.14.1 and 0.30.4, briefly turning ordinary installs into attacker-controlled code execution. For crypto-infrastructure teams, the key issue is not that a popular dependency went bad for a few hours. It is that compromised publish access can outrun normal review and land on developer workstations, CI runners, and other secrets-bearing hosts before anyone can react. If those versions touched a sensitive machine, this is a host-compromise problem with a dependency-shaped entry point.

Stolen axios publish access turned npm install into an install-time RAT dropper

For about three hours, npm install axios could hand an attacker code execution on the machine doing the install. The confirmed malicious releases were axios@1.14.1 and axios@0.30.4, published after an axios maintainer account was hijacked; both pulled in plain-crypto-js@4.2.1, a typosquatted dependency whose only real job was to run malware during installation.

That distinction matters. The attacker did not need to hide malicious logic in application runtime and hope someone later hit the right code path. They turned dependency resolution itself into the execution path. If a developer workstation, CI runner, or other secrets-bearing build host resolved one of those versions during the live window, npm’s normal lifecycle behavior did the rest.

The chain is now fairly well established across multiple technical writeups. The compromised axios package added plain-crypto-js@4.2.1, which mimicked the legitimate crypto-js package closely enough to look unremarkable at a glance, then added a postinstall script: node setup.js. npm runs postinstall automatically after installation. setup.js was obfuscated, fingerprinted the host OS, and contacted sfrclak[.]com:8000/6202033 to fetch or drive a second-stage payload.
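The install-time execution mechanism described above is ordinary npm behavior: any package can declare a postinstall hook in its manifest, and npm runs it automatically. A minimal triage sketch, then, is to enumerate every installed package that declares an install-time lifecycle script. The function name and output shape below are illustrative, not from any published tooling:

```python
import json
from pathlib import Path

# npm runs these lifecycle hooks automatically during installation.
LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_hooks(node_modules: Path) -> list[tuple[str, str, str]]:
    """Return (package, hook, command) for every installed package that
    declares an install-time lifecycle script in its package.json."""
    findings = []
    # Cover both plain packages and scoped packages (@scope/name).
    manifests = list(node_modules.glob("*/package.json")) + \
        list(node_modules.glob("@*/*/package.json"))
    for manifest in manifests:
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        scripts = pkg.get("scripts", {})
        for hook in LIFECYCLE_HOOKS:
            if hook in scripts:
                findings.append(
                    (pkg.get("name", manifest.parent.name), hook, scripts[hook]))
    return findings
```

Running this against a project's node_modules would have surfaced plain-crypto-js declaring `postinstall: node setup.js`, which is exactly the kind of entry worth a second look in a dependency that claims to be a crypto utility.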

The second stage varied by platform. Researchers reported a Mach-O binary on macOS, a PowerShell-based backdoor on Windows, and a Python payload on Linux. All three used the same C2 pattern: HTTP POST requests with base64-encoded JSON bodies and a hardcoded IE8 user agent, an anachronism distinctive enough to double as a network detection signature. The exposure window was roughly 00:21 to 03:25 UTC before npm removed the bad versions and revoked package tokens.
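That reported C2 pattern is specific enough to turn into a rough detection heuristic. The sketch below flags requests that combine all three traits; the exact IE8 user-agent string was not published in full, so matching on the "MSIE 8.0" token is an assumption, and the function name is illustrative:

```python
import base64
import binascii
import json

def looks_like_axios_c2(method: str, user_agent: str, body: bytes) -> bool:
    """Heuristic match for the reported C2 pattern: HTTP POST, a hardcoded
    IE8 user agent, and a base64-encoded JSON request body.
    NOTE: "MSIE 8.0" as the UA token is an assumption, not a published IoC."""
    if method.upper() != "POST" or "MSIE 8.0" not in user_agent:
        return False
    try:
        decoded = base64.b64decode(body, validate=True)
        json.loads(decoded)
    except (binascii.Error, ValueError):
        return False
    return True
```

A heuristic like this belongs in egress inspection or proxy log review, not as a blocking rule on its own: legitimate legacy clients exist, but an IE8 user agent posting base64 JSON from a build host deserves a ticket.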

Two details matter as much as the dropper itself. First, the malicious dependency was a phantom one: it was added to the manifest but not imported by axios anywhere. That is a strong signal of tampering, because the dependency existed to win install-time execution, not to provide functionality. Second, the malware cleaned up after itself. Multiple analyses found that after setup.js ran, it deleted that file and replaced its own package.json with a clean-looking stub that reported version 4.2.0. So a later npm list or casual look inside node_modules could suggest everything was normal even if the host had already beaconed out and fetched a payload.

There are still open questions, and they should stay open until evidence closes them. The exact path used to compromise the maintainer account is not confirmed publicly. Datadog also notes it did not independently analyze axios@0.30.4, though other researchers report the same backdoor behavior there. Downstream victim count is also still unknown. What is confirmed is narrower and more useful: a stolen publisher session bypassed normal release review, trusted publishing metadata did not protect the manual publish, and ordinary installs became initial access.

For crypto operators, the uncomfortable part is not just that axios is widely used. It is that this attack path lands closest to the machines that assemble releases, hold CI secrets, talk to cloud control planes, and sometimes sit one hop away from signing workflows. In that environment, “we removed the package” is cleanup theater. The install event is the incident.

Affected axios installs require credential rotation and host triage

If the bad axios versions touched a machine that held secrets, rotate credentials first and clean dependencies second. axios@1.14.1 and axios@0.30.4 turned npm install into code execution through plain-crypto-js@4.2.1, so the working assumption is not “we pulled a bad dependency” but “a host may have run attacker code during installation.”

That distinction changes the response scope immediately. Downgrading axios or deleting node_modules fixes future installs; it does nothing for a developer laptop, CI runner, or signing-adjacent host that already executed the postinstall loader. Researchers have now published enough artifacts to treat this as a practical IR case: the C2 endpoint sfrclak[.]com:8000/6202033, macOS payload path /Library/Caches/com.apple.act.mond, Windows artifacts including %PROGRAMDATA%\wt.exe, %PROGRAMDATA%\system.bat, and the HKCU\Software\Microsoft\Windows\CurrentVersion\Run\MicrosoftUpdate persistence key, plus the Linux script /tmp/ld.py. Some Windows and Linux second stages appear buggy, which is encouraging in the very narrow sense that a falling piano may miss you. It does not make install-time execution safe.

The first job is to separate direct compromise from exposure. Start with lockfiles, package-manager caches, CI logs, container build logs, and workstation shell history to determine whether those exact axios versions were actually resolved during the roughly three-hour live window. Then narrow further: which hosts completed installation, which had outbound access, and which held useful material at the time - cloud credentials, CI secrets, SSH keys, API tokens, wallet software, seed material, signing scripts, or access to systems that do. A repository that merely allowed ^1.14.0 is not the same as a runner that fetched 1.14.1 and reached out over HTTP a second later.

For machines that did install the malicious versions, assume secrets on that host are burned. Rotate cloud and CI credentials, npm tokens, GitHub tokens, exchange API keys, RPC credentials, SSH material, and any wallet-adjacent secrets that were present or reachable. Review outbound network telemetry for connections to sfrclak[.]com:8000, look for the published file paths and registry persistence, and preserve forensic evidence before wiping where possible. On ephemeral runners, rebuilding from a known-good image is usually faster than pretending you can lovingly scrub a disposable machine back to innocence. On developer endpoints and any system near signing workflows, the bar should be higher: investigate, reimage if needed, and verify what that host could reach, not just what it stored locally.

There is also a straightforward preventative lesson here. Trusted publishing via CI helped the maintainers distinguish normal releases from the attacker’s manual publish, but it did not protect anyone who automatically accepted a fresh release during installation. Operators can shrink that blast radius by pinning versions, delaying uptake of brand-new releases, disabling lifecycle scripts where workflows allow it, and instrumenting build hosts so an unexpected npm install beacon is visible before it becomes someone’s cloud incident. Supply-chain attacks keep winning on speed: one stolen publisher credential, one install step, and suddenly dependency hygiene has become key management by other means.

Why Cube Was Not Affected

Cube was not affected by this incident. The failure mode here was a stolen publisher credential that turned a routine npm install into remote code execution on the installing host. Cube’s relevant control boundary is its non-custodial MPC / threshold-signing design: no full user private key is assembled in one place by the exchange, which reduces exchange-held key concentration. In practice, that distributed signing model, alongside cautious operational security habits, kept this compromised package publish from turning into a direct Cube compromise.

How to Trade Safely After the Axios npm Compromise

This workflow is for teams or traders whose developer laptop, CI runner, or other secrets-bearing host may have installed the compromised axios versions during the exposure window. If you never touched those releases on a sensitive machine, you are probably outside the direct response scope. When returning to Cube Exchange, treat this as a host-compromise recovery problem first and a trading problem second:

  • Return to Cube Exchange only from a clean device or reimaged runner, not from a host that may have installed the compromised package.
  • Rotate credentials and secrets that exposed hosts could access before funding or trading again.
  • Re-verify asset, network, and destination details for any deposit or withdrawal rather than trusting cached clipboard or saved assumptions.
  • Use a small test transfer first when moving funds back into active trading.
  • Resume trading with cautious order entry and position sizing until the incident boundary is understood.

Recent articles

Read the latest from Cube News

The newest briefings, updates, and market notes from the news desk.