Trader Manipulates AI Using Morse Code for Crypto Token Theft

A bad actor successfully siphoned billions in crypto tokens from a verified wallet by leveraging a post with morse code, without ever accessing the private keys. The incident involved tagging @grok and using dots and dashes in an X post.

On May 4, Bankrbot announced it had transferred 3 billion DRB to an unauthorized account (0xe8e47…a686b) from a wallet linked to Grok, an AI by X. This Base transaction revealed the path of this unauthorized transfer.

CryptoSlate’s examination of related X posts suggests Morse-code obfuscation led Grok to interpret and respond with a command that triggered Bankrbot to execute the token transfer. The problem lay in the transition from language processing to execution authority, turning AI output into a payment mechanism when treated as valid by another system.

This incident highlights the need for crypto investors to view AI-agent interactions as wallet-control risks. Public commands can become spend authorizations if one system regards model outputs as instructions and another has token-moving permissions.

Wallet permissions, parsers, social triggers, and execution policies now form layers of potential attack vectors.

Related Reading: AI agents autonomously spending in crypto raises significant questions about software payment mechanisms (Mar 28, 2026).

CryptoSlate’s analysis estimated the DRB transfer value at $155,000 to $200,000. Most funds were reportedly returned, with some retained as an informal bug bounty, highlighting reliance on post-transaction coordination over pre-transaction limits.

Bankr developer 0xDeployer stated that 80% of the funds were recovered and the remaining discussed with the DRB community. Bankr automatically assigns X wallets to interacting accounts like Grok, controlled by the associated X account holder, not Bankr or xAI staff.

The path of this breach involved four steps: identifying a Bankr Club Membership NFT in a Grok-associated wallet; posting Morse code on X with additional formatting; Grok translating this into a clean command with @bankrbot; and Bankrbot executing the transfer. This incident emphasizes the need for explicit policies separating chat instructions from transaction authority.

Prompt injection often viewed as a model-behavior issue now presents concrete financial risks when system outputs are excessively empowered. Malicious instructions can infiltrate models via third-party content, necessitating tool access controls and confirmations around consequential actions.

In crypto, such failures lead to asset-control problems due to transaction finality. Once executed, recovery depends on counterparties or legal intervention, marking this as a control failure rather than just model misbehavior.

Bankr’s access controls include read-only modes and allowlists but require stronger external policy checks. Trading agents should separate read and write functions with user confirmations for writes. Recipient allowlists must be enforced externally, while spend limits should reset per session or action to mitigate excessive spending authority.

Local key isolation is critical for users running assistants with wallet access. Bankr previously blocked Grok replies in earlier agent versions but failed to maintain this in the latest update, allowing a public Grok reply to become an executable instruction.

Bankr has since reinforced blocks on Grok’s account and emphasized controls like IP whitelisting and permissioned API keys for agents. Agent safety hinges more on permission boundaries than market dynamics, as shown by prior coverage of agent-economy flows and autonomous payments in crypto.

The key lesson is to validate model outputs through a separate policy layer before execution, preventing prompt injection from exploiting transaction authorization processes. This incident underscores treating AI output as untrusted until validated for intent and authority.

Platform Hexoria Forex officieel vertrouwd platform voor AI-handel