In a recent blog entry, Ethereum co-founder Vitalik Buterin expounded on his personal artificial intelligence setup, which he describes as both “private” and “secure.” He revealed that the AI system runs exclusively on local hardware, with custom tooling built around the large language model (LLM) that prevents it from taking actions, such as sending messages or initiating cryptocurrency transactions, without human approval.
Buterin emphasized a two-factor approval system in which both a human and the LLM take part. His post, published on Wednesday, builds on his earlier advocacy for privacy-focused AI: in February, he presented an Ethereum-AI roadmap covering private AI applications, agent markets, and governance frameworks.
In this latest piece, Buterin gives a detailed account of how he has applied these principles himself, running the open-source Qwen3.5:35B model on a local llama-server. After experimenting with various configurations, he settled on a laptop equipped with an Nvidia 5090 GPU, which processes roughly 90 tokens per second, a speed he finds sufficient.
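A local setup along these lines can be reproduced with llama.cpp's `llama-server`. The sketch below is illustrative only: the model filename, quantization, context size, and port are assumptions, not Buterin's actual configuration.

```shell
# Sketch: serve a local Qwen GGUF model with llama.cpp's llama-server.
# The model path and flags below are placeholders; download a quantized
# GGUF build of the model first. --n-gpu-layers 99 offloads all layers
# to the GPU, and binding to 127.0.0.1 keeps the OpenAI-compatible HTTP
# API reachable from this machine only.
llama-server \
  -m ./models/qwen-35b-q4_k_m.gguf \
  --n-gpu-layers 99 \
  --ctx-size 8192 \
  --host 127.0.0.1 \
  --port 8080
```

Local tools can then talk to `http://127.0.0.1:8080` exactly as they would to a hosted API, without any prompt or response leaving the machine.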
To reduce dependency on external search engines, which he regards as a privacy risk, Buterin keeps a comprehensive offline copy of Wikipedia articles and technical documentation directly on his device.
A particularly significant detail is how the AI connects to his Ethereum wallet and messaging accounts. Buterin developed and open-sourced a messaging daemon that lets his AI agent read Signal messages and emails but blocks outgoing communications unless a human user explicitly approves them.
For developers building AI-integrated Ethereum wallets, he recommends a similar structure: autonomous transactions capped at $100 per day, with larger amounts requiring manual verification.
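The recommended structure can be sketched in a few lines. This is a minimal illustration of the policy described above, not an actual wallet API: the class and method names are invented, and a real implementation would enforce the cap at the signing layer rather than in application code.

```python
from datetime import date

class GatedWallet:
    """Sketch of the approval gate described above: the agent may spend
    autonomously up to a daily cap, and anything larger is queued until
    a human explicitly releases it (the human acting as second factor)."""

    DAILY_CAP_USD = 100.0

    def __init__(self):
        self.spent_today = 0.0
        self.day = date.today()
        self.pending = []  # transfers awaiting human approval

    def request_transfer(self, amount_usd, destination):
        # Reset the running total when the day rolls over.
        if date.today() != self.day:
            self.day, self.spent_today = date.today(), 0.0
        # Under the cap: the agent may send autonomously.
        if self.spent_today + amount_usd <= self.DAILY_CAP_USD:
            self.spent_today += amount_usd
            return ("sent", amount_usd, destination)
        # Over the cap: hold the transfer for manual verification.
        self.pending.append((amount_usd, destination))
        return ("needs_human_approval", amount_usd, destination)

    def approve(self, index):
        """Human-side step: release one held transfer."""
        amount_usd, destination = self.pending.pop(index)
        return ("sent", amount_usd, destination)
```

The same pattern generalizes to the messaging daemon: reads pass through freely, while any outbound action lands in a pending queue until a human signs off.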
This approach aligns with Buterin’s existing strategy for managing his own cryptocurrency, in which 90% of his funds sit in a multisig Safe wallet, with keys distributed among trusted individuals to avoid a single point of failure.
Buterin framed these AI safeguards as an extension of that philosophy into the realm of autonomous agents. He opened the post by citing security researchers’ findings that roughly 15% of skills published for OpenClaw, currently the fastest-growing repository on GitHub, contained harmful commands, some of which covertly exfiltrated user data without notice.
Reflecting on these concerns, Buterin remarked: “I come from a mindset of being deeply scared that just as we were finally making a step forward in privacy with the mainstreaming of end-to-end encryption and more local-first software, we are on the verge of taking 10 steps backward by normalizing feeding your entire life to cloud-based AI.”