8news



DeepSeek’s Claude Code Killer Goes Viral Overnight

AI • AI Revolution • May 7, 2026 • 12:58

TL;DR

DeepSeek TUI, an open-source terminal-based AI coding agent built around DeepSeek V4, surged to the top of GitHub trending in May 2026, drawing global attention for both its rapid adoption and unconventional creator.

KEY POINTS

Explosive GitHub Growth

The project rapidly gained traction in early May, surpassing 10,200 stars after adding more than 2,400 stars in a single day. Earlier the same day, it had been reported near 8,700 stars, highlighting unusually fast adoption across developer communities on GitHub, Reddit, and X.

Terminal-Native AI Coding Workflow

DeepSeek TUI enables developers to interact with an AI agent directly inside the terminal, eliminating the need for browser-based workflows. It can read and edit files, execute shell commands, manage Git repositories, apply patches, and coordinate subtasks, all within a keyboard-driven interface.

Built Around DeepSeek V4

Unlike multi-model tools, the system is tightly optimized for DeepSeek V4, leveraging its 1 million token context window, low-cost pricing, and distinct Pro and Flash model variants. This focused design is seen as a key factor behind its appeal.

Unconventional Creator Story

The project was developed by Hunter Bound, an American programmer with a background in music education and law, currently studying patent law. The contrast between his nontraditional technical background and the sophistication of the tool amplified interest in the project.

AI-Assisted Development Loop

Bound reportedly used AI tools to help build the system itself, creating a feedback loop where AI assists in developing a tool that enables further AI-driven coding. This “self-iterating” approach became a focal point of discussion in developer circles.

Rust-Based Dual Architecture

The tool uses a dual-binary Rust architecture, consisting of a dispatcher CLI and a runtime engine. The dispatcher manages sessions and configuration, while the runtime handles the agent loop and terminal UI, built using the Ratatui framework for native performance.
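The transcript later describes the runtime's agent loop routing every action through a typed tool registry (shell commands, file operations, Git actions, and so on). As a rough illustration only — the names and shapes below are hypothetical, not DeepSeek TUI's actual API — such a registry can be as simple as a dispatch table keyed by tool name:

```python
from dataclasses import dataclass

# Hypothetical sketch of a typed tool registry: every tool call is
# dispatched by name through one table, so the agent loop can log,
# stream, and refuse anything that is not registered.

@dataclass
class ToolCall:
    name: str
    args: dict

REGISTRY = {
    "shell": lambda args: f"ran: {args['cmd']}",
    "read_file": lambda args: f"read: {args['path']}",
    "git": lambda args: f"git {args['subcommand']}",
}

def dispatch(call: ToolCall) -> str:
    """Look up the handler for a tool call; unknown tools are rejected."""
    handler = REGISTRY.get(call.name)
    if handler is None:
        raise ValueError(f"unknown tool: {call.name}")
    return handler(call.args)
```

Funneling everything through a single table is what makes it possible to stream each result back into the transcript in real time, as the article describes.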

Real-Time Reasoning Visibility

A standout feature is live reasoning streaming, where the model’s internal reasoning process is displayed alongside outputs. This allows developers to observe decision-making steps, tool calls, and intermediate logic in real time.

Context Compression and Cost Control

The system addresses scaling issues in long sessions by tracking context usage and compressing older data. It can shrink tool outputs without invoking the model, reducing token costs and avoiding unnecessary summarization.
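The shrink-before-summarize idea can be illustrated with a small sketch (the logic and names here are hypothetical, not DeepSeek TUI's actual implementation): old tool outputs are truncated to a one-line stub first, and a paid model summary is only needed if the session still exceeds its budget.

```python
# Conceptual sketch of shrinking old tool outputs before resorting to a
# paid model-generated summary. All names are invented for illustration.

def compress_history(entries, keep_recent=3, budget=2000):
    """entries: list of (role, text) tuples, oldest first.
    Truncates stale tool outputs to a one-line stub; returns the
    compressed history plus a flag saying whether a (paid) model
    summary would still be needed to fit the character budget."""
    compressed = []
    cutoff = len(entries) - keep_recent  # the most recent entries stay intact
    for i, (role, text) in enumerate(entries):
        if role == "tool" and i < cutoff and len(text) > 80:
            first_line = text.splitlines()[0][:80]
            text = f"{first_line} … [{len(text)} chars elided]"
        compressed.append((role, text))
    total = sum(len(t) for _, t in compressed)
    needs_model_summary = total > budget  # only then pay for summarization
    return compressed, needs_model_summary
```

The key point is that the cheap, model-free truncation runs first; the expensive summarization call is a fallback, not the default.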

Loop Detection and Safety Controls

To prevent runaway automation, the agent detects repeated failed tool calls. It intervenes after repeated attempts, issuing warnings or halting execution, a safeguard critical for tools with system-level access.
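A conceptual sketch of such a guard, using the thresholds quoted later in the transcript (block the third identical call within one request, warn on the third consecutive failure, halt on the eighth) — the class and method names are invented for illustration:

```python
from collections import Counter

# Hypothetical loop guard, not DeepSeek TUI's actual code: detects
# repeated identical tool calls and escalates on repeated failures.

class LoopGuard:
    WARN_AT = 3   # warn on the third failure of the same tool
    STOP_AT = 8   # halt execution on the eighth

    def __init__(self):
        self.calls = Counter()
        self.failures = Counter()

    def check_repeat(self, tool, args) -> str:
        # Identical tool + identical arguments counts as a repeat.
        sig = (tool, tuple(sorted(args.items())))
        self.calls[sig] += 1
        return "block" if self.calls[sig] >= 3 else "allow"

    def record_failure(self, tool) -> str:
        self.failures[tool] += 1
        n = self.failures[tool]
        if n >= self.STOP_AT:
            return "stop"
        if n >= self.WARN_AT:
            return "warn"
        return "continue"
```

For an agent with shell and file access, this kind of circuit breaker is what keeps a stuck loop from silently burning tokens and money.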

Multi-Agent Task Distribution

Through a feature called RLM, tasks can be distributed across multiple sub-agents running on the cheaper DeepSeek V4 Flash model. This enables parallel exploration of solutions at significantly lower cost than relying solely on higher-tier models.
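Taking the Flash prices quoted later in the transcript at face value ($0.14 per million input tokens, $0.28 per million output tokens), the economics of fanning out sub-agents are easy to check with back-of-the-envelope arithmetic:

```python
# Sanity-check of the quoted DeepSeek V4 Flash pricing. The token counts
# in the example are arbitrary assumptions, not figures from the article.

FLASH_IN = 0.14 / 1_000_000   # dollars per input token
FLASH_OUT = 0.28 / 1_000_000  # dollars per output token

def flash_cost(input_tokens: int, output_tokens: int, agents: int = 1) -> float:
    """Dollar cost of running `agents` parallel Flash sub-agents,
    each consuming the given input/output token counts."""
    per_agent = input_tokens * FLASH_IN + output_tokens * FLASH_OUT
    return agents * per_agent

# Example: six sub-agents, each reading 50k tokens and writing 5k,
# come to roughly five cents in total at these rates.
```

At prices in this range, even a generous fan-out of parallel sub-agents stays in the cents, which is the core of the cost argument made in the transcript.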

Flexible Operating Modes

The tool offers Plan Mode (read-only analysis), Agent Mode (approval-based execution), and YOLO Mode (fully autonomous operation). These modes balance safety and automation depending on user needs.
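The three modes map naturally onto a small permission gate. The sketch below uses the mode names from the article, but the gating logic itself is an assumption about how such a tool could behave:

```python
# Hypothetical permission gate for the three modes described above.
# Tool names and the read-only set are illustrative assumptions.

READ_ONLY = {"read_file", "search", "inspect"}

def needs_approval(mode: str, tool: str) -> bool:
    if mode == "plan":
        # Plan Mode: analysis only; mutating tools are simply forbidden.
        if tool not in READ_ONLY:
            raise PermissionError(f"{tool} is not allowed in plan mode")
        return False
    if mode == "agent":
        # Agent Mode: mutating tools require explicit user approval.
        return tool not in READ_ONLY
    if mode == "yolo":
        # YOLO Mode: act automatically inside trusted projects.
        return False
    raise ValueError(f"unknown mode: {mode}")
```

Centralizing the decision in one function is also what makes fixes like the YOLO-mode Git-approval bug mentioned in the transcript tractable: there is a single choke point to audit.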

Global and Cross-Platform Reach

The project includes multilingual support, a Chinese-language README, and installation options via npm, Cargo, and Homebrew. Cross-platform fixes, including Windows and ARM Linux support, indicate active maintenance.

Integrated Development Tooling

DeepSeek TUI connects with language servers such as Rust Analyzer, TypeScript Language Server, and others to surface real-time diagnostics. It also supports session persistence, rollback checkpoints, and integration via Model Context Protocol (MCP).

CONCLUSION

DeepSeek TUI combines rapid open-source momentum with novel technical features and an unexpected origin story, positioning it as a potentially significant entrant in AI-assisted software development.

Full transcript

DeepSeek just got its own version of Claude Code, and somehow the whole thing turned into one of the strangest developer stories of the year. The project is called DeepSeek TUI. It is an open-source, terminal-native AI coding agent built around DeepSeek V4, and in the last few days it exploded on GitHub. According to JIDS, on May 6th it reached the top of GitHub trending, gained 2,434 stars in a single day, and pushed past 10,200 total stars. Another report earlier that same day had it at around 8,700 stars, which shows how fast this thing was moving. One moment it looked like another DeepSeek wrapper, and then suddenly developers on GitHub, Reddit, X, and Chinese tech communities were all talking about it. So the basic idea is this: instead of opening a browser, copying code into a chatbot, waiting for suggestions, and then manually applying everything, DeepSeek TUI lets you talk to DeepSeek directly inside your terminal. It can read and edit files, run shell commands, search the web, manage Git repositories, apply patches, handle tasks, and coordinate sub-agents from a keyboard-driven terminal interface. The comparison with Claude Code is obvious. It sits in the same category as Claude Code, Aider, Cline, and OpenCode, except this one is heavily designed around DeepSeek V4 instead of trying to be a fully generic multi-model tool. That DeepSeek-native focus is probably why it caught so much attention. DeepSeek V4 came with a 1 million token context window, low pricing, and a lot of excitement around its Pro and Flash versions. DeepSeek TUI tries to turn those model strengths into a real coding workflow. It is not an official DeepSeek product. It was created by Hunter Bound, an independent American developer using the GitHub handle HMBbound. The project initially launched on January 19th, 2026, and by early May it had already gone through dozens of releases.
One technical writeup mentions version 0.8.8 across 37 releases, while Jidditic says the project updated to version 0.8.13 on the morning of May 6th with fixes focused on runtime and TUI-related issues. The stranger part is the person behind it. Hunter Bound is not the usual AI researcher with years of compiler work behind him. His background is music education and law. He earned a bachelor's degree in music education from the University of North Texas in 2015, then a master's degree in music education from Southern Methodist University in 2019. He is now studying at SMU's Dedman School of Law, described in another report as a second-year patent law student. According to the reports, he built DeepSeek TUI using AI-assisted coding. He has described that workflow almost like an early version of AI self-iteration, where AI helps build the tool that later helps other people code with AI. That detail made the story more viral because it turned the project into more than just a coding tool. A patent law student with a music background ships a Rust-based AI coding agent. It goes number one on GitHub trending. Chinese developers start noticing it, and then he starts learning Chinese to communicate with the DeepSeek community. On May 3rd, Bound wrote that two days earlier he was nobody and that the previous two days had been the craziest of his life. He also posted that he wanted to connect with Chinese developers and called them Whale Brothers, which immediately became a small meme. Some people found the phrase funny, some asked where it came from, and netizens on X later shared that he had managed to get a WeChat account and started communicating with Chinese developers. The project also has a Chinese-friendly side. Bound wrote a README.zh-CN.md file, and the open-source homepage includes a mirror-friendly installation version for Chinese developers.
Even the contributor list became part of the discussion because it included Claude and Gemini, most likely meaning AI-assisted contribution traces. Now, technically, DeepSeek TUI is more interesting than a basic wrapper. Under the hood, it uses a dual-binary Rust architecture: there is a DeepSeek dispatcher CLI, and there is a DeepSeek TUI runtime. The dispatcher handles authentication, configuration, model selection, and session management. The runtime handles the actual agent loop and the terminal interface. If one binary runs without the other, it throws a missing-companion-binary error because both are required. The UI is built with Ratatui, so it is a native Rust terminal app, not an Electron app, not a Python daemon, and not a Node process sitting in the background. Installation can be done via npm (npm install -g), through Cargo with separate installs for the dispatcher CLI and the runtime, or through Homebrew on macOS. One version even fixed Windows path separators and ARM64 Linux binary availability, which suggests the developer is actually maintaining cross-platform support. The internal flow is pretty straightforward. The dispatcher launches the companion runtime, which connects the Ratatui interface to an asynchronous engine and an OpenAI-compatible streaming client. Tool calls move through a typed registry covering shell commands, file operations, Git actions, web search, URL fetching, sub-agent sessions, MCP server connections, and RLM. The results stream back into the transcript in real time, so the user can see what is happening instead of waiting for one giant response at the end. Now, DeepSeek V4 shapes almost everything inside this tool. A lot of coding apps say they support DeepSeek, but in many cases that just means they connect to DeepSeek's API and treat it like any other model. DeepSeek TUI is different. It is designed around the things that make DeepSeek V4 special.
The huge 1 million token context window, cheaper cached tokens, the low-cost V4 Flash model, and the stronger V4 Pro reasoning mode. It even tracks cache hits and cache misses so developers can see when the model is using cheaper cached input instead of paying full price every time. It also tries to solve one of the biggest problems with AI coding agents: the conversation gets too big. When an agent works on a project for a long time, it keeps collecting files, tool results, command outputs, errors, fixes, and explanations. At some point, the session can become messy and expensive. DeepSeek TUI can track how much context is being used and compress older parts of the session. In version 0.8.13, it added a smarter cleanup system. Instead of paying the AI to summarize everything, the tool can first shrink old tool results by itself. So, for example, instead of keeping a huge old command output in full, it keeps a short one-line version and saves the newest important data. If that is enough to reduce the session size, it can skip the paid AI summary completely. It also has protection against one of the most annoying agent problems: getting stuck in a loop. Sometimes an AI coding agent keeps running the same tool or the same command again and again, even though it already failed. DeepSeek TUI now watches for that. If the same tool with the same arguments appears for the third time in one user request, it stops the repeat and inserts a correction message instead. If a tool keeps failing, it warns on the third try and stops on the eighth. That sounds technical, but it matters a lot. When an AI agent has access to your files and terminal, you want it to be smart enough to stop wasting time and money. The most attention-grabbing feature is the live reasoning stream. DeepSeek V4 Pro can send its reasoning separately from the final answer, and DeepSeek TUI shows that reasoning directly in the terminal.
So instead of only seeing the final result, developers can watch the model work through the problem, decide what to check, call tools, and then give the answer. One technical writeup says the changelog even handles cases where the model is reasoning before calling a tool, even when it has not shown a normal message yet. That is the kind of detail most basic wrappers would probably miss. DeepSeek TUI also has three main working modes. Plan Mode is the safe mode: the agent can read your code, inspect files, search the project, and explain what it wants to do without changing anything. Agent Mode is the normal mode: it can use the full tool set, but when it wants to do something serious like edit files, run commands, or make Git changes, it asks for approval first. Then there is YOLO Mode, where the agent can act automatically inside trusted projects. That sounds risky, and it can be, which is why permission checks matter. The changelog even mentions a fix for Git commands being approved too easily in YOLO Mode, which shows how careful these tools have to be. There is also automatic model behavior. Users can type model auto, and the tool can choose the best model for each step. It can also adjust how much reasoning the model uses. With Shift+Tab, users can switch between no reasoning, high reasoning, and maximum reasoning. So if the task is simple, it can stay lighter and cheaper. If the task is harder, it can push the model into deeper thinking. Then there is RLM, shown inside the tool as RLM query. This is one of the features that makes DeepSeek TUI feel less like a basic Claude Code clone. Instead of sending everything to one main model, it can split work across one to six smaller sub-agents. These sub-agents usually run on the cheaper DeepSeek V4 Flash model. One can inspect one file, another can check a different approach, another can research something, and another can look for bugs. If a subtask needs stronger reasoning, it can be moved up to V4 Pro.
The idea is inspired by Alex Jang's RLM work and Sakana AI's novelty search research, but here it is turned into something practical for coding. The cost angle is a huge part of the appeal. DeepSeek V4 Flash is cheap enough that running multiple small agents at the same time becomes realistic. One report says V4 Flash costs around $0.14 for input and $0.28 for output per million tokens at the discounted rate. Another says running up to 16 V4 Flash subtasks can cost roughly one-third of using the Pro model for similar work. For developers watching their API bills, that is a serious selling point. The tool also connects to MCP, the Model Context Protocol. In simple terms, that means it can plug into other tools and services, similar to how other modern AI coding agents are starting to connect with outside systems. It also supports skills. A skill is basically a small instruction package that teaches the agent how to handle a certain kind of task, and developers can even install community skills from GitHub without needing a separate back-end service. It also has features built for longer work sessions. You can save a session and continue later. You can create checkpoints while working on a big task. It has its own rollback system, so before and after each round of work, it can create project snapshots. If something goes wrong, users can roll back with commands like restore and revert turn without messing with the project's normal Git setup. It also has a persistent task queue, meaning unfinished background tasks can continue after restarting the program. And for code diagnostics, it connects with tools like rust-analyzer, Pyright, TypeScript Language Server, gopls, and clangd, so it can see real coding errors and warnings after edits. And finally, it is clearly trying to be usable outside one small developer circle. It supports persistent personal notes so your preferences can stay across sessions.
It supports English, Japanese, Simplified Chinese, and Brazilian Portuguese, and it can automatically adapt to your system language. It can also run through HTTP and SSE with a command like deepseek serve-http, which means it can be used in more automated workflows without opening the full terminal interface. So even though DeepSeek TUI is still an open-source project moving fast, it is already trying to become more than a simple terminal chatbot. Also, if you want more content around science, space, and advanced tech, we've launched a separate channel for that. Links in the description. Go check it out. Now, the question is whether it stays a viral GitHub moment or turns into something people actually use every day. Let me know what you think. Subscribe for more AI updates. Thanks for watching, and I'll catch you in the next one.
