8news

Anthropic Claude Co-Work, Memory & Opus 4.7 Effort Scaling

Anthropic · Friday, May 8, 2026 · 10 videos

Briefing


Claude Co-Work turns AI into an operator

Anthropic introduced Claude Co-Work, shifting AI from chat assistant to autonomous operator. The system executes workflows across local files, Google Drive, Notion, Slack, and Chrome with user-approved access. It follows a plan-first model, outlining steps before acting and requesting permission for changes. This marks a move toward supervised autonomy where users delegate outcomes rather than individual actions.
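The plan-first model can be pictured as a simple loop: propose steps, get user approval, then execute. The sketch below is illustrative only, not Anthropic's implementation; every name in it is hypothetical.

```python
# Hypothetical plan-first agent loop: outline steps, ask approval, then act.
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list[str]
    approved: bool = False

def propose_plan(goal: str) -> Plan:
    # In a real system the model would generate these; here they are hard-coded.
    return Plan(steps=[f"inspect files for: {goal}", "draft changes", "apply changes"])

def run(goal: str, approve) -> list[str]:
    plan = propose_plan(goal)
    plan.approved = approve(plan)      # user reviews the outline before anything runs
    if not plan.approved:
        return []
    executed = []
    for step in plan.steps:
        executed.append(step)          # each step would call a tool (Drive, Slack, ...)
    return executed

done = run("clean up meeting notes", approve=lambda plan: True)
```

The key property is that rejecting the plan leaves the environment untouched: the user delegates the outcome, but no change happens without sign-off.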

Opus 4.7 scales with effort

Anthropic highlighted test-time compute as a new scaling frontier using Claude Opus 4.7. Raising the effort level scaled compute from about 4,600 tokens in 50 seconds at the low end to roughly 10× that at the high end, yielding significantly better outputs. The model allocates thinking, tool, and text tokens to balance reasoning and execution. Gains come with clear trade-offs in latency and cost, reframing performance tuning as a runtime decision.
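The budget-allocation idea can be made concrete with a toy calculation. The ~4,600-token low end and ~10× high end come from the talk; the 60/25/15 split across thinking, tool, and text tokens is purely an assumption for illustration.

```python
# Illustrative only: dividing an effort-dependent token budget across thinking,
# tool, and text output. The split ratios are assumptions, not Anthropic's numbers.
def allocate(effort: str) -> dict[str, int]:
    budgets = {"low": 4_600, "high": 46_000}   # ~10x compute at high effort (per the talk)
    total = budgets[effort]
    thinking = int(total * 0.60)               # assumed: most extra compute buys reasoning
    tool = int(total * 0.25)
    return {"thinking": thinking, "tool": tool, "text": total - thinking - tool}

low, high = allocate("low"), allocate("high")
```

Framed this way, "effort" is just a runtime knob on total budget, and latency/cost grow with it roughly linearly.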

Anthropic unveils memory and Dreaming

Anthropic introduced a persistent memory system paired with a process called Dreaming for continuous learning. Agents store structured knowledge like past failures, success criteria, and operational patterns in a file-system hierarchy. Over time, this enables self-improving behavior without retraining. The design supports multi-agent environments where shared memory compounds performance gains.
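A file-system memory hierarchy of this kind is easy to sketch. The layout below (`<category>/<name>.md`) and the function names are assumptions for illustration, not Anthropic's actual design.

```python
# Minimal sketch of file-system-backed agent memory, assuming a layout like
# <root>/<category>/<name>.md. Structure and names are illustrative.
import tempfile
from pathlib import Path

def remember(root: Path, category: str, name: str, note: str) -> Path:
    path = root / category / f"{name}.md"
    path.parent.mkdir(parents=True, exist_ok=True)  # categories are just directories
    path.write_text(note)
    return path

def recall(root: Path, category: str) -> dict[str, str]:
    folder = root / category
    if not folder.exists():
        return {}
    return {p.stem: p.read_text() for p in folder.glob("*.md")}

root = Path(tempfile.mkdtemp())
remember(root, "failures", "rate-limit", "Back off 60s when the API returns 429.")
lessons = recall(root, "failures")
```

Because the store is plain files, any agent (or several at once) can read the same lessons, which is what lets shared memory compound across a multi-agent environment.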

Activation translation exposes AI reasoning

Researchers developed activation translation to interpret internal neural states in language models. The method converts activations into human-readable reasoning, then reconstructs them to validate accuracy. This two-step loop offers a new way to audit how models think during safety tests. It addresses limits of behavioral evaluation, where outputs alone may hide underlying intent.
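The two-step audit loop (translate, then reconstruct and compare) can be shown with a toy model. The nearest-neighbor codebook below is a stand-in for illustration, not the researchers' actual method.

```python
# Toy illustration of the translate-and-reconstruct audit loop. A labeled
# "codebook" stands in for the real activation-to-text translator.
CODEBOOK = {"refusing": [1.0, 0.0], "complying": [0.0, 1.0]}

def translate(activation: list[float]) -> str:
    # Step 1: map the internal state to a human-readable label (toy nearest neighbor).
    return min(CODEBOOK, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(activation, CODEBOOK[label])))

def reconstruct(description: str) -> list[float]:
    # Step 2: map the text back into activation space.
    return CODEBOOK[description]

state = [0.9, 0.1]
reading = translate(state)
rebuilt = reconstruct(reading)
error = sum((a - b) ** 2 for a, b in zip(state, rebuilt))  # low error -> faithful reading
```

The reconstruction error is the check behavioral evaluation lacks: if the readable interpretation could not regenerate the original state, the translation was not faithful.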

Google Cloud demos agentic SDLC

Google Cloud showcased end-to-end development powered by Claude-based agents. A single system spans ideation, design, coding, deployment, and analytics using tools like Cloud Run, Firestore, and BigQuery. Features such as plan mode and rapid UI prototyping compress traditional handoffs across teams. The result is a unified, AI-driven software lifecycle with fewer coordination delays.

Replit launches ByBench evaluation loop

Replit introduced ByBench, an open-source benchmark for “vibe coding” agents building apps from scratch. It uses about 20 real-world PRDs and automated AI evaluators to test functional correctness. The system complements a continuous evaluation loop driven by production data instead of static scores. This reflects a shift toward measuring real-world performance as models and tools evolve daily.
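A harness in the ByBench spirit (build an app per PRD, score it with an automated evaluator) can be sketched as follows; the function names and trivial check are hypothetical stand-ins, not Replit's code.

```python
# Hedged sketch of a PRD-driven benchmark loop. Stand-in functions replace the
# real agent build step and AI evaluator.
def build_app(prd: str) -> str:
    return f"app implementing: {prd}"        # stand-in for the coding agent

def auto_evaluate(app: str, prd: str) -> bool:
    return prd in app                        # stand-in for an AI functional-correctness check

def run_benchmark(prds: list[str]) -> float:
    passed = sum(auto_evaluate(build_app(p), p) for p in prds)
    return passed / len(prds)

score = run_benchmark(["todo list with due dates", "expense tracker with CSV export"])
```

The continuous-evaluation idea is that `run_benchmark` is re-run as models and tools change, so the score tracks a moving target rather than a frozen leaderboard.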

Enterprise agents gain shared memory

Asana detailed multi-agent systems operating on a shared organizational memory. Agents access a structured work graph linking goals, projects, and tasks, enabling coordinated execution. Persistent histories allow agents to retain value even after creators leave. The approach moves enterprises from isolated bots to collaborative, context-aware digital workers.
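The shared work graph can be sketched as a small data structure linking goals to projects to tasks, with a persistent history that outlives any one agent. The schema below is an assumption for illustration, not Asana's actual model.

```python
# Minimal sketch of a shared work graph (goals -> projects -> tasks) that
# multiple agents read and write. Schema and names are illustrative.
from collections import defaultdict

class WorkGraph:
    def __init__(self):
        self.edges = defaultdict(set)        # parent -> children
        self.history = []                    # persistent record surviving agent turnover

    def link(self, parent: str, child: str, agent: str):
        self.edges[parent].add(child)
        self.history.append((agent, parent, child))

    def tasks_for_goal(self, goal: str) -> set[str]:
        # Walk goal -> projects -> tasks so every agent sees the same context.
        tasks = set()
        for project in self.edges[goal]:
            tasks |= self.edges[project]
        return tasks

g = WorkGraph()
g.link("Q3 launch", "website refresh", agent="planner-bot")
g.link("website refresh", "update pricing page", agent="planner-bot")
tasks = g.tasks_for_goal("Q3 launch")
```

Because the edges and history live in shared state rather than inside one bot, a replacement agent inherits full context, which is the retained-value property described above.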

AI coding shifts bottlenecks beyond code

AI-assisted development is pushing constraints away from coding into verification, security, and coordination. Faster generation renders legacy processes like heavy upfront planning and strict ownership less effective. Teams are adopting just-in-time planning and resolving debates by generating working implementations. The broader shift places emphasis on systems, tooling, and infrastructure over raw model capability.

Videos covered