
Tech • AI • Crypto
OpenAI is reportedly developing its own AI-centric smartphone and a chip ecosystem, aiming to escape its dependence on the Apple and Google platforms and to enable fully agent-driven computing.
OpenAI is said to be working with MediaTek and Qualcomm on mobile processors, while Luxshare Precision would handle assembly, with mass production planned for 2028. Final chip specifications and supplier choices are expected in late 2026 or early 2027, a sign of concrete supply-chain planning. The move reflects a strategic drive to control both hardware and software.
Today's assistants remain constrained by iOS and Android rules (sandboxing, permissions, fragmented ecosystems). Even simple multi-step tasks, such as booking or paying, require navigating between several apps. This prevents agents from acting fluidly despite a good understanding of user intent.
Smartphones are the most data-rich personal devices: location, communications, payments, health, daily routines. That makes them central to any advanced agent. Controlling the device would allow continuous context integration and proactive, rather than merely reactive, action.
The envisioned model replaces app-centric interfaces with intent-driven interaction. The user gives a command and the AI coordinates the relevant services automatically. Apps persist, but in the background, reducing manual navigation.
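A minimal sketch of what such intent-driven orchestration could look like. Everything here is invented for illustration: `find_restaurant`, `pay`, and `notify` are hypothetical stand-ins for background services, not anything OpenAI has described.

```python
# Hypothetical sketch: one high-level user intent is dispatched to several
# background services, instead of the user opening each app manually.

def find_restaurant(cuisine):
    # Stand-in for a search-service call.
    return {"name": "Chez Demo", "price": 40}

def pay(amount):
    # Stand-in for a payment-service call.
    return {"status": "ok", "amount": amount}

def notify(contact, message):
    # Stand-in for a messaging-service call.
    return f"to {contact}: {message}"

def handle_intent(intent):
    """Coordinate several services for a single high-level request."""
    if intent["action"] == "dinner":
        place = find_restaurant(intent["cuisine"])
        receipt = pay(place["price"])
        note = notify(intent["guest"], f"Dinner at {place['name']}")
        return {"place": place["name"], "paid": receipt["status"], "sent": note}
    raise ValueError("unknown intent")

result = handle_intent({"action": "dinner", "cuisine": "thai", "guest": "Alex"})
print(result)
```

The point of the sketch is the shape: the user expresses one goal, and the orchestration layer chains search, payment, and messaging without the user visiting each app.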
The project rests on a hybrid architecture combining on-device AI and cloud processing. Light tasks run locally; complex reasoning is delegated to the cloud. That demands processors optimized for power efficiency, memory management, and continuous context, not just raw performance.
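The hybrid split can be pictured as a toy router that keeps cheap requests on a small local model and escalates the rest. The threshold, the complexity heuristic, and the model stubs below are all invented for illustration; nothing here reflects an actual OpenAI design.

```python
# Toy router for a hybrid on-device / cloud agent: short, simple requests
# stay local to save power; complex reasoning is escalated to the cloud.

LOCAL_TOKEN_BUDGET = 64  # invented threshold, purely illustrative

def estimate_complexity(prompt: str) -> int:
    # Crude proxy: word count stands in for real cost estimation.
    return len(prompt.split())

def run_local(prompt: str) -> str:
    return f"[on-device model] {prompt[:20]}..."

def run_cloud(prompt: str) -> str:
    return f"[cloud model] {prompt[:20]}..."

def route(prompt: str) -> str:
    if estimate_complexity(prompt) <= LOCAL_TOKEN_BUDGET:
        return run_local(prompt)
    return run_cloud(prompt)

print(route("set a timer for ten minutes"))  # handled on-device
print(route(" ".join(["word"] * 200)))       # escalated to the cloud
```

A real system would route on battery state, latency targets, and privacy constraints as well, which is why the article stresses power management and memory hierarchy over raw performance.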
Even though AI chips earn more per unit, the global premium smartphone market, at 300 to 400 million units per year, offers enormous scale. Even a modest share could generate significant revenue for OpenAI.
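Kuo's comparison can be made concrete with a back-of-the-envelope calculation. Only the 30-40x revenue ratio and the 300-400 million unit market come from the report; the per-unit processor price and the 5% share below are invented purely to make the ratio tangible.

```python
# Back-of-the-envelope: per-unit revenue vs market scale.
# ASSUMPTION: the per-unit phone-processor revenue is invented; only the
# 30-40x ratio and the 300-400M unit market size come from the report.

phone_chip_revenue = 100                    # invented per-unit figure (USD)
ai_chip_revenue = 35 * phone_chip_revenue   # midpoint of the 30-40x ratio

market_units = 350_000_000   # midpoint of 300-400M premium phones/year
share = 0.05                 # a "modest" 5% share, for illustration

phone_chip_total = market_units * share * phone_chip_revenue
print(f"5% share of processors: ${phone_chip_total / 1e9:.2f}B/year")
print(f"equivalent high-end AI chips: {phone_chip_total / ai_chip_revenue:,.0f} units")
```

Whatever the real per-unit number, the structure of the argument is the same: a single AI chip earns far more per unit, but even a small slice of a 300-400 million unit market compounds into a large business line.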
For Luxshare, the partnership would let it move beyond its role in Apple's supply chain and become a key manufacturer of AI-native devices, strengthening its global position.
OpenAI has reportedly assembled a hardware team of 200 people, with design by LoveFrom, the studio led by Jony Ive. Former Apple executives such as Tang Tan and Evans Hankey have joined the project. Other products are planned: a $200-300 smart speaker (2027), AI earphones, smart glasses (2028), and experimental devices such as a smart lamp.
In China, companies are moving forward through partnerships. ByteDance and ZTE have launched the Nubia AI phone, with a "GUI agent" that simulates user actions. Initial demand exploded, with resale prices jumping from 3,500 yuan ($480) to 36,000 yuan (~$5,000). But WeChat and Alipay blocked the system over security concerns.
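The GUI-agent approach amounts to a perceive-decide-act loop over the screen rather than an API call. Here is a minimal sketch; `read_screen` and `tap` are invented stand-ins (the real Doubao implementation is not public), and the blocking behavior described above shows up naturally as a missing button.

```python
# Sketch of a GUI agent: instead of calling an app's API, it reads the
# screen and simulates the taps a human would make. All names here are
# invented stand-ins; the real system's internals are not public.

def read_screen(state):
    # Stand-in for OCR / accessibility-tree capture of the visible UI.
    return state["visible_buttons"]

def tap(state, button):
    # Stand-in for injecting a touch event; advances the simulated UI.
    state["log"].append(button)
    state["visible_buttons"] = state["flow"].get(button, [])

def run_task(state, plan):
    """Follow a plan, tapping only buttons actually present on screen."""
    for step in plan:
        if step not in read_screen(state):
            # This is where an app that blocks the agent stops the task.
            raise RuntimeError(f"button {step!r} not on screen (blocked?)")
        tap(state, step)
    return state["log"]

ui = {
    "visible_buttons": ["Search"],
    "flow": {"Search": ["Flight A", "Flight B"], "Flight A": ["Pay"]},
    "log": [],
}
print(run_task(ui, ["Search", "Flight A", "Pay"]))
```

The speed and the fragility both fall out of this loop: no per-app API work is needed, but any app that changes or hides its UI, as WeChat and Alipay effectively did, breaks the agent immediately.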
The Chinese model accelerates deployment by bypassing traditional APIs, but it carries risks: payment security, data protection, and platform integrity. It illustrates the tension between speed of innovation and stability.
Two strategies are emerging: transform Android from within, or build new AI-first hardware ecosystems. Both aim to make AI the primary interface layer.
OpenAI's strategy also appears to be shifting: fewer references to artificial general intelligence (AGI), more emphasis on gradual deployment and societal adaptation. This suggests a move from a single breakthrough moment to the continuous integration of ever more capable systems.
OpenAI's hardware ambitions reflect a broader turning point: AI may no longer fit into existing platforms, pushing toward devices built around intelligent agents rather than apps.
OpenAI has a problem. It can build the smartest AI in the world, but most people still use it through phones controlled by Apple and Google. ChatGPT can get faster, smarter, and more agentic, but it is still stuck behind someone else's lock screen, inside someone else's rules. Now that may be changing. According to a new industry survey from TF International Securities analyst Ming-Chi Kuo, OpenAI is working with MediaTek and Qualcomm on mobile phone processors, while Luxshare Precision has reportedly won the exclusive system co-design and manufacturing contract, and the phone is expected to enter mass production in 2028. The final chip specs and supplier choices are expected by the end of 2026 or the first quarter of 2027, which means this already sounds like real supply-chain planning, not just a concept.

And the reason is simple. An AI agent cannot reach its full potential as just another app. If OpenAI wants ChatGPT to understand your day, use your apps, manage tasks, and actually get things done, it may need to control the device itself. Right now, even a powerful AI assistant on an iPhone is still trapped inside Apple's rules. It has to deal with app permissions, sandboxing, privacy restrictions, and all the usual walls between one app and another. So even something as simple as ordering food, checking your schedule, comparing options, making a payment, and sending a message can become a messy chain of steps. The AI may understand what you want, yet the phone still treats it like another app on the shelf.

That is why the phone is so important. A speaker can help at home. Glasses can help when you are walking around. Headphones can handle small moments during the day. Still, the phone is the device with the highest information density about your life. It knows your location, your calendar, your messages, your payment habits, your health data, your photos, your apps, and your routines.
For an AI agent, that context is the whole point. Ming-Chi Kuo's argument is that OpenAI needs full control over the operating system and hardware if it wants the agent to offer truly complete services. And that makes sense. If the AI is supposed to become the main interface, the old model of phones starts to look outdated. The current home screen is basically a shelf full of icons: you search for an app, tap it, move through menus, type things, copy things, switch apps, and manually connect everything together. An OpenAI phone would likely flip that logic. Instead of opening an app, you tell the phone what you want done, and the AI decides which tools, apps, services, and cloud systems are needed. Apps may still exist, of course; you may just stop thinking about them as separate places you personally need to visit.

The technical version of that idea depends on a tight mix of cloud AI and on-device AI. The phone processor has to constantly understand context without destroying battery life. That means power management, memory hierarchy, edge-side models, and smaller local models become extremely important. Light tasks could run locally; more complex or high-intensity reasoning could move to the cloud. That is also why MediaTek and Qualcomm being involved matters. OpenAI would not just be building a normal phone with a chatbot slapped on top; the processor itself would need to be designed around agentic behavior.

Kuo also gave an interesting comparison with Google and MediaTek's TPU project. He said the revenue from a single AI chip is roughly equal to the revenue from 30 to 40 AI-agent mobile phone processors. That gives some perspective: one high-end AI chip can bring in far more money per unit, but the phone market has scale. OpenAI is reportedly targeting the global high-end mobile phone market, which is around 300 million to 400 million units per year. Even capturing a small piece of that would create a huge new business line.
For Luxshare, this could be even more strategic than financial in the short term. In Apple's supply chain, Luxshare has had a hard time surpassing Hon Hai (Foxconn) in iPhone assembly. An OpenAI phone gives Luxshare a chance to become the main manufacturer of what could be the next generation of AI-native phones. That is a very different position from being just one supplier inside Apple's giant machine.

And this also fits into a wider hardware push that has been building for months. Earlier this year, The Information reported that OpenAI had created a hardware team of around 200 people, with product design handled by LoveFrom, the studio led by former Apple design chief Jony Ive. The team reportedly has a lot of Apple DNA. Tang Tan, a 25-year Apple veteran who worked on iPhone and Apple Watch product design, is involved. Evans Hankey, who led Apple's industrial design team after Jony Ive left, is also part of the picture.

The rumored product lineup is wider than just a phone. The first product is expected to be a smart speaker priced around $200 to $300, with shipping expected in February 2027. Then there are AI headphones, reportedly codenamed "Dime" or "Sweet P", with a metal cobblestone-style shape and capsule-like earphones placed behind the ears, powered by a 2-nanometer chip. Smart glasses are expected to enter mass production in 2028, aimed directly at Meta's Ray-Ban glasses and Apple's rumored N50 glasses. There is also a smart lamp prototype, with the final launch still uncertain, and an AI pen or pocket device that Sam Altman has hinted at several times.

Altman once described smartphones as being like Times Square: full of information, bombardment, and fragmented attention. He contrasted that with the kind of device OpenAI wants to build, something more like a lakeside cottage where you can close the door when you need to focus. That line is interesting because it shows how OpenAI may try to frame this phone. It will probably not be sold as another smartphone.
It will be sold as a calmer, more intelligent, more agent-driven way to use technology. Of course, saying that is much easier than building it. Apple has spent years perfecting the iPhone, the ecosystem, the app model, the chips, the supply chain, and the privacy story. OpenAI is trying to enter one of the hardest consumer electronics categories in the world. That may explain why it has been aggressively hiring Apple talent. According to The Information, OpenAI hired more than 20 hardware experts from Apple last year alone. The report even claimed Apple became so worried about more executives leaving that it canceled an annual closed-door meeting originally planned for China.

The supply-chain side is moving too. Luxshare has reportedly won assembly work for at least one OpenAI device, and Goertek is also in talks to provide components such as speaker modules for future products. That matters because these are companies with real experience building Apple products like the iPhone, AirPods, HomePod, and Apple Watch. In a way, OpenAI appears to be using Apple's talent network and Apple's supplier network to build something that may one day compete with Apple's most important product.

And this whole idea is not only happening in the United States. China is already testing a faster and more radical version of the AI phone through ByteDance and ZTE. At the end of last year, ByteDance worked with ZTE to launch the first-generation Doubao phone, the Nubia AI phone. The engineering prototypes reportedly sold out immediately. The original price was 3,500 yuan, or roughly $480, yet the resale price was reportedly pushed up to 36,000 yuan, around $5,000, at one point, and ZTE's stock even hit its daily limit. The Doubao approach is very aggressive: instead of waiting for every app to create perfect APIs for AI agents, the model reads the screen and simulates manual actions through a GUI agent.
That means the AI can compare prices across platforms, organize files, send more polished WeChat replies, book flights, order food, and perform tasks by operating the phone more like a person would. The advantage is speed: people could actually test the thing at the end of last year. The price of that speed is security and compatibility. WeChat, Alipay, Taobao, and banking apps reportedly started blocking the Doubao phone for security reasons. From their perspective, it makes sense: if an AI can bypass normal app boundaries and imitate user behavior, that creates a serious problem for payment apps, banks, messaging platforms, and e-commerce systems. It may be useful, but it also punches a hole through the normal permission model.

The Doubao phone 2.0 is already in development and is expected in the second half of the second quarter of this year. There are also reports that ByteDance is trying to expand the idea to more phone manufacturers. Lanjing News reported that Honor was actually the first phone maker ByteDance contacted, although Honor denied the rumor and said any strategic cooperation would be announced through official channels. One source described Honor's caution pretty well: Doubao can be radical as an exploratory engineering machine, yet Honor has hundreds of millions of users, so stability, compatibility, and security problems could turn into massive user complaints. According to the blogger Digital Chat Station, Vivo is now in talks with ByteDance, and other top-five domestic manufacturers are lining up. His wording was that a wave of AI OS and Doubao AI phones is approaching.

So China is moving through partnerships with existing Android phone makers, while OpenAI seems to be taking the slower route: build the phone, the software layer, the processor requirements, and the ecosystem from the ground up. Both paths point to the same conclusion: AI at the app level is not enough.
If the agent is only another feature inside someone else's phone, it remains limited. To make AI the soul of the device, you either transform existing phones from inside Android, as Doubao is trying to do, or you build a new device from scratch, as OpenAI appears to be planning.

This also connects to the strange mood around Sam Altman's recent posts. He posted that after AGI, no one is going to work and the economy is going to collapse. Then he also posted that he is switching to polyphasic sleep because GPT-5.5 in Codex is so good that he cannot afford to sleep for long stretches and miss out on working. The irony is obvious: the person building technology that may one day make work obsolete is also saying the technology is so useful that he wants to work even more. Altman has said before that AGI could arrive by 2030, although many tech leaders are skeptical of the whole framing. Peter Steinberger, the creator of Moltbot, recently argued on the Y Combinator podcast that the industry should focus more on specialized intelligence instead of generalized intelligence. Anthropic president Daniela Amodei has called the AGI concept outdated, while Google DeepMind CEO Demis Hassabis has argued that AGI cannot be achieved without world models.

And that leads into another quiet shift. OpenAI updated its operating principles. In the 2018 charter, AGI was mentioned 12 times; in the 2026 version, it appears only twice. That is a major change in tone. The old charter was built around a future AGI breakthrough; the new version is much more focused on iterative deployment, meaning society has to adapt step by step as stronger AI systems arrive. One line from the new principles says the world needs to grapple with each successive level of AI capability. That is a very different framing from waiting for one giant AGI moment.
It also removed a notable promise from the 2018 charter, where OpenAI said that if another safety-focused research institute got close to AGI first, OpenAI would stop competing and help. The 2026 version no longer includes that commitment. Instead, it talks about cases where trading off some empowerment for more resilience may be necessary.