
OpenAI’s ChatGPT 5.5 introduces AI agents with recursive architectures, but effective, consistent, and professional use demands in-depth coding of agent instructions, memory, and workflows rather than simple prompts.
OpenAI’s Promise vs. Reality

OpenAI’s ChatGPT 5.5 promised easy creation of AI agents that can autonomously handle tasks like scheduling, presentation creation, coding, and internet research through simple natural language commands. However, practical tests reveal significant limitations, especially for professional or enterprise contexts, where naive use causes inconsistency, lack of control, and unreliable outputs.
Complex Agentic Architecture Behind AI Agents

Professional deployment of AI agents requires understanding an intricate architecture involving orchestrators, multiple agent levels, and complex memory management. Agents must coordinate to retrieve data, synthesize information, and deliver coherent responses—tasks far beyond casual prompting.
Memory Management and Context Window Constraints

AI agents suffer from context window limitations: irrelevant information quickly dilutes crucial context, and after as few as four off-topic elements, 30-60% of effective context can be lost. Memory is stored as attention blocks that are mathematically concatenated, which influences all agent behavior and responses. Without carefully managing and coordinating memory (via summary and project memory files), agents become inconsistent.
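As a rough illustration of this dilution effect (the numbers and the linear token model are illustrative, not a model of transformer attention), the idea can be sketched as:

```python
# Illustrative sketch: how off-topic turns dilute the share of the context
# window devoted to the actual task. Token counts are hypothetical.
def relevant_share(task_tokens: int, off_topic_turns: int,
                   tokens_per_turn: int = 500) -> float:
    """Fraction of the context still carrying task-relevant tokens."""
    total = task_tokens + off_topic_turns * tokens_per_turn
    return task_tokens / total

# A 1,000-token task brief after four 500-token digressions:
share = relevant_share(1000, 4)
print(f"{share:.0%} of context is still on-task")  # 33% of context is still on-task
```

The point is not the exact percentage but the trend: every unrelated exchange shrinks the weight of the instructions that actually matter.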
New Recursive Agent Functionality

ChatGPT 5.5 introduces a recursive agent architecture allowing sub-agents (level 1) to spawn further sub-agents (level 2), enabling complex chained workflows. This innovative capability requires sophisticated memory systems to maintain operational coherence across agents.
Default System Lacks Memory Coordination

By default, ChatGPT 5.5 does not configure memory management or standardized instructions. Consequently, the system instructions coded by the orchestrator vary unpredictably, producing inconsistent agent behavior and making debugging or auditing impossible in real-world scenarios.
Need for Rigorous Coding of Agent Instructions and Memory

Successful AI agent systems require carefully coded templates that define agent IDs, roles, system tags, recursion, tool permissions (e.g., web tools access), output formats, and workflow steps. This robust coding enables stability, reproducibility, and enterprise readiness, avoiding the random and inconsistent results stemming from raw natural language prompts.
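A minimal sketch of such a template, using the field names described here (agent ID, system tag, depth, recursion, tools, output format, workflow) rather than any official OpenAI schema — every value below is illustrative:

```python
# Hypothetical agent template; the keys mirror the fields discussed in the
# article (agent ID, system tag, depth, recursion, tools, output format,
# workflow steps). This is NOT an official OpenAI schema.
agent_template = {
    "agent_id": "research-agent-01",
    "system_tag": "web-researcher",   # short description/tag for the agent
    "depth": 1,                       # 1 = sub-agent, 2 = sub-sub-agent
    "recursion": False,               # may this agent spawn its own sub-agents?
    "tools": {
        "web": {"allowed": True, "domains": ["openai.com"]},
        "filesystem": {"read": True, "write": False},
    },
    "output_format": "markdown report with cited sources",
    "workflow": [
        "1. Search the allowed domains for the topic",
        "2. Summarize findings into the output format",
        "3. Write results to the shared memory file",
    ],
}
```

Writing the template out explicitly, instead of letting the orchestrator improvise it, is what makes each agent's behavior reproducible from one run to the next.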
Pragmatic Use Requires Choosing Appropriate Models per Task

The architecture allows specifying precise model versions for orchestrators and agents, optimizing cost and performance. For example, an orchestrator might use GPT-5.5 for reasoning, while sub-agents run on GPT-5.3 for simpler tasks, balancing budget and capability.
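A hedged sketch of this routing idea; the model names mirror the article's example and are assumptions, not a fixed OpenAI catalog:

```python
# Sketch: route cheaper models to simpler roles. Model names and effort
# levels follow the article's example and are illustrative.
def pick_model(role: str) -> dict:
    if role == "orchestrator":  # needs the deepest reasoning
        return {"model": "gpt-5.5", "reasoning": {"effort": "medium"}}
    # sub-agents doing simple retrieval run on a cheaper model
    return {"model": "gpt-5.3", "reasoning": {"effort": "low"}}

print(pick_model("orchestrator"))
print(pick_model("web-search-agent"))
```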
Agent “Skills” and Routing Enable Modular Design

An efficient system separates skills—containing roles, tools, memory rules—from agents. Skills act as modular units that agents call, enabling scalable and maintainable AI agent ecosystems. This modularization also aids in automating agent creation.
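One way to sketch the skills/agents split; all names and fields here are illustrative, not part of any vendor API:

```python
# Minimal sketch of the skills/agents separation: a skill bundles role,
# tools and memory rules; an agent only references a skill by name.
SKILLS = {
    "market-research": {
        "role": "collect and synthesize public market data",
        "tools": ["web_search"],
        "memory_rules": "append findings to raw memory; orchestrator summarizes",
    },
}

def build_agent(agent_id: str, skill_name: str) -> dict:
    """An agent is just an ID routed to a reusable skill."""
    return {"agent_id": agent_id, **SKILLS[skill_name]}

agent = build_agent("agent-01", "market-research")
print(agent["role"])
```

Because the skill is defined once and reused, spinning up a new agent reduces to one routing call, which is what makes automated agent creation tractable.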
Critique of Simplistic Marketing and Influencer Promises

The commonly marketed notion that "anyone can create AI agents with a few simple sentences" is misleading. Real-world AI agent workflows involve detailed configuration, memory management, and context handling. Many influencer prompts promise capabilities such as accessing closed platforms (Instagram, TikTok) or handling complex market-research tasks that AI cannot realistically perform.
Limitations in AI Access to Real-time and Proprietary Data

AI models lack access to many real-time or proprietary data sources (e.g., Google Trends, Semrush, social media insights behind login walls), making certain "automated research" claims impossible under current architectures.
Risk of Untraceable “Hallucinations” Without Proper Configuration

Without managing memory and instruction standards, AI agents produce outputs based on probabilistic token sequences, not controlled logic. This leads to hallucinations, errors, and opaque outputs that cannot be audited or reliably used, especially in business environments.
Testing Highlights Inconsistent Agent Control and Outputs

Testing shows orchestrators issue commands unpredictably. Agents self-correct on flagged errors but lack structured, consistent inputs. Such randomness wastes tokens, money, and time, undermining trust in the systems.
Best Practices for Reliable AI Agent Development

Experts recommend manual management of agent templates and memory instructions rather than relying on autogenerated skills. Stability and reproducibility come from coding and controlling: agent identity, recursion logic, reasoning depth, tool permissions, and memory architecture.
OpenAI’s Official Documentation and SDK Guidance

OpenAI provides detailed SDKs and coding guides covering agent instructions, memory management, execution templates, and orchestrator logic. Leveraging these resources is essential for building rigorous, traceable, and scalable AI agent systems.
Conclusion for Businesses and Developers

Organizations must transition from casual prompting to architecting systems that integrate memory, instruction templates, modular skills, and recursive agents in order to harness AI agents effectively. This requires training, coding expertise, and a clear understanding of system limitations.
While ChatGPT 5.5 marks progress toward autonomous AI agents, practical deployment demands detailed architecture coding, memory coordination, and consistent instructions to ensure reliability and business readiness. Simplistic prompt-driven creation remains insufficient for professional use.
OpenAI promised us: creating AI agents with ChatGPT 5.5 has never been easier. In this tutorial, we'll create AI agents using Codex, specifically Codex V2. We'll use the latest ChatGPT 5.5 model. We'll discuss agentic loops, AI agents, memory systems, workflows, and configuration. And at the end of this video, I'll show you the code you need to enter to create your agents in ChatGPT 5.5. All of this is covered in this video. OpenAI's promise with ChatGPT 5.5 was to provide the beginnings of AGI. But let me tell you, you have to take the marketing from these companies—OpenAI, Anthropic (Claude), xAI (Grok), and Google—with a grain of salt. So, what's actually behind these promises? I've tested it for you, and I'm going to explain it all. The promise was that it was possible to create agents using very simple phrases: "I want to create an agent that will help me schedule appointments, make PowerPoint presentations, and code the agent system for me." Put like that, it sounds amazing. Even a 5-year-old could code an agent. If you continue to use ChatGPT this way, I can tell you it's time to stop. But the question is: how does it work? Will you really get what you asked for? And above all, if you're a professional or a business, let me tell you right now, you absolutely must not use these methods, and I'm going to show you why. To understand the AI agent system, you have to imagine a machine that will take your text, deduce what action you want, and then act. It has to connect to the internet, retrieve documents, synthesize the information, and deliver a response. From that perspective, it sounds great. But when I actually started looking at the professional documentation, the kind used by developers, those who create SDKs, those who develop agents, I discovered a whole, extremely technical architecture for managing AI agents. And that's exactly what we're going to talk about in the second part of the video. And you'll understand why.
Intuitively, when you ask the AI to work, the question is how the AI agents will maintain consistency in their work. Yes, initially, the system will have to check your schedule. Then, you'll ask our system to create a PowerPoint presentation. After that, to update your lists and emails. How will these agents communicate with each other? The first part of the problem is how we'll manage the context window. The more an AI communicates and works, the more the window will fill up. But it won't fill up with just one action. It will fill up with very different actions: "I'm going to go on Gmail, I'm going to retrieve data via MCP, I need to extract the data, I need to understand it." In short, what studies have shown is that with just four irrelevant elements, 30 to 60% of context is lost. So the AI starts to become inconsistent, and that's a major problem. The more complex the tasks you give it, the more the model will start to misunderstand what it has to do. So why does it happen this way? Because the way memory is managed in a conversation is through matrices, like Lego bricks if you will, where each time you speak, the model retains a memory in the form of mathematical values. These are called attention blocks. These attention blocks are concatenated. We create a chain of all the interactions the model has had, and these interactions will influence all of the model's behavior and responses. And that's where the problem lies. You understood earlier that when OpenAI tells us, "An agent will retrieve the data and connect, another agent will create the schedule, and yet another agent will make a PowerPoint presentation," the problem is: how do we maintain consistency? And the question is: who manages what in this scenario? If we don't do this memory coordination, we can't allow the AI to perform coherent work over a long period. That's the first point we need to grasp. Here's what I discovered while working with the AI agent systems of ChatGPT 5.5.
To maintain consistency, the main system, called an orchestrator (if you don't know what that is, refer to the videos in the description; we've already discussed it), uses two types of memory: a memory summary, the memory-summary.md file, which is loaded and compacted at the start of the discussion, and a project memory, which is the memory.md file. But all the consistency of the AI's work is managed using what are called rollout summaries and raw memory files. These two memory files will manage the level 1 and level 2 agents. I discovered that OpenAI has integrated, for the first time, an architecture with a recursive agent function. Let me explain: Claude (with Claude 4.6, Claude 4.7) is developing this function, but it's not yet present. The recursive agent allows a sub-agent, a level 1 agent, to launch its own recursive agent, that is, another agent in the chain. This is extremely powerful because it allows for complementary work and is completely unique, requiring the activation of a new memory system. But the problem is that I've run tests, and what happens is that if you don't configure it, the ChatGPT 5.5 model doesn't configure any of the memories by default. So there's no consistency. Therefore, you can't control who did what, or how it was done. In fact, what sources provided this information? If there's a bug, you won't be able to fix it. And the big problem is that the orchestrator system automatically codes instructions that it then dispatches to the various agents. But each time it sends an instruction to an agent, it doesn't use the same prompting method. So, every time you run your prompt, you'll end up with agents coded differently. And that's exactly what you shouldn't do, because you won't be able to create agents usable in your company or work environment. Here's an example.
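As a sketch of the memory layout just described — the file names memory-summary.md and memory.md come from the discussion above, while the helper functions and the raw-file naming are my own illustration:

```python
# Illustrative sketch of the two-level memory coordination described above.
# memory-summary.md and memory.md are the file names from the article; the
# raw-<agent>.md convention and these helpers are assumptions.
from pathlib import Path

def start_session(workdir: Path) -> str:
    """Orchestrator loads the compacted summary at startup."""
    summary = workdir / "memory-summary.md"
    return summary.read_text() if summary.exists() else ""

def agent_log(workdir: Path, agent_id: str, entry: str) -> None:
    """Level 1/2 agents append to their own raw memory file."""
    raw = workdir / f"raw-{agent_id}.md"
    with raw.open("a") as f:
        f.write(entry + "\n")

def compact(workdir: Path) -> None:
    """Orchestrator folds the raw agent logs into the project memory."""
    raws = sorted(workdir.glob("raw-*.md"))
    body = "\n".join(p.read_text() for p in raws)
    (workdir / "memory.md").write_text(body)
```

The division of labor is the point: sub-agents only ever write to their own raw files, and only the orchestrator reads them back and maintains the two coordinating files.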
If you need to go further, to learn how to master the best of AI, to become capable of building useful, clean, auditable, and enterprise-ready AI systems, you'll find "The Best of AI" in the description. And for those who do development and coding, there's specific training, level 3, "Prompt Engineering Elite." This includes 80 hours of coursework that you complete at your own pace, how to pass the Google certification in prompt engineering, 30 AI tools, updates included, concrete use cases for businesses, creating AI agents, creating teams of AI agents, and understanding how to use these systems in the business domain to create something reliable. All the information is in the description. We'll take the example we did in the previous video, where I showed you how to activate execution memory in the ChatGPT chat interface. Here, we're going to launch an orchestrator system that will manage agents. These agents will perform internet searches—the ChatGPT web search agents—and they will collect information. In the prompt, I'm going to remove the memory section from the instructions. We'll see how it works. We'll analyze it, and then we'll activate the memory. You'll find the diagrams, code, and logic in the "The Basics of AI" section. And of course, you'll have the entire course, solutions, and architecture described in the training materials. For your agent to operate stably, your system must create an operating log from startup. It needs to create memory and agent instructions, but by default, if you don't do this, ChatGPT won't do it for you. That's what you're going to discover. If I remove all memory management from my instructions and take this relatively simple prompt: we're going to produce structured content, we're going to search the official ChatGPT documentation using web search functions and follow a plan—this plan is here to build a course. Imagine you're in the exact same situation.
You have a company, you're looking for strategies, and you give the AI a task: internet research and a final report. That's exactly what we're doing. We're going to copy and paste the instructions into the interface. You'll find the prompt in the description to help you understand the structure. And now we're going to analyze how the model works, because what you need to understand is: can the model actually work? So, you've already seen that the instructions are a bit more complete than what you find on the internet. It's not just "go do some research and write me a report on ChatGPT 5.5"; it's given a basic architecture. But what matters is how the model handles the instructions, how it was trained. What you need to understand is that an AI doesn't do things randomly. I'll talk about that in a moment. But an AI has received human training. Humans trained the machine to tell it how to behave in this type of situation. And what you see is that the model is executing the log entry and starting the tasks. And if I click on it, we can see that ChatGPT is in the working phase of the agent and preparing the agent interface. And you can see that, in real time, it's launching the agentic systems. The problem is that when you create these systems, you realize one thing: first, you can't control it. Second, it launches agentic systems that you don't control. At no point do you have control over the system's instructions. At no point do you know how it's going to code the instructions. So here, it's coding the information it's going to send to the agents all by itself in the background. This is the shell command where it will launch the agentic systems. So it has launched the first function, but the problem is that you don't control anything. Every time you run the request, the model can change the version, write it differently, and you'll get a completely different result. But it doesn't stop there.
So as long as we continue to make people believe that it's enough to tell ChatGPT, "Go create AI agents and do my work," we will always end up with this problem. We're talking about an AI that will completely randomly, without any control, execute sequences of code probabilities and tokens. You get content without knowing how it was generated, what data it was based on, what criteria it used, or what probabilities it was based on, because all the instructions are random. And you get a final product thinking, "Well, the AI did its work." No, the AI has just delivered a set of content without you even knowing what was done. And then you risk asking yourself, "I coded agents, so did I save time?" Well, no: all it takes is one or two errors in your report and it's all wrong. Why? Because, as I explained earlier, throughout the conversation and the queries and searches the model performs, it will create concatenated memory blocks. They will influence the probabilities of each step. They will pollute its context, and at no point have you configured the memory systems. So you have absolutely no idea; you're completely in the dark. The more complex the workflow, the greater the risk of ending up in areas where we're not even at 20 or 30% accuracy. So when people tell me, "The AI is hallucinating," it's not that it's hallucinating; it's that we're not taking into account the AI's operational problems. If we don't take them into account, we won't be able to solve them. So, despite everything, the model will still code what I asked it to. It's still going to give me the lesson; it's doing it right now, but I don't even know how it did it. So I'm going into this completely blind. We need to distinguish between the two parts... If you want to create AI agents, you'll have to understand how it all works, how it's built, and where to intervene.
What the system shows, and this is positive, is that the model is capable of following instructions for almost four hours straight. That's what we call alignment capability. It means that when the model has worked, it confirms its operation. It can tell me what worked and whether it had a problem, because I configured it to do that. But you can see that the part of the prompt it generated is too vague, too limited. So we'll have to understand that we need to code, first, the agents' instructions, and second, the memory relationships. Ultimately, I get a report, I get links, but at no point do I know if or how the agents worked, what criteria they used, or even what information was sent back by which agent. Then I discovered something, and that's when I realized I absolutely had to intervene. When I launched the workflow, I saw that ChatGPT 5.5 creates agents and gives them names. These names allow the system to identify the different agents. But the problem is, I realized the agents were making mistakes, and ChatGPT was telling them, "Correct yourself." So, imagine, tomorrow you make a mistake and your boss tells you, "Correct yourself." That's not going to get us anywhere. The information it sends them is not structured enough to deliver what OpenAI and Sam Altman would have us believe when selling us GPT: "make me agents to do my work." We're nowhere near that level of system development. So we're going to see the same thing we've been seeing for the last three years with artificial intelligence: people saying in tutorials, "Just ask ChatGPT, it'll do all the work for you." What are you going to lose? One month, two months, six months of subscription, work, and time, for absolutely nothing. And those who have understood, especially those who are now the developers or specialists of AI agents, have understood that all the sequences need to be coded. And it's not that complicated. Why? Because, first of all, I've modeled them.
But I'm also going to warn you about what you shouldn't ask of AI. There comes a point where we need to be realistic about the models, and that's exactly what I want to talk about. We're now going to create our first agent within this framework. This agent will be a market research agent that will allow us to identify market trends and then capitalize on them with other agents. For that, we're going to come here and ask it to read the document. Think about it for a moment. Take a step back from the marketing of OpenAI, Anthropic, and influencers who tell you one thing: "You put three or four sentences on an interface." Think carefully about this instruction: "Transform what I ask you into marketing research, capable of conducting marketing research and formulating strategies for any brand." Can you imagine a tool tomorrow where, with five sentences, you have a machine that does your job? Let me put it this way: do you remember when you were 6 years old? Yes, yes, make that effort. And they told you, "If you're good, Santa Claus will bring you presents at Christmas." You're probably thinking, "Okay, I'm not six years old anymore," but then how are you going to explain to me that you still believe there's a machine capable of conducting marketing research and formulating strategies for any brand with just five sentences written like that? Think about it for five seconds. If all that were true, all those influencers would have quit working three years ago and hired AI to handle everything. Stop believing in simplistic solutions. Understand that we have a very powerful tool that nevertheless has limitations, and we need to know how to use these tools while being aware of those limitations. And that means knowing how to develop and code instructions to account for those limitations and provide the necessary instructions to make the model work. But never, ever, will you get a marketing report with just one sentence or five lines.
It's pointless. So, all the profiles I see on LinkedIn that put under all their posts "Claude, ChatGPT, get the special prompt to have the AI work for you": remember the 6-year-old waiting for presents on Christmas Day if they'd been good? It's the same thing. We'll continue looking at what not to do when you're working today with instructions for AI agents. If you look at the "Osint policy" section and start reading, you see that the marketing search system coded and offered by this influencer tells us it needs to search for information on the culinary exploration channel of Reels and Instagram carousels. Think about this for 5 seconds, those of you who do marketing. Instagram requires identification. You can't browse it with a web search; it's not possible. It's a closed system. TikTok is the same: it isn't accessible if you don't have a registered account. As you can see on screen, none of that data is accessible. Facebook is the same; no public data. So if the model needs to search for data, it can't do it on Instagram, TikTok, or Facebook. You can see from the prompt that, as I was saying, you shouldn't ask an agent to do things it can't do. The worst part is that the agent won't tell you "I can't," and you'll get something that's impossible. Here, I've added a .md file that I generated, which is actually a guide for creating effective marketing search strategies. Let's take the guide to effective marketing search. Think about it for another five seconds. You're no longer that six-year-old child being told magical stories where, by rubbing a lamp called ChatGPT or Claude, a genie appears and makes all your dreams come true. We need to move beyond that system. You are entrepreneurs, you are rational people, you are adults, and you no longer believe in promises, but in competence and action.
How do you explain to an AI how to run marketing research while taking into account the economic cycle (growth, stagnation, and contraction phases, the target audience), seasonality and sectoral variation (i.e., the seasonality of economic cycles), technological momentum, the adoption of new platforms, algorithmic changes, and regulatory revolutions? Seriously, do you even realize that the three points it's asked to do in section 2.1 are ludicrous? It's impossible for a single AI to do that. But the big problem is that if you code instructions that are meaningless, neither ChatGPT, nor Claude, nor ChatGPT 5.5, nor Claude 4.7 will ever tell you "it's not possible." So the first thing to be vigilant about is your critical thinking. Deep down, you know it's impossible that these models could do that with just three and a half sentences, even if an 800-line text pops up. You know it. Don't tell me you believed it; it's impossible. You can't possibly tell me, "But yes, of course. When I ask ChatGPT what the best strategies are for the economic cycle, sector seasonality, and technological momentum, of course I know ChatGPT can do it." Look me straight in the eye and tell me that yes, you were convinced AI could do that. How? Explain it to me. Then, "signals to watch, search trends, Google Trends": Google Trends is a closed system; the model doesn't have access to it. Semrush costs $200 a month; it's not accessible either. "Consumer sentiment, social media, daily reviews": which website do you want it to go to where this data is open? Ultimately, you understand that the first issue to resolve is how even the professionals who tell you they're competent, who tell you they have the perfect prompts, who tell you they've helped 30,000 people... OK, explain to me how you think AI is going to function with these instructions.
How is an AI going to know economic cycles, sector seasonality, technological trends, regulatory changes on institutional websites, legal and monthly newsletters, macroeconomic data from INSEE, the Bank of France, and quarterly Eurostat figures? I don't know if you're starting to put all this together. Do you already understand that we're talking about completely unrelated topics, and that by proposing instructions that aren't usable by a ChatGPT 5.5 or Claude 4.7 agent, we're going to end up with only one thing: a concatenated attention matrix that means nothing and ensures that all the model's responses are very well written but unusable, because all the instructions are impossible for the model to carry out? So the first phase of awareness is that you need to learn and understand how AI works. Otherwise, anything you ask an AI agent, whether from OpenAI or Claude, or Hermes, or Claude Code, will be useless. That's the first step. If you start to understand that, well, it makes you realize that you've wasted three years on social media content telling you it's super simple: you just ask with a magic phrase and you get a result. There you have it, that's the marketing promise of influencers, supported by the promise of OpenAI. OpenAI uses the same principle. But why? Because otherwise, tomorrow, when you realize that in the end everything has to be coded, that in the end it requires skills, that in the end it's not so simple, well, there will be much less enthusiasm. What they wanted to create, first and foremost, is adoption. It's a massive undertaking to give the impression that it's easy. ChatGPT and Claude were taught a number of fairly basic processes. In short, it's always the same thing: "Go search the internet for information, take that information, and spit it out."
But is that enough to create an AI agent that will take your schedules, structure them, create a PowerPoint presentation aligned with your company's needs, and then update your appointment system? The answer is no. Why? First, because we don't have context window management. The model isn't coded, so it doesn't know how you work. Therefore, it will aggregate probabilities that are probably not the right ones, and thus give generic answers that aren't tailored to your needs. To maintain consistency within the systems, requests must be separated into different AI agents, each with its own memory. This memory must be accessible to an orchestrator to understand what has been done, how it was done, and whether it can be used or whether a new cycle needs to be initiated. This is the first phase of AI agent awareness. Okay, now we can move on to how we'll code the instructions. The solution will come from the architecture, where I'll show you the template used by OpenAI to code all the AI agent instructions. For those who want to find the official information, it's available in the "Code Prompting Guide," the "Assistant Agent SDK" section, and the entire SDK agent coding methodology in the "Memory" section. These three sections will allow you to correctly code the AI agent instructions. For the template I'm going to include in the training description below, you'll use this template for each AI agent to build the system. OpenAI uses this structure: an "agent ID" which has a "system tag." You saw that Codex gives a name and then a tag to the model, which is called the description. And today, we can add a new variable: "depth." Depth means we can now create a level 1 sub-agent, but also a level 2, a sub-agent that can itself launch a secondary agent. And that's absolutely new. It's structured around the concepts of depth and variables. "Recursion" is a Boolean structure, meaning it has a true or false value. Then, we have an area that defines the tools.
So, for example, we can include web tools: what is allowed? What is prohibited? Which domains are accessible? And then, do we give it read and write permissions, including the working directories? The rest of the structure is fairly standard. It follows what we know about prompts: the model's purpose, the constraints, the output format, and generally, the workflow steps: step 1, step 2, step 3. So, in my opinion, this template can be improved. If I were to improve it, here's what I would change: I would create a first section in YAML format. This is the one used by Anthropic. I would add the option to enable recursion. I'll show you that it's possible to choose which model you launch in the architecture. When you launch the orchestrator system, you can choose a different model for the agents' tasks. So, if I consider a GPT 5.3 Codex to be sufficient here, I can define a GPT 5.3 or even a GPT 5.4. Why? Well, because it's less expensive, and because the level of reasoning and complexity of the task doesn't require systematically using the most expensive model. We optimize our budget. So, how do we integrate the reasoning variable? We need to look at the OpenAI developer documentation. You integrate the model variable where you specify the model name and, most importantly, the reasoning variable. This means that when I start coding, I'll use the same prompt I used before, but I'll tell it: "add as an instruction to the orchestrator" (I'm defining GPT 5.5, reasoning medium), "and for the agent system, I'm giving it 5.3 with medium effort." But you must give it the exact reasoning/effort format and write the variables using the template. If we don't do this, it will actually write them as free text. And unfortunately, what's happened repeatedly is that, because these are AIs and their responses are probabilistic, it will write them differently each time, and under certain conditions this can cause malfunctions in my AI agent system.
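To make the point about pinning the exact format concrete, here is a small sketch: the field names (model, reasoning, effort) follow the discussion above, while the rendering helper is purely illustrative — the idea is that a deterministic renderer emits the variables identically on every run, instead of the orchestrator rephrasing them as free text:

```python
# Sketch: pin model/reasoning variables in a structured config and render
# them deterministically. Model names and field names are illustrative.
ORCHESTRATOR = {"model": "gpt-5.5", "reasoning": {"effort": "medium"}}
SUB_AGENT    = {"model": "gpt-5.3", "reasoning": {"effort": "medium"}}

def render_header(cfg: dict) -> str:
    """Deterministic header block: identical output for identical config."""
    return (f"model: {cfg['model']}\n"
            f"reasoning:\n"
            f"  effort: {cfg['reasoning']['effort']}\n")

print(render_header(SUB_AGENT))
```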
Whereas if we code it in exactly the same way the model is used to receiving it, then you ensure compatibility of operation. So the system change is that I have the description, I have the execution functions, I have the agent policies, the search agents have their models, the orchestrator model has its operation. I have the tool actions. If you have "skill" functions, we can activate them as well. This will now allow us to code instructions for each agent and not leave it to chance for the AI to choose who does what, how, and with which model. We have one point left, which is memory. Depending on the architecture, it's important to keep track of how the model operates. First, because it allows you to verify how it worked, what it did, where it crashed, what the sources were, and whether it followed the intended behavior. And if you need to check tomorrow because you have a doubt, how do you do it? Well, in that case, you need to consult the documentation for the memory section. But let me explain how it's designed. Memory sessions load the memory-summary.md file at startup. Then, memory.md is managed by the orchestrator. All you need to tell the AI is that the orchestrator manages these two files. That's all. What you then need to tell the sub-agents is that they use the intermediate raw memory function. That's where they exchange and write their memory section. If you want to go further, meaning you want the model to record each response and each exchange cycle, then we can implement rollout functions. This means that every time a request is generated, it will record the request's identifier and save it in this file. It's an even higher level of control over the model's operation. So they've already thought of this entire architecture. But what you realize is...
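The rollout idea can be sketched like this; the file name and record fields are illustrative, not an OpenAI format:

```python
# Sketch of rollout logging: record each request's identifier and outcome so
# every exchange cycle is auditable. File name and fields are illustrative.
import json
import time
import uuid
from pathlib import Path

def log_rollout(workdir: Path, agent_id: str,
                request: str, response: str) -> str:
    """Append one auditable record per exchange; returns the request ID."""
    request_id = str(uuid.uuid4())
    record = {
        "request_id": request_id,
        "agent_id": agent_id,
        "timestamp": time.time(),
        "request": request,
        "response": response,
    }
    with (workdir / "rollouts.jsonl").open("a") as f:
        f.write(json.dumps(record) + "\n")
    return request_id
```

With one append-only line per exchange, you can later replay exactly which agent received which request and what it returned, which is the traceability the rest of this section argues for.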
why, in the documentation and public presentations, does OpenAI keep telling us: "I'll show you the documentation: create agents to help me with my meeting, go to Google Calendar, and make a PowerPoint presentation"? And when you actually start reading the documentation, you realize that developing AI agents is a whole different ball game. This shows you one thing: between the professional field and the general public, there's a huge difference in skills. And that's the main reason why adopting artificial intelligence is so complicated for businesses: to democratize adoption, people were given simple, standardized tasks. But as soon as you want to do real work, you need to track how your model is functioning, so you have to configure all these parameters. If you don't, you don't know how the model worked. So don't worry: in this course, I'll give you the entire logic of the files, because I've extracted and modeled it. This means the orchestrator will always operate based on this structure, and it will always use this file to keep track of its operation. That's what you need to keep. And the architecture of the AI agents, or sub-agents, is defined by the ability to write to their memory, their directory, and their operating log in sequence, using the specified format. So I've coded all the instructions for you: you take the blocks and apply them to each agent, you distribute them. Now, we can go even further. We can automate the creation of agentic systems. I've done it, and here's how I suggest you think about automating it. I should point out that this is developed further in Level 3, "Prompt Engineering Excellence," because I consider it to be getting quite technical, so I'm reserving it for professionals: people in companies, in auditing, developers. The first thing is, I'm going to create an agent. This agent will route to a skill. The skill is composed of three sections.
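Here is a sketch of what that per-sub-agent block could look like, with the write targets listed in sequence. The file and directory names are placeholders of my own, not part of any official format:

```yaml
# Illustrative per-sub-agent instruction block -- names are hypothetical.
sub_agent:
  id: agent-01
  role: web-research                     # example role
  writes_in_sequence:                    # written in order, in the specified format
    - memory: raw-memory.md              # hypothetical intermediate memory file
    - directory: ./agent-01/
    - operating_log: agent-01-log.md
  log_format: "step | action | source | result"   # illustrative log format
```

Distributing one such block to each agent is what keeps every instance behaving, and logging, the same way.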
The first section configures how the agent works—that is, the role, the model, the tools, the permissions—in a template. This is the template I showed you. Then come the memory management rules, in the memory.md section. And if you recall how a skill system is structured, when the agent calls the skill, in the skill.md file, you have the structure that sends the agent building the systems to the directory to retrieve the agent's template. So it follows the template, and when it manages the memory portion, it follows the memory section. The goal of a skill is to route to different sections. It's not about writing a 1,500-page instruction; it's about enabling the model to know what to look for to address the problem at the moment you're coding. To help you, I'll provide the routing logic I've chosen, but you can certainly develop your own, because there isn't just one way to build the system; there are several. But what you absolutely must understand is that if you don't manage memory, you don't know what the model will use, and you can't track or audit it. So you can't know how it worked or how it reasoned; it remains a black box. And that's something you can't afford in a business environment: you need to know how a model functioned and whether it followed the instructions. Another point: if you don't properly define the agents' system prompts, you don't know how an agent will behave from one instance to the next. So you don't have standardization; you don't have something stable. That's why the operational variables—the ID, the name, the role, the models, the tools, the permissions, the access files—are variables that absolutely must be coded. The key thing to keep in mind is that to automate this, I recommend using an agent that calls upon a skill, and within that skill, you route to the files that will allow you to speed up the process of creating AI agents.
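As a sketch, a skill.md that routes to those sections might look like this. The file and directory names follow the structure described in the course but are my own illustration, not an official Codex format:

```markdown
---
name: build-agent-system
description: Route to the agent template and the memory rules when building an AI agent system.
---

# Routing

1. Agent configuration (role, model, tools, permissions):
   read `templates/agent-template.yaml`
2. Memory management rules:
   read `memory/memory.md`
3. Example files to follow:
   read the `examples/` directory
```

Notice that the skill itself stays short: it tells the model where to look, rather than carrying the full instructions inline.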
So, when you're in Codex, there's no need to use the "skill" functions (you know, the "skill generator" and "skill creator" functions). No need. Once you've used the templates I gave you, the only thing you have to do is optimize the instructions with ChatGPT 5.5, or with Claude if you're using Claude. You just need to create a "skills" directory manually: you right-click, select "New folder", and name it "skills" (with an "s"). Inside it, you create a directory with the name of the skill. And that's where you'll put your skill.md file. Note, however, that your file must be in Markdown: it's skill.md. So you change the extension and put the routing inside. Once that's done, you just copy and paste the instructions in there. It's no more complicated than that. So why do I think it's a bad idea to use the automated "skill" system? Because Skill Creator, when you use it, tends to reformulate everything you've done. But if what you've done works and is fine, you shouldn't let the AI (remember that AI is a probability over tokens) reformulate it differently. What works, you keep: you take it, you copy it, you do it manually. You go from being a simple user to a system architect. So here, you create your skill system the same way we did: you put your directory here, and in your directory, you put the example files you're calling in your directory system. All of this will save you many hours of coding agents based on the templates I've provided below. And most importantly, as you've seen, if you don't do this, look what happens: you have an orchestrator that cycles through sub-agents, telling them, "What you did isn't right, start over." But since ChatGPT 5.5, by default, doesn't manage memory and doesn't properly code system instructions, it's normal that you'll lose a lot of tokens, and therefore money, and therefore time, and therefore your data plan, because by default, ChatGPT 5.5 isn't the wonderful tool we were promised.
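The manual setup I just described can also be done from a terminal. Here is a minimal sketch; the skill name "agent-builder" is just a placeholder for your own skill name:

```shell
# Create the "skills" directory and one skill folder inside it.
# "agent-builder" is a placeholder; use your own skill name.
mkdir -p skills/agent-builder

# The skill file must be Markdown and must be named skill.md.
# Paste your routing instructions into it (illustrative content below).
cat > skills/agent-builder/skill.md <<'EOF'
---
name: agent-builder
---
# Routing
Read the agent template, then the memory rules, then the examples.
EOF
```

After this, you copy your proven instructions into skill.md yourself instead of letting Skill Creator reformulate them.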
It's a tool that, like all Transformer systems, requires coding the instructions and coding the architecture. And once that's done, you no longer have the problem of randomly generated instructions: you create a driven, functional system. So start practicing properly with Level 2, and in Level 3, we'll go a step further and get even more technical. But even now, I think it's pretty good. So grab all the resources in the description, start working, start developing your AI agents. That's when you'll really start using AI to sell in business. Alright, see you soon.