
Tech • AI • Crypto
Developers and entrepreneurs are pushing decentralized, privacy-first AI and social technologies to counter data exploitation and restore user control over digital lives.
Early personal computing promised autonomy, with users owning hardware and controlling their data. Over time, especially since the 2000s, that model eroded as large platforms centralized services. Modern systems increasingly monetize attention and behavior, turning users into data sources rather than beneficiaries.
Major technology firms rely on harvesting and aggregating user data to fuel advertising and analytics markets. Even when data is not directly sold to governments, intermediaries such as brokers compile detailed profiles across platforms. This fragmented ownership leaves individuals without a unified or controlled digital identity.
Users’ online lives are scattered across ecosystems controlled by companies like Google, Apple, and Meta. This lack of ownership creates dependence and limits autonomy. The growing integration of AI threatens to deepen this imbalance by extending monetization from attention to cognition.
Advanced AI systems could intensify manipulation by shaping perception and decision-making. At the same time, they offer tools to reverse centralization by enabling personal agents that manage data locally or securely. The direction depends on how systems are designed and who controls them.
A new generation of AI is moving beyond browser-based chatbots toward autonomous agents that operate across a user’s entire digital environment. These agents can retrieve messages, filter content, and manage workflows without relying on centralized apps, significantly improving convenience and efficiency.
Improvements in hardware, internet speed, and software usability now allow individuals to self-host services that previously required large-scale infrastructure. Lightweight servers and home-based systems can support social networks, communication, and data storage without centralized intermediaries.
New ecosystems emphasize “value-for-value” exchanges, where users directly compensate creators rather than paying with data. This peer-to-peer monetization model challenges advertising-driven platforms and aligns incentives toward user benefit rather than engagement maximization.
Companies are developing tools such as local AI appliances and encrypted cloud systems. These solutions use open-weight models, device-level encryption, and secure execution environments to ensure that user data remains inaccessible to providers while still delivering advanced functionality.
The concept of self-custody, widely associated with Bitcoin, is expanding into AI. Developers anticipate a future where protecting AI data requires safeguards similar to cryptographic key storage, including defenses against coercion and unauthorized access.
The proliferation of AI-generated content is making it harder to distinguish humans from bots on traditional platforms. This erosion of trust is driving interest in systems that can verify identity and authorship more reliably.
Decentralized networks such as Nostr use cryptographic signatures to confirm authorship of content. Combined with customizable “Web of Trust” metrics, users can filter information based on relationships, reputation, and economic signals rather than opaque platform algorithms.
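To make the authorship claim concrete, here is a minimal sketch of how a Nostr event gets a verifiable identity. Per NIP-01, the event ID is the SHA-256 hash of a canonical JSON serialization of the event's fields; the author then produces a Schnorr signature (BIP-340, over secp256k1) on that ID, which any client can verify. The sketch below computes only the ID step; the pubkey and event values are hypothetical placeholders.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event ID per NIP-01: the SHA-256 of the canonical
    JSON serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace in the canonical form
        ensure_ascii=False,     # raw UTF-8, not \u escapes
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical event: any change to the content changes the ID,
# which invalidates the author's Schnorr signature over that ID.
event_id = nostr_event_id("a" * 64, 1700000000, 1, [], "hello nostr")
tampered = nostr_event_id("a" * 64, 1700000000, 1, [], "hello nostr!")
assert event_id != tampered
```

Because the signature commits to this hash, tampering with any field forces a new ID and breaks verification, which is what lets clients filter on provable authorship rather than platform trust.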
Instead of top-down moderation, decentralized systems allow individuals to define their own filtering criteria. Metrics such as interactions, payments, or trust connections help identify credible participants without imposing a universal standard.
The convergence of decentralized infrastructure, cryptographic identity, and privacy-focused AI is reshaping the future of computing, offering a path away from data exploitation toward user sovereignty.
All right, thank you all for coming. My name is Justin. I'm the technical lead for the Human Rights Foundation's AI program. Let's go down the row with some introductions to get started. >> What's going on? Thank you for being with us. I'm Derek Ross. I do developer relations at Soapbox and a whole bunch of Nostr stuff. >> My name is Jesse Pner. I'm the CEO of Vora. We make tools to help you reclaim your digital life, which includes self-custody not just for your Bitcoin but for your AI, your emails, your text messages, all your data. >> And I'm Mark Suman, the CEO of Maple. We make personal intelligence for users; we're bringing privacy back to AI. ChatGPT, all those tools: we make similar versions, but we protect your privacy. >> So the topic we want to discuss today is freedom and sovereignty in computing. When personal computers first came out in the 80s and 90s, there was a very idealistic vision, a bicycle for the mind. It was empowering: you bought the hardware, you owned it, you controlled your data, and you were augmented with all kinds of superpowers. Then the internet came along and it kept getting better, the information superhighway. If you listen to some of the cypherpunks' early talks, they were incredibly idealistic about where all of this was going.
And that held roughly true until the early 2000s, when things started to change course. Computers have gone from something that liberated us to something the average person increasingly serves: you're farmed for your attention, you're surveilled at all times. The 2010s were a period when computers became less and less free, and the one thing that resisted that trend was Bitcoin. We don't need to dwell on that, since you're all here at a Bitcoin conference and understand the value there. But I think we still have a long way to go in making our digital lives more sovereign and more free, and in making computers something that serve us and improve our lives rather than something that harms us. So I'm shooting for an open discussion about how we get there. Jesse, maybe I'll start with you, since that's essentially your company's mission. Walk me through it: where do you see the problems in computing today? >> Well, I don't think anybody is really happy with the state of technology right now. Just about everybody I talk to complains about their devices and their social media feed. It doesn't feel good. You log on to social media and you're entering a psychological-operations battleground, where people are trying to make you angry, make you afraid, make you click on things. We feel that. And we don't own our digital lives, and we feel that too. Each of the big tech companies owns a shard of our digital life. Google, Apple, Facebook: they all own a piece.
So we're essentially digitally homeless. We don't have a home in cyberspace, this space and this identity that is becoming a bigger and bigger part of our lives. And that's just the beginning, because what happens when AI gets added to this toxic mix? Then it's not just our attention being monetized, it's our cognition. We will lose free society and free thinking; we'll essentially become machines if we don't choose a different way. What's exciting is that AI also has the opposite potential: it can radically accelerate decentralization and privacy. So that's what we have to focus on as the Bitcoin community: take our perspective, our values, our philosophy as inheritors of the cypherpunk tradition, and carry that into what's happening in AI at this critical moment. >> So Mark, this seems like something you'd expect other companies to offer; sovereign, private AI would be a compelling value proposition. Why don't the big tech companies offer it? Why are they selling the exact opposite? Walk us through how it actually works. >> Yeah, it's the old trope of follow the money. Follow the incentives: these companies have built their businesses on monetizing user data. They've built algorithms that capture your attention, they know how to keep you there, and then they take all that information and sell it to advertisers, to governments, to the highest bidder. A lot of the time that data isn't sold directly to a government; it's sold to a data broker who aggregates it with data from other sources. So you might think, I'm only using this one tool, I've only told it this one thing, and I've kept myself separate over here in this other thing. But they're joining those together later on and building a whole profile on you.
It really comes down to the incentives. And you brought up the 90s. The 90s were a great time for many reasons, and one is that this whole business model of monetizing data didn't exist. They simply sold you a service, and that was it. >> Right, they sold you a physical CD. >> No, those came in the mail for free. >> Yeah, or on the front of a magazine. So we need to build tools with better business models, models that favor the user and not just the company. >> So Derek, there's this idea that one place we went wrong, one difference between the 2000s and the 90s, is that we started monetizing data; maybe there just wasn't a way to monetize the internet in the 90s. How does the Nostr community look at this differently? We've talked about AI, and Nostr has this idea of value for value and zaps, where we may be able to have an economy that's more peer-to-peer, with less of the indirect monetization where you leak data and it gets sold up the river and monetized by someone you're not even aware of. >> Let's rewind for a second. Back in the 90s the internet, or rather the World Wide Web, was new, and we didn't have the infrastructure to run all these pieces ourselves at home. So to scale and grow, we relied on large tech corporations to facilitate that growth. That had a downside, which is why we're all here talking about it now. But with Nostr, the tech stack is mature. I can run the part of the social web that I want to run at home. I can run applications on hardware I own. I can run a mini server that facilitates all of my social transactions. The stack is now genuinely decentralized.
The internet is now fast enough and generally available in most places, so we don't need to rely on these large tech companies and their infrastructure; we can mostly do it ourselves. And with zaps, we rely on our community for funding and monetization. If a creator puts out a video, a photo, a blog article, a shitpost, a GIF, music, any type of content, we can monetize it ourselves. We don't need advertising as the monetization method, and we don't need large tech companies monetizing our data in exchange for free services. Value for value is still new. It's been around for a while, but most people don't understand it because they're so used to getting content for free, so used to signing up for a service and handing all of their data, content, and attention to a large company, which then monetizes it to feed them content from advertisers. It's a really disruptive model and it's going to take time; revolution doesn't happen overnight. It's going to take people spending their hard-earned sats to help facilitate this, and we're just getting started. >> One thing you mentioned is how we stopped self-hosting our own services and started depending on the cloud, on big companies running computers for us. Yet we still spend a lot of money on hardware: fancy phones, laptops, and so on. Somewhere, behaviorally or technologically, we stopped self-hosting. And I think self-hosting is really critical; it needs to happen for us to have free services we can all use. Anybody want to respond? I'm curious about you in particular, Jesse.
>> I'd just say that the tech stack, both hardware and software, is now ripe for this. Some of us have been running these services at home for the past two decades anyway, but the average person couldn't; it was hard, and only the more technical among us did it. Now almost anybody can point and click and install a self-hosted service. You couldn't do that 20 years ago. >> Absolutely. And AI is the final missing piece, the one we didn't even realize we needed, that pulls everything Derek just laid out together. When you think about these issues of privacy and decentralization, why haven't they gone mainstream? Because right now the appeal is mainly ideological: do this because it's the right thing, the good thing, better for the world. And sure, you get the hardcore people, us, who will do it even when it's a pain. But we're never going to win the mainstream that way; we'll always be a niche in the corner, and that's not good enough. This has to reach all of humanity, and that won't come from lecturing people or telling them what they're supposed to do. It will come from creating amazing experiences that aren't possible otherwise. This is why AI is so important. Everybody is going to have a Claw, a personal agent that runs your life, because it's so useful that nobody will be able to resist it. But what's the first thing that happens when you start using a Claw system? You want your agent to deal with your digital life. >> For those who don't know, say a little more about what that means.
>> The first phase of AI is something like ChatGPT, where the AI lives in the browser. The next phase is AI that lives on your computer. It's not stuck in a browser; it can do anything you can do on your computer. That's super useful, because now I don't have to deal with my apps. I just ask my agent for what I need, when I want it. If I'm curious about something Justin sent me, I don't want to dig through texts, Signal, Gmail, and Facebook wondering where it was. I just want to say: what did Justin say to me yesterday? That's what the agent can do. Or I don't want to go into X, Facebook, and all that; I want my agent to show me my feed on my terms, the way I want to see it. And as soon as you set these agents up on your computer and start hooking them to all your accounts, you very quickly notice which APIs and accounts work with your agent and which are hostile to it. X, Facebook, Instagram: they don't want agents pulling down data or interacting with their platforms, and for good reason. It's an existential threat to their business model. So they are hostile. Up until now, the UX of those feeds was arguably better than the current state of Nostr; Nostr has come a long way, but those platforms have very polished UXs. Now it's the opposite: if I'm running a Claw, the Facebook UX sucks, because my agent can't connect to it. It's worthless to me.
>> And the API that my agent can connect to... >> ...that you had to pay for. >> ...that I have to pay for, is now ten, a hundred times worse. >> And Nostr's is free. >> Free, open, cryptographically authenticated: it's the dream network for an agent. We don't need data brokers or centralized companies in the middle; we need lightweight discovery networks. The agents will build the roads, agent to agent, peer to peer. >> The word that comes to mind as you're talking is convenience. >> Which is not a word usually used to describe Bitcoin user experiences, right? This is one of the first times we can actually offer it. >> Yeah. With Bitcoin, Nostr, and AI, we can make these things convenient, and make privacy convenient, and that's the only way we capture the mainstream. And I agree: agents are the next iteration of all these tools. Since we're harping on the 90s, back then we were all worried about viruses getting onto our computers, touching all our apps, reading all our emails and text messages. Now it's the opposite: we actually want to invite these tools onto our systems, but we need to do it the right way. We need tools that are secure and private enough to be trusted with that access. If you're running something like OpenClaw or Hermes but having Anthropic's Claude or ChatGPT's models talk to it, you're basically opening up your computer to Sam Altman or to Dario. So we need a combination of local AI, if you have a machine powerful enough, or some kind of convenient cloud-hosted AI that brings the privacy of local into the cloud. Find something more private, and build tools that are genuinely convenient.
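The cross-app query described above ("what did Justin say to me yesterday?") can be sketched as an agent searching one local index built from several apps' message exports. Everything here is hypothetical: the app names, the `Message` shape, and the data are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Message:
    source: str   # which app the message came from
    sender: str
    sent: date
    text: str

def messages_from(inbox: list[Message], sender: str, on: date) -> list[str]:
    """Answer 'what did <sender> say to me on <date>?' across every source."""
    return [m.text for m in inbox if m.sender == sender and m.sent == on]

# Hypothetical unified inbox pulled from several apps' local exports.
inbox = [
    Message("signal", "justin", date(2025, 5, 27), "panel is at 3pm"),
    Message("gmail",  "justin", date(2025, 5, 27), "slides attached"),
    Message("sms",    "derek",  date(2025, 5, 27), "running late"),
]
print(messages_from(inbox, "justin", date(2025, 5, 27)))
# → ['panel is at 3pm', 'slides attached']
```

The point of the panel's "hostile API" observation is exactly the first step this sketch glosses over: whether each platform lets you export or fetch your messages into such an index at all.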
>> So Mark and Jesse, could you both give a brief explanation of your approaches to AI, so everyone knows them? They're both great, and they're complementary, and I want the audience to understand them. >> Sure. We're starting with a product we call the Aegis, a home AI appliance. It runs a local open-weight AI, basically an open-source AI you can verify for yourself; you choose which model you want to run, and it runs on your own hardware. It's an appliance, not a computer, so it's super easy to set up: you just pair a mobile app or desktop app, and now you have an AI that's completely private, loyal to you and no one else, and it manages your digital life. It can work with the cloud AIs for tasks that aren't privacy-sensitive; it orchestrates, figuring out when to bring in the cloud and when to stay local. From there we'll follow up with additional products, because we think Bitcoin self-custody and AI self-custody will eventually converge into the same product. You'll need cold storage for your AI. We'll need to think about wrench attacks, government seizure, all the things we think about with Bitcoin self-custody. If somebody owns your AI data, they own you, so at some point that data may be even more valuable to you than your Bitcoin keys. >> And we are a cloud version of a lot of what Jesse just described. Last year we set out to build ChatGPT, but end-to-end encrypted. We run open models; your device holds a private key, everything is encrypted locally before it's sent to our cloud, processed in trusted execution environments, and sent back to your device encrypted. So we actually can't see anything our users are talking about.
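A toy sketch of the client-side half of that flow: encrypt on the device before upload, so the server only ever stores ciphertext it cannot read. The SHA-256 counter-mode keystream below is a deliberately simple stand-in for a real authenticated cipher such as AES-GCM or ChaCha20-Poly1305; nothing about Maple's or Vora's actual implementation is implied.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over (key || nonce || counter) blocks.
    Illustrative only; real clients use AES-GCM or ChaCha20-Poly1305."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)  # fresh random nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# The device keeps `key`; the server only ever sees (nonce, ciphertext).
key = os.urandom(32)
nonce, blob = encrypt(key, b"what did Justin say to me yesterday?")
assert decrypt(key, nonce, blob) == b"what did Justin say to me yesterday?"
```

The trusted-execution-environment piece Mark mentions addresses the remaining gap: the model must eventually see plaintext to respond, so the decryption happens inside hardware-attested enclaves rather than on infrastructure the provider can inspect.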
Now, what we're announcing this week is our own agent, coming soon. You can go to www.tryle.ai and join the waitlist. We're building an end-to-end encrypted private agent, and it's going to keep growing, because we really want to build convenient tools that the most people can take advantage of. >> One other aspect of this: over the last year, traditional public social media has become harder and harder to use. More and more, I don't know whether I'm talking to a human or not, and it feels less and less authentic. Derek, Jesse mentioned how Nostr's cryptographic verifiability is useful here; could you walk us through that? In the future you may not know whether an X account is a bot, but with Nostr's cryptographic verifiability and a web of trust, how do you see that working? >> When we saw the influx of OpenClaw bots two or three months ago, we realized bots had gotten a lot better and that we needed tools, and I think that accelerated web of trust. Web of trust is essentially a set of metrics used to build filters. If I value a certain kind of signal, maybe zaps, which are value-for-value transactions, maybe comments, or maybe I'm weird and I like likes, then I can tailor my content, my feed, my interactions, what I want to see, based on those web-of-trust metrics.
At the most basic level, web of trust means that if I follow somebody, and that person follows somebody else, there's a chance the latter is a good account to follow. But web of trust needs to be based on additional metrics, because follows are the least important item in the stack. So we start adding in value for value: we might track who is zapping and how much. Are they one-sat zaps, or are they sending real monetary value to people? You can then use this to configure your feed. You might say: I only want to see notes with proof of work, or I only want content from npubs that are known good zappers. And this isn't a social credit score, because every single person in this room can configure their own metrics, their own weights and measures. There's no central entity pushing this down from the top; every app, every person can decide what they want to see. You can't do that on Facebook, X, TikTok, or Instagram. There it's controlled from the top: they determine who is or isn't a bot, and whatever those metrics are, they're going to use... >> And that's just going to be KYC. >> Heavy, heavy KYC. So with web of trust, you and your community do the moderation. It boils down to giving users the most tools possible to determine who is human, who is not, and who they want to see in their feed. You can't do that anywhere else. >> It's really critical for agents, too. We can have really good agents on Nostr, and there's even an agent web of trust; some agent built it a couple of months ago. >> Yeah.
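The user-configurable scoring Derek describes can be sketched as a weighted sum over per-account signals, where each client picks its own weights, so there is no global score. All accounts, signals, weights, and the threshold below are invented for illustration.

```python
def trust_score(profile: dict, weights: dict) -> float:
    """Combine per-account signals with user-chosen weights.
    No global score exists: every client can weigh signals differently."""
    return (
        weights["follow"] * (1.0 if profile["followed_by_follow"] else 0.0)
        + weights["zap_sats"] * min(profile["sats_zapped"] / 1000, 1.0)
        + weights["pow"] * (1.0 if profile["uses_pow_notes"] else 0.0)
    )

# One user's personal weighting: real zaps matter most, follows least.
weights = {"follow": 0.2, "zap_sats": 0.6, "pow": 0.2}
accounts = {
    "alice": {"followed_by_follow": True, "sats_zapped": 2100, "uses_pow_notes": False},
    "spam1": {"followed_by_follow": False, "sats_zapped": 1, "uses_pow_notes": False},
}
feed = [name for name, p in accounts.items() if trust_score(p, weights) >= 0.5]
print(feed)
# → ['alice']
```

A different user could set `weights = {"follow": 1.0, "zap_sats": 0.0, "pow": 0.0}` and see a different feed from the same data, which is the point: filtering is a client-side choice, not a platform decree.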
But let's say that Elon Musk today came on to Twitter and said, "Hey, this is my bot. This is my open claw. It's called Elon bot." There would immediately be 10,000 copycat Elon bots on Twitter like within a matter of minutes and you would never know which one's the right one. They would all have blue check marks, yada yada yada. Whereas with Noster, it's got a cryptographical proof that this is the one that I said is my bot and nobody can spoof that. And I think it's really important moving forward to know authenticity of accounts. >> Yeah. Because ultimately at the end of the day, since tweets aren't cryptographically signed, you don't know who who sent them. It's just a line in a database at the grant, you know, at the end of the day. So someone could add a new line to the database and boom, Mark said something when he actually really didn't. But it looks like he did because it wasn't signed with Mark's uh private key because Twitter doesn't allow you to do that. We're on Noster. you can sign and everything since it's cryptographically signed, you have proof that yes, Mark actually did post this. So yeah, you can't do that again anywhere else. >> Like a like a president who resigned from being president of the United States on Twitter and we have no idea if he really posted it. >> Absolutely. Yeah, you you you don't know if he posted that. That's absolutely wild. They were like, "Oh, well, I guess he posted I guess that's happening." But if if he would have signed the transaction or had somebody sign the transaction for him, I guess on on Nost, we would say, "Oh, well, I guess this is verified that somebody that had access to that private key posted that information." >> Okay. With Oh, go ahead. >> I was just going to say briefly, you know, uh, a cryptographic signature might be the only thing that an AI is never going to be able to figure out how to fake. >> Yeah. 
No amount of AI slop is going to get signature verification to pass without the private key. I think that's going to be an increasingly important foothold in a world where AI-generated content vastly outweighs human-generated content, probably ten to one, a hundred to one, or beyond. >> Also, the Nostr app Divine just launched today, and they're using something like a human proof mode, I forget exactly what it's called, to mark content that was posted by humans, because people don't want to see the AI slop. We're all tired of the AI slop. So on Divine there's a human mode you can toggle on to see whether content was posted by a human. >> Okay, only a few seconds remain, so let's go down the line: where can people find you and keep track of your progress? Start with you, Mark. >> www.tryle.ai, check it out. You can find me on Twitter and Nostr: Mark Zuman on Twitter, Mark at MarkX on Nostr. >> I'm Jesse Pner on X, and I've got a Nostr account linked there. You can find Vora at vora.io, where we've got a mailing list. Please sign up; you'll hear more from us when we're shipping, and within a few months we'll have more details about our product. >> I deleted all my socials three years ago. Fuck legacy social media. You can find me on Nostr: Derek Ross at grownostr.org. Check us out on Soapbox at soapbox.pub and our brand-new Nostr app, ditto.pub. >> Awesome. I'm on Nostr as well. Thank you all for joining me, and thank you all for being here. Every year, this community comes together to celebrate, to debate, to build what comes next. And every year, the stage gets bigger. Sound money, center stage.
So, where do you go to celebrate the next chapter in Bitcoin history? You come home. Nashville, July 2027.