*Recorded Feb 2, 2026
Connect with Tom
Tom Ryan on Substack
Linkedin: https://www.linkedin.com/in/tommyryan/
Company: https://www.linkedin.com/company/asymmetricresponse/services/
Episode 6 Summary
Chris J Snook welcomed Tom Ryan of Asymmetric Response to “The ATOMIQ LEVEL” to discuss the impact of the Open Claw and AI frenzy on personal Operational Security (OPSEC) for high-net-worth individuals. Tom Ryan, whose background includes military service and application security, defined OPSEC as controlling shared information, noting the challenges posed by widespread data breaches, deep fakes, and the use of AI in pattern-of-life analysis on platforms like Open AI, which tracks users via watermarks and hash values. Chris J Snook and Tom Ryan stressed the critical need for families to be educated on digital security, the risks of connecting AI to personal devices, and the emerging attack vectors posed by autonomous AI agents, recommending initial steps like using contained environments like Virtual Machines and reviewing terms of service with AI tools for security risks.
Details
The ATOMIQ LEVEL Podcast and Guest Introduction Chris J Snook welcomed Tom Ryan of Asymmetric Response to the show, “The ATOMIQ LEVEL,” which focuses on wealth and happiness. The topic of the discussion is the recent frenzy surrounding Open Claw and AI, specifically addressing the impact on personal operational security (OPSEC) for high-net-worth individuals and their advisors (00:00:00). Snook invited Ryan to share their background, including their journey from joining the military, which led to their OPSEC expertise, becoming a developer during the early .com stages, and eventually entering cybersecurity, partly inspired by a Harry Potter website hack at Scholastic (00:01:06).
Cybersecurity Background and Application Security Tom Ryan’s journey into cybersecurity began after their military service, transitioning into development during the early .com era, then pivoting to security following the hacking of the first Harry Potter website they were involved in launching in 1999 (00:01:59). They explained that their deep involvement in Application Security (AppSec) means securing any code, including low-code/no-code platforms and the orchestration layers of AI. Ryan noted that finding and fixing compromises in these layers is challenging because there is currently only one tool available for security testing, and they are constantly monitoring how AI models change (00:02:54).
Defining Operational Security (OPSEC) Tom Ryan defined Operational Security for the layman as controlling and monitoring what information is shared and what people can see. They highlighted that individuals are constantly giving up data, which is then sold, such as when registering for a driver’s license or paying bills, noting that this practice undermines personal OPSEC unless data is actively purged (00:03:51). Ryan recounted an instance where the only way they could find a billionaire’s actual home address was through their publicly available fishing license record (00:04:42).
Challenges to OPSEC in the Digital Age Chris J Snook observed that in the last 30 to 40 years, two generations (Gen X and Boomers) grew up without needing to worry about the constant sharing of personal information, contrasting it with the present where data brokers share personal details, even for those who are not actively online (00:05:33). The prevalence of social media introduces more OPSEC issues, such as geostamping on photos (00:07:25). Tom Ryan stated that despite any individual’s OPSEC efforts, data breaches are widespread, and they personally have been compromised in at least 27 of them through seemingly minor actions like registering for a conference (00:05:33).
Practical Steps for Personal OPSEC Addressing the overwhelm listeners might feel, Tom Ryan discussed the issue of “doxing” (the sharing of documentation or exposure) and noted that individuals need to educate their families, as lack of awareness is a common back door (00:08:24) (00:13:38). Ryan cited an example where the location of the head of MI6 was compromised because their spouse posted a public picture. Ryan explained that modern facial recognition can find old public pictures, and even innocuous images can reveal specific location data, including the building address and floor level, by analyzing EXIF data and other visual information (00:09:18) (00:14:31).
AI and Tracking Mechanisms in Social Platforms and AI Tools Tom Ryan warned that platforms like LinkedIn, Facebook, and Instagram use AI rules meaning that once content is posted, they own it, enabling pattern-of-life analysis and semantic analysis to profile a user’s personality based on their writing patterns. Open AI, as an example, watermarks both code and images; the file names often include a hash value that can be used to track the user, where and when the content was made (00:09:18). Ryan explained that these watermarks are used to prevent garbage data from contaminating their models, but people rarely realize this tracking occurs unless they examine the code they copied from the AI (00:10:41).
Initial Measures for a Family’s Digital Security When advising a normal family with something to lose, Chris J Snook sought guidance on how to start defining what to protect and how to become aware of unexpected concerns (00:12:32). Tom Ryan emphasized that even if one individual implements the best OPSEC, it is useless if the family is not taught the same practices (00:13:38). Ryan highlighted that even a simple photo can now reveal precise location information (00:14:31).
The Impact of Recent AI Developments (Open Claw/Moltbook) Chris J Snook and Tom Ryan referenced the recent viral phenomenon known as Moltbook (Moltbot), which saw an explosive, rapid adoption among developers, initially appearing as AI agents self-organizing on a social network. Snook observed that although much of the initial excitement about Maltbook’s AI autonomy might have been scripted, it offered a glimpse into an inevitable future of autonomous AI agents (00:17:20) (00:18:47). Financial incentives, such as one instance of an individual turning $50 into $248,000 overnight by creating a prediction market knockoff, drive rapid development, creating new, irreversible vulnerabilities (00:19:49).
New Attack Vectors with AI Agents Tom Ryan identified a new and alarming attack vector where malware can be pushed through AI agents operating on a social media platform and then ingested by other agents, creating a massive botnet (00:22:28). Chris J Snook elaborated on the concept of an AI social network, where artificial entities interact, adopt human attributes like personality and jobs, and even transact financially with digital asset wallets (00:23:21). Snook stressed the danger that AI agents could self-organize and communicate with each other, potentially sharing a human owner’s data without their knowledge, even if the user took precautions like using a dedicated computer (00:25:22).
The Rise of Deep Fakes and IP Protection The discussion touched upon the growing sophistication of deep fakes, which Tom Ryan stated are being used in areas like divorce cases involving fake phone calls and videos. Chris J Snook added that large language models are so adept at mimicking human writing styles that it will become increasingly difficult to discern real content from fakes (00:29:13). Snook suggested the need for a decentralized ID system stored on a public blockchain to verify content authenticity, but noted the widespread implementation is still far off (00:30:18).
Monetization and Taxation of AI Chris J Snook noted the rapid monetization efforts around AI, citing an example of an “AR Treasury Revenue Service” already launched in the Clawbot network (00:31:05). Ryan and Snook commented that governments are seeking to tax AI, and this focus on revenue generation often overshadows the consideration of unintended consequences and existing vulnerabilities (00:30:18).
Responsible AI Experimentation and Vulnerability Mitigation Chris J Snook advocated for calming down rather than slowing down in response to the rapid AI changes and encouraged listeners to look for opportunistic benefits (00:32:42). Tom Ryan explained that during the first week of any new platform, security holes are actively being sought out. Ryan identified API connections and web sockets as initial security focus areas (00:34:16). They warned that if an AI platform is connected to services like OneDrive, a “prompt injection” could allow a malicious actor to steal internal secrets (00:35:02).
Prompt Injection Explained and Physical Security Analogy Chris J Snook defined a prompt injection as embedding a malicious instruction within a seemingly innocent prompt, which can compromise a user’s code base without their knowledge (00:35:02). Tom Ryan added that code can be embedded in a document using white font, making it invisible to the human eye, but the code can still be uploaded and used to take control of the machine (00:36:00). Ryan emphasized the hacker’s mindset of exploiting vulnerabilities and cautioned that the typical business person who rushes to capitalize on AI often overlooks security risks, such as incurring unexpected $3,000 to $8,000 bills from services like Anthropic due to configuration issues (00:36:49).
Setting Up a Contained AI Environment Tom Ryan stressed that simply using an old machine for AI experimentation is risky because the AI will scan the entire machine. They recommended using a clean, locked-down build or a contained environment, suggesting that free tools like VMware and Fusion can be used to set up a Virtual Private Cloud (VPC) or a Virtual Machine (VM) (00:37:45). Chris J Snook highlighted the practical risk to families, such as a teenager using an old laptop linked to the family’s master Apple account and network, thereby opening the entire back door through an AI agent (00:38:36).
Historical Parallel to Digital Vulnerabilities To explain the widespread danger of unconfigured AI tools, Tom Ryan drew a parallel to the early days of Napster and LimeWire, noting that if these file-sharing tools were not configured correctly, they would access everything on a user’s machine (00:42:44). Ryan warned that connecting personal devices like an iPhone to a MacBook with iCloud setup means that even hidden folders could become available through Open Claw or similar platforms (00:44:18).
The Importance of Supply Chain Awareness Tom Ryan reminded listeners that everyone is part of someone else’s supply chain, so even if an individual views themself as “small,” they can become a primary target (00:45:35). Ryan cited a recent compromise of Iron Mountain, where 1.4 terabytes of compressed data were exfiltrated, and breaches at major companies like Hilton, Match.com, and Bumble (00:46:30). Snook and Ryan noted that many people still use their work emails for personal activities, creating major security vulnerabilities (00:48:38).
The Shift in AI Decision-Making and Security Flaws Tom Ryan observed that historically, a CISO or equivalent made high-level AI risk decisions, but now CFOs or CTOs are making choices based on company relationships with vendors like Microsoft or Open AI, often without adequate knowledge of the associated security risks (00:51:28). They mentioned that executive-driven decisions to use AI tools, such as GitHub Advanced Security or C-pilot, can inadvertently increase vulnerabilities and work for security teams by introducing bloat code (00:52:29). Ryan asserted that they can assess how vulnerable a company is by reviewing their job requisitions, which often reveal their supply chain and the tools they use (00:54:17).
The Entrepreneurial Opportunity in AI Chris J Snook and Tom Ryan acknowledged the immense promise of AI, particularly for small and medium-sized businesses, which make up the majority of the economy (00:56:41). Snook observed that while AI presents an opportunity to cut down on expensive SaaS subscriptions, it currently resembles the streaming model where initial cost savings have been replaced by a return to high monthly expenses through multiple subscriptions (00:58:02). Ryan emphasized that it is crucial to recognize that some large financial companies backing new AI tools may be focusing on IPOs or M&A deals rather than genuine security implementation (00:56:41).
Shift from SaaS to Agentic Solutions Chris J Snook posited that with no-code solutions and AI, people can move away from non-customizable Software as a Service (SaaS) tools to self-hosted, custom-built environments like their own CRM, which could be developed in days or weeks. This shift allows individuals to own their data and environment on their own servers and stack, contrasting with the limited ownership and connectivity often associated with leased SaaS products (00:58:53). Chris J Snook questioned whether everyone should be moving toward their own agentic solution, while acknowledging the vulnerabilities in the publishing platform that facilitates this world (00:59:43).
OpenAI IPO and Financial Strategy Tom Ryan and Chris J Snook discussed Amazon’s significant investment in OpenAI and the latter’s planned IPO by the end of the year (00:59:43). Chris J Snook suggested that OpenAI’s urgency to go public stems from financial vulnerability, anticipating a market bubble burst in a couple of years, allowing venture capital and preferred shareholders to benefit by “dumping it on retail” investors. This strategy is seen as a way to make OpenAI “too big to fail” financially (01:00:39).
Infrastructure and Energy Bottleneck Chris J Snook emphasized that current infrastructure, power, and compute resources are not adequately set up to support the rapid growth of AI, pointing to an “energy bottleneck” discussed in a deep dive they wrote on Substack with Ben Ryberg (01:00:39). Tom Ryan reinforced this point by mentioning that a conversation at the Cyber Breakfast Club focused entirely on who owns the power bill associated with AI’s new power (01:01:58).
The Convergence of Cyber and Physical Security Tom Ryan discussed the historical and ongoing attack vector through physical systems like camera and VoIP systems, which they first used in 2006, and are still being used in 2026. They noted the irony of industry finally having the conversation about converging cyber and physical security (01:02:50). Tom Ryan also highlighted a job posting for an AI engineer in the physical security world offering a salary of $156,000, which Chris J Snook suggested was too low for such an important job (01:03:52).
Privacy and Data Harvesting in the Home Chris J Snook and Tom Ryan discussed the loss of privacy, referencing the difficulties encountered years ago when working on “privacy as a choice” (01:03:52). Chris J Snook pointed out that devices like Ring doorbells and Nest systems effectively turn homes into “oil wells” where occupants are paying for the convenience of having their valuable data harvested, comparing an “Apple ID” to an “inside joke” because it dictates identity (01:04:53). Tom Ryan shared that they give away Faraday bags from the company SLNT as swag to new executive customers to help them manage this reality (01:06:21).
Surveillance and Pattern of Life Tracking Chris J Snook and Tom Ryan highlighted how technology tracks individuals’ “pattern of life,” citing the placement of long hallways with cameras and sensors in airports to pick up IP addresses from laptops and phones (01:06:55). Chris J Snook questioned how easily authorities can find people given the pervasive tracking infrastructure, suggesting that delays in finding individuals indicate “something else going on” (01:07:35).
Rushing In vs. Leading In with New Information Chris J Snook described their reaction to a recent incident (Moltbots) where they initially felt “FOMO” but prioritized understanding the consequences of acting or not acting before publishing content (01:09:11). They chose to interview Tom Ryan to gain a “level set” perspective before publishing, aiming to “calm down so we can speed people up,” acknowledging that rushing in often leads to negative consequences (01:10:03).
Asymmetric Mindset and Offensive Security Services Tom Ryan detailed the shift to operating their own business, driven by previous employers undercutting their value as a sales engineer despite bringing in significant revenue (01:12:37). Their current business focuses on offensive security, encompassing both old-school and AI-related approaches, by viewing everything, including humans and DNA, as “code” with vulnerabilities. Tom Ryan’s company serves government agencies and is involved in simulations and tabletops around threats like deep fakes and complex cyber-physical attacks (01:13:29) (01:16:12).
Deep Fake Security Threats in Real-Life Hostility Tom Ryan explained that their talk on deep fakes focuses on using them to compromise executives, including putting them in “a hostile area” potentially leading to a “chained attack” resulting in real-life kidnapping. Chris J Snook clarified this concept by relating it to a scenario in the super yacht business where actors use deep fakes of clients’ voices to reroute large deposits, resulting in the theft of funds and the client losing their reservation (01:13:29) (01:15:15). Tom Ryan confirmed that their company conducts simulations of these offensive security attacks (01:16:12).
Vulnerabilities in Luxury Assets and Hacking through OT/IT Separation Tom Ryan discussed the cybersecurity risks associated with super yachts, which are now required to have cyber security insurance. The major vulnerability is the lack of separation between IT and OT (Operational Technology) networks, enabling a phone, even when connected via Starlink, to be turned into a command and control center, potentially compromising the yacht’s systems, such as the ballast system, and causing it to sink (01:17:02).
The Need for a Cognitive Solution to AI Challenges Chris J Snook pondered whether society will ever revert to less complex methods, suggesting that the solution to current challenges is not technical but “cognitive,” involving training virtues and ethics in the next generation (01:18:02). They referenced the “fourth turning” theory, which suggests that society will eventually enter a new cycle with new institutions (01:19:14). Tom Ryan did not believe that bots would save people from themselves, stating that a “harsh, ugly reality check” will occur to determine what works and what does not (01:21:23).
AI as a Tool and the Role of Intent Chris J Snook and Tom Ryan agreed that AI is fundamentally a tool, comparable to a scalpel or a firearm, with its impact determined by the user’s intent—it can be used for good or for harm (01:22:12). Tom Ryan stated that red teaming has become “a lot easier” with AI, which can be used to bypass facial recognition technology (01:22:57). Tom Ryan observed that voice recognition systems, like the one Clubhouse implemented with Stripe, are easily defeated by technologies like Eleven Labs (01:23:56).
The Challenge of Narrative Control with AI Tom Ryan highlighted that a significant, risky capability of AI is its effectiveness at “controlling the narrative” through narrative operations, which they have witnessed in various controversial events. They discussed how competing interests use algorithms and overnight experts seeking clicks to control public perception of issues, such as the SIG P320 firearms controversy or the coverage of the Charlie Kirk shooting (01:31:52). Tom Ryan added that they have seen how the algorithms in social media were controlled to push narratives like “angels versus demons” following certain events (01:33:39).
Simple Steps for Personal Security and Risk Management When asked for simple, proactive security steps, Tom Ryan recommended that every time an individual receives a terms of service upgrade, they should copy and paste the document into an AI tool like Grock to analyze the risks (01:34:37). Tom Ryan stated that they will be releasing a guide, potentially with scripts on a GitHub repo, on how to secure virtual machines, although they noted the complexity of the task (01:35:40).
Focus on Vulnerability, Exploitation, and Protection Tom Ryan emphasized that all security, whether cyber or physical, revolves around three questions: “Am I vulnerable, am I exploitable, how do I protect myself?” (01:40:51). They clarified that security should be viewed as “reputational risk management,” considering the impact of a compromise on one’s reputation and employability (01:41:36).
Risk Profiling and Lifestyle Choices Tom Ryan and Chris J Snook discussed how risk profiles vary significantly based on lifestyle and business activities, contrasting Warren Buffett’s low-key existence in Omaha, Nebraska, with his $35-an-hour security guards with a more high-risk individual like Elon Musk. They concluded that the best advice is for individuals to understand their personal risk profile and determine what actions they need to take to live a life of peace (01:41:36) (01:44:07).












