Wealth Matters 3.0
THE ATOMIQ LEVEL
EP023: The Fight to Keep Human Judgment, Privacy, and Ownership on Your Desk

ATOMIQ LEVEL Podcast with Tor Hagemann brought to you by @MarketStack

Watch the Video Interview

The Man Building an AI Lockbox Was Born in Denmark and Started His Career at Palantir

Before Tor Hagemann became the founder and CEO of Lovarys, before he began building what he describes as an “AI lockbox” for lawyers, accountants, and professionals trusted with sensitive client data, he was a Scandinavian kid with a machine on his desk and a question forming quietly in the background of his life.

What does it mean to really own your technology?

Tor was born in Copenhagen, Denmark, and grew up mostly in the United States, in Southern Wisconsin, just north of Chicago. His background gave him a split-screen view of the world early on: Scandinavian roots, American upbringing, and a family life that moved across borders just as the global financial and legal systems were becoming harder for ordinary people to navigate. After he finished school, his family returned to Europe for a time and lived near London. That chapter brought with it a messy education in cross-border tax complexity, U.S. citizenship issues, Singapore-linked corporate arrangements, and later, the unexpected consequences of Brexit.

For many kids, those details would have been background noise. For Tor, they became a kind of early apprenticeship in systems. He saw, up close, how law, money, identity, jurisdiction, technology, and institutions could wrap around a family’s life. He saw that the rules were real, but not always simple. He saw that the people most affected by complexity often had the least control over it.

And then he started tinkering.

As a kid, Tor installed Linux on an old Windows computer. He ran local servers. He maintained a chat system for friends. He learned FTP because the internet connection was not good enough to depend on remote services. He built things locally because he had to. That practical constraint became a philosophy before it became a company.

The lesson was simple: when the network is unreliable, ownership matters.

The Childhood Tinkerer Who Learned to Prefer Local Control

There is a certain kind of technologist who begins not with theory, but with necessity. Tor seems to belong to that lineage. He did not start by chasing abstraction. He started by trying to make machines do useful things in the real world, inside the limits of the tools and bandwidth available to him.

He took a class in C++ in the early 2000s. He ran a wiki on Ubuntu. He experimented with hermetic systems, not because “local-first” had become a fashionable slogan, but because running things locally was how he learned to make them work. Long before AI became a boardroom anxiety or a venture-backed gold rush, Tor was developing a relationship with technology rooted in control, transparency, and practical sovereignty.

That history matters because Lovarys did not come from nowhere. It came from a life spent noticing the difference between using a tool and depending on someone else’s machine.

In the old world, ownership felt obvious. You bought a CD, a tape, a book, a device, or a house. It was yours. You could lend it, resell it, repair it, store it, break it, or keep it on a shelf. But the digital age quietly rewrote the meaning of ownership. Music became streaming. Software became a subscription. Files became cloud accounts. Identity became a login. Even intelligence, in the AI era, is increasingly something we rent from a remote system we do not control.

That shift is more than economic.

It is civilizational.

Because when ownership becomes access, freedom becomes conditional. When intelligence runs on someone else’s computer, privacy becomes a promise instead of a fact. When professional judgment depends on remote infrastructure, every sensitive client document becomes part of a trust tradeoff most firms have not fully examined.

Tor noticed that tradeoff early. Lovarys is his answer.

Palantir, Privacy, and the Education of a Systems Builder

Tor studied mathematics at university but left before finishing to take a job in Silicon Valley in the early 2010s. His first major professional stop was Palantir Technologies, a company that has become almost mythic in public conversation because of its work at the nexus of data, government, defense, intelligence, and enterprise systems.

But Tor’s account of that experience is more nuanced than the caricature.

He describes Palantir, at least during his time there, as intensely focused on privacy and civil liberties. The company had lawyers dedicated to privacy and civil liberties, and internal conversations gave employees a direct line to ask hard questions about contracts and ethical implications. In other words, Tor’s early professional technology education did not happen inside a naive belief that data was just data. It happened in an environment where privacy, power, access, and responsibility were live issues.

That matters because the current AI boom often treats data as fuel and privacy as a feature. Tor’s career taught him that trust is not a marketing layer you paste on top of a product. Trust is architecture. Trust is governance. Trust is custody. Trust is deciding where the computation happens and who holds the key.

After Palantir, Tor moved through other technical worlds: larger companies, smaller startups, insurance core systems, logistics, asset tracking, postage infrastructure, public key and private key systems, digital signatures embedded into 2D barcodes, and zero-knowledge-style trust mechanisms designed to prove something happened without exposing everything behind it.

By the time he founded Lovarys, he was not just another founder riding the AI wave. He was a systems builder who had spent years working around the central question of modern technology:

Who controls the machine, the data, and the proof?

The Problem: Your AI Is Running on Someone Else’s Computer

Lovarys was born from a deceptively simple observation.

Nearly everyone using AI is running it on someone else’s computer.

That may sound harmless when the prompt is casual. It becomes much less harmless when the user is a lawyer, accountant, financial advisor, family office, doctor, founder, board member, or professional holding sensitive client records, private strategy documents, tax data, estate plans, intellectual property, litigation files, employment details, or proprietary deal information.

The mainstream AI model asks professionals to trust a remote intelligence layer with information they are often ethically, legally, or contractually obligated to protect. The industry has dressed that up with enterprise agreements, data policies, compliance language, and assurances. Some of those assurances may be meaningful. But Tor’s premise is more fundamental.

Why send the sensitive information out in the first place if the intelligence can come to the data?

That is the heart of Lovarys. It is a hardware, software, and managed-services approach for putting AI capability on a box the user controls. In the conversation, Tor describes using lightly modified or off-the-shelf hardware, such as a Mac Mini in a custom casing, not because it sounds futuristic, but because the technology is understandable, powerful, and less exposed to vendor risk. The goal is not to mystify the buyer. The goal is to make the system legible enough to trust.

Lovarys is not trying to sell magic.

It is trying to sell custody.

The AI Lockbox

The most memorable metaphor in the conversation is the “AI lockbox.” Lovarys issues hardware security keys to users, turning access into something physical and intentional. The key can include biometric security. If the key is not authorized, the box cannot be used. In plain language, the user has a physical relationship with the security boundary.
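The gating idea behind the lockbox metaphor can be sketched in a few lines: the box refuses to operate unless a physically presented key matches an enrolled allowlist. This is an illustrative sketch only; the names (`AUTHORIZED_KEYS`, `authorize`, the fingerprint strings) are hypothetical and are not Lovarys APIs.

```python
import hmac

# Hypothetical allowlist of enrolled hardware-key fingerprints.
# In the real product, enrollment and biometrics live on the device itself.
AUTHORIZED_KEYS = {"key-fp-a1b2c3", "key-fp-d4e5f6"}

def authorize(presented_fingerprint: str) -> bool:
    # Compare against each enrolled fingerprint in constant time,
    # so a timing side channel cannot reveal partial matches.
    return any(
        hmac.compare_digest(presented_fingerprint, k) for k in AUTHORIZED_KEYS
    )

unlocked = authorize("key-fp-a1b2c3")  # an enrolled key opens the box
rejected = authorize("key-fp-unknown")  # an unknown key does not
```

The point of the sketch is the shape of the boundary, not the crypto: access is a physical, enumerable set of keys rather than a password floating in someone else's cloud.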

That is powerful because it restores a familiar intuition. People understand a lockbox. They understand a safe. They understand a key. They understand that some things should be kept inside a controlled environment.

Lovarys applies that intuition to artificial intelligence.

Inside the box, the model can run locally. The data can remain local. The documents can be indexed locally. The retrieval-augmented generation layer can query private files without broadcasting them to a cloud model. The professional can ask questions of their own documents while preserving a stronger security posture around client information.

This is not merely a technical feature. It is a psychological bridge. For lawyers and accountants who worry about privilege, liability, confidentiality, or client mandates that say “no AI,” a local AI lockbox may offer a way to use the technology without surrendering the trust that makes their profession possible.

Air-Gapped Intelligence and the Strange Importance of GPS

One of the most fascinating parts of the conversation is how physical the future of AI becomes when Tor explains the product. We are used to thinking of AI as something floating in the cloud, a disembodied intelligence summoned through a chat window. Lovarys brings it back down to the desk.

The system supports air-gapped operation, meaning no network connection carries data in or out of the box. Wi-Fi and Bluetooth can be administratively disabled. Storage can be handled through local NVMe enclosures. A vector database such as Weaviate can index documents for retrieval-augmented generation, allowing the model to answer questions based on the user’s private files.
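The retrieval step of that pipeline can be sketched without any external service. The toy below uses bag-of-words cosine similarity as a stand-in for a real vector database with learned embeddings (the conversation names Weaviate); the document strings and function names are illustrative only. The key property it demonstrates is that ranking happens entirely in local memory, so nothing leaves the machine.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Whitespace tokenization as a crude stand-in for an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank private documents locally; the top-k would then be handed
    # to a locally running model as context.
    qv = vectorize(query)
    return sorted(
        documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True
    )[:k]

docs = [
    "Client estate plan: trust structures and beneficiaries.",
    "Quarterly tax filing notes for the Jensen account.",
    "Office lunch menu for Friday.",
]
top = retrieve("estate trust beneficiaries", docs, k=1)
```

A production system would swap the bag-of-words step for embeddings and the in-memory sort for a vector index, but the data flow is the same: query in, relevant private passages out, all inside the box.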

Then comes the unexpected detail: GPS.

Lovarys adds a GPS receiver because the Mac Mini does not include a built-in GPS chip. That receiver can help provide accurate local clocking and act as an entropy source for key generation on an offline device. It is the kind of detail most users may never think about, but it reveals the seriousness of the architecture. When a system is offline, basic assumptions change. Time, identity, randomness, custody, and backup all have to be reconsidered.
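The entropy point can be made concrete with a hedged sketch: mix timestamps from an external timing source into key material alongside the operating system's entropy pool. Here `time.time_ns()` stands in for GPS-derived timestamps, since the actual receiver interface is not described; this is an illustration of the idea, not a vetted key-generation scheme.

```python
import hashlib
import os
import time

def derive_seed(samples: int = 8) -> bytes:
    # Accumulate timing samples; on an offline box, an external clock
    # such as a GPS receiver can supply these.
    h = hashlib.sha256()
    for _ in range(samples):
        h.update(time.time_ns().to_bytes(8, "big"))
    # Never rely on timing jitter alone: fold in the OS entropy pool too.
    h.update(os.urandom(32))
    return h.digest()

seed = derive_seed()  # 32 bytes of mixed key material
```

The design choice worth noticing is the mixing: each additional source can only add entropy to the SHA-256 state, so a weak timing source degrades gracefully instead of compromising the key.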

This is where Tor’s philosophy becomes tangible. Local AI is not simply taking a chatbot and putting it in a box. It requires rebuilding trust from the hardware up.

Leasing Intelligence Versus Owning the Conditions of Thought

The deeper philosophical thread of the episode is the idea that we may be entering an age of leased intelligence.

People once worried about leasing software. Then they leased storage. Then they leased media. Now, increasingly, they lease cognition. Frontier models become the place where people draft strategy, analyze documents, write emails, summarize knowledge, build workflows, and make decisions. The more useful these systems become, the more dependent users become on the infrastructure they do not own.

Tor does not present this as a simple good-versus-evil story. He is realistic. Everyone trusts someone. There is always a tradeoff between freedom and efficiency. Cloud models are powerful. They move fast. They are convenient. For many use cases, they may be perfectly acceptable.

But not all use cases are the same.

A law firm handling privileged documents is not the same as a student asking for a recipe. A family office analyzing estate structures is not the same as someone writing a birthday card. An accountant working with sensitive tax records is not the same as a casual user summarizing public information.

The question is not whether cloud AI is useful.

The question is whether professionals should have the option to run intelligence where their data already lives, under conditions they can inspect, govern, and defend.

That is the freedom Lovarys is trying to preserve.

Trust Is the Real Interface

Another important thread in the conversation is Tor’s refusal to reduce adoption to specifications. He understands that buyers do not make serious technology decisions on logic alone. They need ethos and pathos before logos. They need to trust the people, the use case, and the architecture before they care about the fine print.

That is why Lovarys is taking a consultative approach. Tor wants to identify the use case first. E-discovery. Intellectual property law. Mergers and acquisitions due diligence. Accounting. Client mandates that prohibit cloud AI. Professional workflows where sensitive data cannot casually leave the building.

This matters because the future of AI adoption will not be determined only by speed. Society does not adopt trust-sensitive technology at the same pace that engineers ship it. Tor compares this dynamic to the COVID-era vaccine timeline: the technology may move quickly, but social trust, institutional confidence, and behavioral adoption move more slowly.

Distrust is increasingly the default.

That is why trust has to be designed into the system instead of assumed after the fact.

Privacy Is Not About Hiding

One of the most human moments in the conversation comes when Tor talks about privacy. Privacy, in his view, is not merely about hiding wrongdoing or keeping secrets. It is about preserving some part of individual identity. It is about the right to think, work, test, draft, fail, and reason without turning every private act into someone else’s data asset.

That distinction is crucial in the age of AI. If every prompt, draft, client record, family document, strategic memo, medical note, legal argument, tax file, and investment thesis becomes training exhaust, then the inner life of a person or firm begins to dissolve into the machinery of someone else’s platform.

Privacy is not paranoia.

Privacy is personhood.

And in professional services, privacy is also the foundation of duty. Lawyers cannot casually expose privilege. Accountants cannot casually expose client tax data. Advisors cannot casually expose family balance sheets. Boards cannot casually expose confidential deliberations. Families cannot casually expose their most sensitive documents and expect trust to remain intact.

Lovarys is built for that world, the world where AI is necessary but uncontrolled exposure is unacceptable.

The Human Decision Point

By the end of the conversation, it becomes clear that Tor is not betting on a fully agentic future where machines simply buy from machines and humans disappear from the loop. He is focused on the markets where human beings, boards, partners, professionals, and trusted advisors still make the final call on tradeoffs.

That is not nostalgia. It is realism.

In high-liability environments, machines may assist, accelerate, search, summarize, and surface patterns. But responsibility still lands somewhere. The lawyer signs the filing. The accountant signs the return. The advisor makes the recommendation. The board approves the system. The family decides what level of privacy and convenience it can tolerate.

AI should free humans from repetitive operational work so they can focus on deeper, more meaningful client work. But the more powerful the machine becomes, the more important the human decision point becomes.

That is especially true when trust has already been broken across institutions. In a low-trust society, the winning technology may not simply be the most capable. It may be the technology that restores the user’s sense of control.

Why You Should Listen

This ATOMIQ LEVEL conversation with Tor is not just about a hardware box. It is about the next battleground in artificial intelligence: ownership, privacy, custody, and trust.

It is about the Scandinavian kid who learned to run systems locally because the network could not be trusted, then went on to work at Palantir, study privacy and civil liberties from inside the data infrastructure world, build systems around proof and identity, and ultimately create a company designed to bring AI back under the user’s control.

It is about why lawyers, accountants, family offices, advisors, and small-to-medium businesses need to think harder about where their intelligence runs and who holds the keys. It is about why the convenience of cloud AI may not be enough for use cases where client confidentiality, privilege, proprietary data, and professional liability are on the line.

Most of all, it is about a future that does not have to be a choice between rejecting AI or surrendering everything to it.

There is another path.

Put the intelligence on the desk.

Keep the keys with the user.

Let the human decide what deserves to leave the room.

Press play on this episode with Tor Hagemann of Lovarys, and you will never look at AI, privacy, ownership, or professional trust quite the same way again.

The Real Risk Is Doing Nothing!

~Chris J Snook
