BREAKING: While We Watched Musk and LA, Trump Redefined the Internet
Executive Order from 6 June quietly transforms the internet into a tool of regime control, building in immunity for Big Tech.
What we’ll cover here:
Executive Order: Sustaining Select Efforts To Strengthen The Nation’s Cybersecurity
The cloud is no longer neutral
What this Executive Order does
The authoritarian internet
Dear friends
On 28 May, in response to a post that asked "Now that Elon is on the way out, who will be Trump’s ringleader?" I responded:
"Respectfully... I think the questions we should be asking ourselves are:
Now that Musk’s on his way out, what’s he doing with all that data DOGE stole?
Why is Musk building Colossus, the world’s biggest super-computer?
What part could Grok AI play in all of this?"
Now, we may have the answers.
As I wrote in my last post, on Friday — while the world watched Trump’s feud with Elon Musk and the deployment of the National Guard in Los Angeles — the White House released six new Executive Orders. Four of these EOs lay the foundation for the complete consolidation of state-aligned, tech-enabled authoritarian power.
I then wrote about the three orders that reshape American airspace for surveillance and control. Today, I want to unpack the cybersecurity Executive Order — Sustaining Select Efforts To Strengthen The Nation’s Cybersecurity — as part of the broader surveillance machinery being quietly assembled around us.
First, it’s important to reiterate something I’ve said many times before. I don’t write this to overwhelm or dishearten you, but because it’s essential we understand exactly what’s being built — and what must be dismantled. Without that clarity, we can’t develop a strategy to effectively oppose it. And there is still time and space to do that. (I’m working on outlining the broad strokes of a strategy, to get us started.)
Equally, I should warn you that this post is not for the faint of heart. If you’re already feeling overwhelmed by what’s unfolding, please take a beat. Practice ujjayi breath, take a walk, get yourself grounded. Then come back to it — it will wait, but not for ever. Because it’s important you know what you’re dealing with so you can prepare for it.
This Executive Order sets out to redefine the entire digital world: who builds it, who governs it, who gets to use it, and who is watched while doing so. I’m going to walk you through what this means — for you, for civil society, and for the world at large.
We need to start by understanding the terrain we’re actually standing on.
The Cloud Is Not Neutral
Most people still think of Amazon as a shop, Microsoft as “Office”, Google as a search engine, Apple as the phone in their hand, Musk as Tesla. But each has significantly expanded their services and reach since their early days. Amazon Web Services, Microsoft Azure, and Google Cloud (Alphabet) now form the backbone of what many think of as ‘the cloud’, and are the dominant global players.
Amazon Web Services (AWS) runs roughly a third of the world’s cloud infrastructure — including hosting data for hospitals, schools, police departments, immigration systems, and more.
Microsoft Azure hosts government agencies, big companies, public services, voting systems, and the world's largest professional surveillance network: LinkedIn.
Google/Alphabet runs vast parts of the internet’s backbone — from search to maps, Gmail to Android, YouTube to ad tech — and hosts much of it in its own cloud. That gives it deep control over what information people can find, how it’s delivered, and who gets tracked along the way.
Apple not only controls the hardware layer — the phones, watches, and tablets people use every day — but it also runs cloud services that sync data, store messages, and connect devices. That means Apple decides what apps can run, what gets backed up, and who can track what, all from inside a tightly controlled ecosystem.
Meta still shapes global narratives through Facebook, Instagram, and WhatsApp — quietly moderating, amplifying, or shadow-banning based on ever-changing, opaque rules.
And beyond the cloud infrastructure:
Elon Musk controls Starlink, a global satellite network that bypasses terrestrial infrastructure and serves over 70 countries. He’s now building Colossus, the world’s largest supercomputer. And he owns Grok, an AI model integrated with X (formerly Twitter), being groomed to compete with national intelligence services in speed and reach.
In the wings, we have Palantir — a data analytics company that builds surveillance and decision-making software for governments, militaries, and corporations. It was co-founded and is chaired by Peter Thiel, a key funder of authoritarian-aligned projects, who is building surveillance logic into every corner of society and openly advocating for it. Thiel also co-founded PayPal with Musk, and is connected to many of the key players in Trump's government, most notably his protégé, J.D. Vance.
These companies no longer just provide services — they operate the infrastructure of our daily lives. And, crucially, they are not neutral providers.
Each one is politically entangled with Trump, having donated significant sums to his election campaign. They are not only aligned with a US administration that is dismantling democratic checks; they are now likely to function as instruments of regime-aligned control.
What This Executive Order Does
This Executive Order rewrites the rules of digital governance quietly, but with sweeping consequences. Here’s what it sets in motion:
1. The state now decides how software gets built
When the government controls how software is made, it also controls what can be built, who can build it, and how it’s allowed to function. This EO accelerates changes to the federal cybersecurity frameworks that set the standards for building and securing software used by government agencies, contractors, and critical infrastructure like power or transport. The preliminary version is now due by 1 December 2025, with a final version 120 days later.
While this won’t cover all software by law, in practice, these standards will ripple outward. If your software needs to connect to government networks, meet legal requirements, or run on widely used platforms, it has to follow the new standards. Otherwise, it may be blocked or made unusable.
For end-users, this matters because only approved software will continue to work as expected. That software can be designed to collect data, monitor activity, and control how your devices behave. Over time, alternatives may quietly disappear.
This isn’t just about losing options. It’s a way to embed government rules into the digital tools people use every day — like apps, websites, and devices — so they can silently shape what you’re allowed to do, say, or access online.
2. The state decides what tech you can use in your home
A new label is being introduced for internet-connected devices — things like smart doorbells, thermostats, fitness trackers, and baby monitors. The label is only required for products sold to the government, but because companies are unlikely to make two versions of the same product, it is likely to become the standard for everyone.
If a device doesn’t meet the government’s criteria, it could be blocked not just from government use, but eventually from app stores, networks, or compatibility with other systems. That means the everyday devices you use at home could stop working — not because they’re broken, but because they’re not approved.
However, the real concern is the potential for surveillance. Many of these devices include always-on sensors, cameras, and microphones, often running without the user realising. They can record images, sound, biometrics, and activity data, sending that information to cloud servers.
Once these devices are tied to government-defined rules, it raises concerns about how much personal information they collect, who can access it, and whether people will have any real way to opt out. Future rules could quietly reshape what they’re allowed to do, and what they’re required to record.
3. Enforcement is automated by “rules-as-code”
Agencies are directed to take the next step of converting cybersecurity rules into machine-readable code. This means the rules will no longer (just) be published in documents, but written in a format that software systems can read, interpret, and enforce automatically.
In practical terms, this is a shift to automated control. Devices and software will be programmed to apply the rules directly, for example, by blocking access to certain apps, issuing alerts, restricting updates, or shutting down functions. These actions are likely to happen without human review, and without warning.
There will be no time for discretion, no space for context, and often no appeal process. The system will simply act, because the rules are built into the code.
This isn’t a distant possibility. The Executive Order specifically launches a “rules-as-code” pilot programme by 2026. It directs the Office of Management and Budget (OMB), NIST, and CISA to create machine-readable versions of cybersecurity policy, and to align agency decision-making with those systems. It’s part of a broader plan to make governance more “efficient”, but in practice, it removes flexibility and hands decision-making over to automated systems.
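To make “rules-as-code” concrete, here is a minimal sketch of my own. Nothing in the Order specifies a format; the rule fields, the attestation_status value, and the enforce function are all hypothetical. The point is that once a rule is written as data, a device can apply it mechanically:

```python
import json

# Hypothetical illustration of "rules-as-code": a policy expressed as data,
# applied automatically by the device with no human in the loop.
# The rule format, field names, and values are invented for this sketch.
POLICY_JSON = """
{
  "rule_id": "CYBER-2026-014",
  "description": "Block software without an approved attestation",
  "condition": {"field": "attestation_status", "equals": "approved"},
  "action_on_fail": "block_install"
}
"""

def enforce(policy: dict, software: dict) -> str:
    """Apply the machine-readable rule to a piece of software metadata."""
    cond = policy["condition"]
    if software.get(cond["field"]) == cond["equals"]:
        return "allow"
    # No review, no context, no appeal: the outcome is whatever the rule encodes.
    return policy["action_on_fail"]

policy = json.loads(POLICY_JSON)
print(enforce(policy, {"name": "SecureNotes", "attestation_status": "approved"}))  # allow
print(enforce(policy, {"name": "OpenMesh", "attestation_status": "unknown"}))      # block_install
```

The syntax doesn’t matter; what matters is that the “decision” is a lookup. There is nothing in that function for an official to override, and nothing for a user to argue with.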
In a surveillance state, this changes the role of technology. Devices no longer just collect data or follow user instructions — they enforce policy. A phone could refuse to install an app. A laptop could block a connection. A smart home device could go dark if it fails to meet security rules — rules you can’t see, written in code you can’t access, triggered by decisions you can’t question.
And since these systems operate silently and instantly, there may be no visible sign that enforcement is happening, other than that something has stopped working.
This is how digital systems begin to act on behalf of the state, not the user. And once the rules are embedded in code, even well-meaning officials may lose the ability to intervene.
4. State control of access to next-generation encryption
By 2030, the US government plans to replace current encryption systems with new ones designed to resist future threats from quantum computers. These are powerful machines that could one day break the encryption we use now to protect sensitive data.
This shift affects more than just national security. Encryption is what protects your bank accounts, medical records, passwords, text messages, and cloud backups. It’s the reason strangers can’t read your emails or steal your identity every time you go online.
Under this EO, the government isn’t just encouraging better encryption — it’s controlling the process. It decides which companies are allowed to offer these new protections, and which products are considered trustworthy. Tools that don’t meet government approval may be blocked, restricted, or even made illegal.
That gives the state enormous power over who is allowed to have digital privacy, and who isn’t. And if the government wants a back door in the encryption it approves for general use, it means your data may still look secure, but could be accessed without your knowledge or consent.
For most Americans, that removes the basic protection encryption is meant to provide. You wouldn’t be able to check whether your messages, health records, or stored files are truly private. You’d have no say in what protections your tools use, and no way to replace them if you’re not comfortable.
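For a sense of what’s at stake, here is a small Python sketch (the hostname is just a placeholder) of the kind of check anyone can run today: open a TLS connection and see which protocol and cipher suite is actually protecting it. Under a gatekept standard, the only suites on offer would be the approved ones, and a mandated back door wouldn’t show up in output like this anyway.

```python
import socket
import ssl

# Inspect the encryption protecting an ordinary HTTPS connection.
# "example.com" is a placeholder; substitute any site you use.
hostname = "example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cipher_name, protocol, secret_bits = tls.cipher()
        print(f"Protocol:     {protocol}")      # e.g. TLSv1.3
        print(f"Cipher suite: {cipher_name}")   # e.g. TLS_AES_256_GCM_SHA384
        print(f"Key strength: {secret_bits} bits")
```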
In a surveillance state, this is not just a security update. It’s a shift in power. The ability to speak, store, and move information securely — something most people take for granted — would no longer be yours to decide.
5. State-controlled integration of AI into cybersecurity
The Executive Order directs federal agencies to make artificial intelligence (AI) a central part of how their digital systems are protected. AI will take over tasks like scanning for vulnerabilities, applying software patches, and identifying security threats — across all government networks.
To train these AI systems, federal agencies will begin sharing datasets with researchers. Some of these datasets may be public, but others will only be released under secure, restricted conditions. The Order specifies that access should be granted “to the maximum extent feasible” while considering “business confidentiality and national security”, which suggests some of the material may be sensitive. Ultimately, the government decides what gets released, to whom, and under what terms.
This matters because an AI system can only recognise what it’s been trained to see. If the underlying data is biased, incomplete, or collected without proper safeguards, the system’s decisions will reflect that. Once deployed, the AI could block access, raise alerts, or limit digital activity — without human oversight and without users knowing why.
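To illustrate that point with a toy of my own (this is not any real government system, just a sketch of the principle): a scorer trained on labelled examples will faithfully reproduce whatever bias the labels contain.

```python
from collections import Counter

# Toy "threat" scorer: it learns only from the examples it is given.
# One deliberately biased label (organising == threat) is enough to make
# it flag harmless activity later. All data and labels are invented.
LABELLED_EXAMPLES = [
    ("server breach detected on host", "threat"),
    ("ransomware note found in backups", "threat"),
    ("union members organising a walkout", "threat"),  # the biased label
    ("weekly status report attached", "benign"),
    ("password reset requested by user", "benign"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"threat": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def score(model, text):
    """Crude score: words seen more in 'threat' examples push it above zero."""
    return sum(model["threat"][w] - model["benign"][w] for w in text.split())

model = train(LABELLED_EXAMPLES)
print(score(model, "neighbourhood group organising a food drive"))  # positive: flagged
print(score(model, "quarterly status report attached"))             # negative: ignored
```

Scale that toy up to a large model trained on contested data and the same dynamic applies; the bias just becomes much harder to see.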
I have significant concerns about the data that might be used. This recent whistleblower report from inside the Department of Government Efficiency (DOGE) alleges that staff accessed internal systems and extracted sensitive records, including information about union complaints and whistleblower claims. This access reportedly bypassed oversight, used covert data exfiltration methods, and was later concealed by deleting logs.
If these claims are accurate — and if data gathered via the DOGE raids is used to train or inform AI — it’s a major red flag. What the government counts as a “threat” could be shaped not by public safety concerns, but by political interests. People flagged by government-run AI threat detection systems might never know why it's happened — or even that it has.
The risk isn’t only about how the technology works. It’s about who decides what it learns, and what happens when those decisions can’t be questioned. In a system where AI decides what behaviour is suspicious, and government agencies control the data that AI learns from, everyday people could be misclassified, restricted, or monitored — without recourse, and without transparency.
6. Big Tech is immune from cyber accountability
One of the most consequential changes in this Executive Order is an edit to an earlier one — Executive Order 13694, originally issued by President Obama in 2015. That order gave the U.S. government the power to impose sanctions on people or companies involved in major cyber-related harm, including actions that targeted Americans or U.S. institutions.
Originally, it applied to “any person.” The new amendment changes that to “any foreign person.”
This shift removes the government’s ability to use Obama’s cybersecurity order against US-based tech companies — even if their technology is used to cause digital harm inside the country. From now on, only foreign individuals or organisations can be penalised under that framework.
In practical terms, this creates a gap. If a domestic tech company helps identify protest organisers, suppress dissent, enable harassment, or interfere with democratic processes — like voter suppression — the government no longer has this tool to respond. Other legal routes may still apply, but this particular mechanism — which once allowed swift executive action — is now off the table for domestic misuse.
It’s a quiet but far-reaching change, narrowing the space for accountability and giving aligned companies more room to operate without scrutiny.
7. The regime's surveillance backbone need not comply
The Executive Order does not apply to systems labelled as National Security Systems (NSS). These include military networks, intelligence tools, and other classified technologies used for national defence or covert operations.
That means while civilian systems — from agency networks to public-facing infrastructure — must follow strict new rules for cybersecurity, software standards, and automation, these classified systems are not subject to the same requirements. They operate separately, outside the frameworks being imposed on everyone else.
In practice, this creates a two-tier system. The tools we use every day are tightly controlled and standardised. But the systems the government uses to monitor, track, or conduct surveillance face no such constraints under this order. Their design, use, and oversight remain hidden — and exempt from the rules that now govern public and commercial technology.
The Authoritarian Internet
The cloud is no longer neutral. It is becoming the architecture of soft coercion — capable of controlling speech, behaviour, access, and economic participation without a single shot being fired.
Amazon Web Services (AWS): If AWS decides your service isn’t trustworthy — or if the government tells them to cut you off — your entire business or platform could disappear overnight. No warning, no workaround, no appeal.
Microsoft Azure: If you’re building tools to support voting rights, healthcare transparency, or public protest — and you’re not in favour with the regime — Azure can quietly deny access, flag your accounts, or block your integrations. And LinkedIn (which Microsoft owns) can silently limit your reach, hide job posts from you, or monitor what networks you’re trying to build.
Google (Alphabet): If your organisation is pushing back against authoritarian policies, your website might still exist — but it could drop off the first ten pages of search results. Your emails might go to spam. Your YouTube videos could be demonetised or age-restricted, making them disappear from view. Google doesn’t need to ban you to silence you — it can simply make you invisible.
Apple makes the phones, watches, and tablets millions of people use every day — and it controls what apps can run on them. If your group relies on a secure messaging app or encrypted organiser tool that Apple decides not to approve, your entire movement might be locked out of communication, especially if your supporters don’t know how to install alternatives. And Apple doesn’t need to tell you why.
Meta (Facebook, Instagram, WhatsApp) controls how people connect. If your post or campaign gains traction and the algorithm doesn’t like it, Meta can reduce its reach to zero. They don’t need to delete your content — they can simply make sure no one else sees it. WhatsApp can limit message forwarding, breaking protest organising chains. Facebook groups can be closed. Instagram posts can be hidden without notice. You won’t be told it’s censorship. You’ll just feel like no one’s listening.
Grok AI, under Musk, is not just a chatbot. It’s the new nerve centre of regime-aligned surveillance. Integrated directly with Starlink’s satellite network, Grok doesn’t just generate text — it watches, learns, and predicts. It hoovers up global data from Musk-controlled platforms — X, Tesla, Neuralink, Starlink, and anything linked to Colossus — and feeds that back into the regime’s digital nervous system. That means what you post, watch, drive, say, or even think aloud near one of these systems could be interpreted, scored, or flagged — automatically.
Grok is not just AI assistance — it’s AI observation.
Colossus, the supercomputer, is being built to outpace the computing power of any country in the world. It’s designed to train AI systems faster, on more data, with more complexity, than any government — or group of governments — can currently match. That gives whoever controls it a serious advantage: not just in tech innovation, but in surveillance, information control, and digital warfare. And it’s controlled by one man — Elon Musk — without oversight.
Starlink routes internet traffic through satellites, bypassing national networks and placing global connectivity under the control of one private individual. It currently has full service in roughly 40 countries, with rollouts underway in over 110 nations at various stages, and reaches users in more than 125 countries and territories.
But Starlink is not neutral infrastructure. It’s a system of control, with access selectively granted — or withheld — based on Musk’s political or strategic preferences.
Musk has restored internet access during blackouts — in Iran during protests, and in Bangladesh during a nationwide shutdown — and he's restricted its use, most notably in Ukraine. In 2022, after initially providing Starlink to Ukraine, Musk restricted access near Crimea to prevent its use in a Ukrainian drone strike. That decision, made unilaterally, revealed both the extent of his power, and the absence of any check on it.
In authoritarian hands, that same power could be used to block signals, monitor dissent, or cut off entire regions from communication — with no transparency, no legal boundaries, and no recourse.
Palantir already builds software that helps governments track people, spot patterns, and flag possible threats. In the U.S., it’s been used by ICE to find and detain undocumented immigrants, and by the LAPD to run predictive policing. In England it’s used by the NHS to monitor public health, and it’s been used by Israeli authorities to keep tabs on Palestinians.
This Executive Order opens the door for those tools to be used in new ways — and on a much wider scale. By bringing AI into cybersecurity, turning agency rules into code that enforces itself, and setting strict standards for which systems are allowed to connect, the government is building an environment where Palantir’s kind of software could run in the background of everyday life.
That might mean systems automatically scoring people’s behaviour, merging data from different government agencies, or flagging someone as a risk — with no human decision, no warning, and no way to check if it was a mistake.
Palantir doesn’t decide how its tools get used. But this Order could make them part of the digital infrastructure that controls how the whole system works.
We're not talking about a slight shift here. We're talking about a full-scale embedding of surveillance into everyday life. This EO seals the legal scaffolding: the platforms will enforce it, the algorithms will watch it, and the rest of us can be logged, labelled, nudged, and ultimately rendered compliant by systems we didn’t choose and can’t escape.
I’d originally planned to end this post with a look at the global implications of these four Executive Orders — but I think we’ve all taken in enough for one sitting (myself included). Now’s a good time to step away, let it settle, and give our nervous systems time to recalibrate. This post will help.
I’ll return with the global angle as soon as I’m able (hopefully tomorrow), followed by something I know many of you have been waiting for — an outline for moving forward.
The next post on global implications will appear under Vantage Point, and the strategy piece will be published via The Strategy Room. If you’d prefer to go straight to the action plan, you can adjust your subscription settings for Your Time Starts NOW and opt out of Vantage Point for now.
Meanwhile, I still haven’t seen anyone else covering the impact of these four Executive Orders. So if this post has helped you see what’s unfolding more clearly, please share it widely. And please take a moment to like or comment here, because visibility feeds the algorithm — and that’s how we push this into view.
In solidarity, always
— Lori
📌The support of paid subscribers is what makes this work sustainable. It helps me go deeper, widen the lens, and keep offering clear, practical insight without compromising on care.
From 16 June all paid subscribers will receive Priority Access to all new posts — seven days early — plus full archive access. Free subscribers will still get every post, but only for a seven-day window, starting a week after publication. If you’d like full, ongoing access, now’s a good time: I’m offering 42% off both monthly and annual plans, for life. This is the last offer I’ll be making for a while.
If this work matters to you and you’re in a position to support it, please consider becoming a paid subscriber. It helps me keep this space open, thoughtful, and fiercely independent.
I think some caution is due. Without diving too deep, this EO is a change to existing rules. It’s about how the Federal government operates and secures its own IT infrastructure. It’s not about the commercial internet, as far as I can tell. For example, item #1 isn’t new; it’s an update to how the government decides if software is safe enough to be installed on government networks. It wouldn’t surprise me if the federal cybersecurity community sees #1 as a weakening of existing rules like FedRAMP and various NIST standards. I suspect #1 is more of a gift to industry than an authoritarian move. Similarly, #7 is a “no duh it doesn’t apply.” You have to understand this is acknowledging that the government’s classified and unclassified networks operate under different rule books. Classified networks use burdensome standards that would be overkill if you weren’t protecting national security information.
This isn't progress; it's the silent wiring of an Orwellian cage.
The Executive Order doesn't just secure systems; it mandates compliance, turning our own devices and platforms into tools for pervasive, automated monitoring. Big Tech gains immunity while enforcing obedience, and encryption is neutered. This digital panopticon is being coded into existence.
Resistance isn't optional; we must act before submission becomes the only option left.