Just because it’s old doesn’t mean it’s bad.
At least not in the EU.
In a fairly recent development, European officials dusted off the Product Liability Directive (PLD). Originally a 1985 relic – now “modernized” for the digital age.
Mercifully for EU consumers, the new rules even give courts “presumptions” that a glitchy Facebook feed or a cranky chatbot is defective if the maker withholds info or the malfunction is obvious.
In short, tech firms must come up with evidence of innocence, or risk being presumed liable. A sort of “presumed defective until proven otherwise” principle – a suspiciously generous standard for injured users.
The PLD now expressly covers not just toasters and tractors, but software, AI systems, and even digital manufacturing files. In practice, an errant Instagram algorithm or buggy AI assistant can be subject to a court proceeding the same way as a burning toaster.
Victims of psychological harm or ruined data have new causes of action: the directive allows compensation for “medically certified damage to psychological health” (certainly bad news for Facebook and TikTok) and even “destruction or corruption of data”.
For decades, the tech relationship between America and Europe was simple.
The United States invented the future. Europe regulated the charger cable.
America launched tech revolutions, AI platforms, social media empires and trillion-dollar monopolies. Europe responded by producing documents explaining why cookie banners should contain clearer typography.
Washington projected power with aircraft carriers, sanctions and economic threats. Brussels projected power with cautiously worded declarations written in twenty-four languages.
Everybody knew their role.
But this time something is in the air. The bureaucrats have found a sensitive spot.
The Product Liability Directive has become the new hot topic.
Officially, the directive modernizes consumer protection rules for the digital age.
Unofficially, it is Europe (not so quietly) informing American tech giants that the European giant also has teeth.
The directive itself isn’t new (it entered into force in late 2024), but EU member states are still in the process of transposing it into national law, which means the real legal consequences are only now beginning to feel tangible.
No wonder American companies have started lobbying furiously: now that it’s becoming reality, they finally understand what Europe actually did.
The main concern is not that software and AI systems can now fall under product liability rules, though that alone would already make half of Silicon Valley reconsider a few decisions.
The truly horrifying part is that courts may presume liability more easily when products are excessively complex or when companies refuse to disclose information.
Which translates to: the EU’s baseline assumption is that trillion-dollar companies know more about their products than the injured consumer does, so it is the companies that must prove there was no wrongdoing on their side, rather than users having to prove that a system they cannot possibly understand was malfunctioning.
For decades, tech companies operated under a magical doctrine that transformed them into metaphysical abstractions whenever accountability approached.
If a social media platform destabilized teenagers’ mental health, spread extremism, amplified conspiracy theories and optimized addiction patterns with supercomputer precision, executives could simply claim that they were merely “neutral platforms”, “providing open space for everyone”.
Developers of AI systems could hide behind the “it’s a language model, not a doctor” statement whenever someone acted on medical advice or even outright manipulation conjured by AI.
And indeed, it is nothing more than that: a language model, not a doctor.
Which is precisely why Europe is becoming suspicious.
The revised liability framework quietly introduces the proposition that if a company builds a system too complex for ordinary people to understand, perhaps the burden of proof should not fall entirely on ordinary people.
This is where the cultural divide comes into focus.
The American regulatory framework relies heavily on grand gestures: tariffs, threats, sanctions, trade wars and theatrical press conferences.
In Europe, however, a company’s success depends on legal access to the European market. So instead of shouting, Brussels simply adjusts definitions inside directives and relies on enforcement through the courts.
Tech lobbyists have told President Trump that Europe’s “discriminatory” digital rules demand retaliation.
The US Trade Rep quickly complained on X that the EU “persisted in a continuing course of discriminatory and harassing lawsuits, taxes, fines, and directives against U.S. service providers,” and even hinted at trade sanctions.
Meanwhile, back in Europe, Commission President von der Leyen shrugged off Trump’s demands: “Europe will always decide for itself” on digital standards.
The European Union long ago realized it cannot realistically outcompete the United States in technological scale.
Europe was late to the social media revolution. And the smartphone ecosystem. The cloud race. It even arrived late to generative AI.
There is no European Google, no European Meta, no European OpenAI powerful enough to dominate globally.
So Europe’s approach is fundamentally different: if you cannot own the platforms, regulate the civilization built on top of them.
That philosophy produced GDPR, the Digital Markets Act, the Digital Services Act and the AI Act.
Individually, each regulation seemed annoying but survivable to American firms.
Together, however, they form an entire legal architecture designed to slowly eliminate the doctrine of technological immunity.
The revised Product Liability Directive fits perfectly into this ecosystem by asking a forbidden question: what if an algorithm can also be defective?