You know, every now and then, we hit a moment that feels like a hinge in the timeline of technology—a place where the rules shift, the landscape cracks, and what once felt solid turns fluid. That’s where we are right now.
Let’s start with Meta. The FTC is dragging them into court, challenging their acquisitions of Instagram and WhatsApp. It’s not just about antitrust—it’s about rewriting the rules of how tech giants grow. If the government wins, we’re looking at a potential breakup. And if they lose? Well, it’s open season for consolidation. Either way, every big tech CEO is watching this like it’s the Super Bowl of regulation. Meanwhile, Meta’s trying to play nice in Europe, promising to align AI with EU values. That’s not altruism—it’s survival. Regulation is coming, and Meta is racing to look responsible before the hammer drops.
And then there’s OpenAI. On one hand, they’re talking about ID verification for future access—trying to close the door on misuse before it happens. On the other, researchers are out here jailbreaking ChatGPT like it’s an escape room. Toxic content, hallucinated code, phantom dependencies—it’s clear we haven’t even scratched the surface of what “safe AI” really means. Zoom, Apple, even Alibaba—they’re all moving fast. Zoom’s integrating real-time AI search. Apple’s dominating global smartphone share. Alibaba’s building voice-in-the-dashboard experiences that make your car smarter than your old desktop.
But here’s what worries me: speed without brakes. As Ian Hogarth put it, we’re racing toward God-like AI with the urgency of kids on scooters heading for a cliff. We need a global body—something with CERN-like credibility—to slow this down and think this through.
Because if we don’t? We’re going to trade convenience for control, innovation for instability, and intelligence for unchecked influence.
So what’s the call to action? Simple: be skeptical. Ask questions. And if you’re in a position to influence policy, ethics, or design—lean in. The future doesn’t need more passengers. It needs drivers. And you need THE COMUNICANO!!!
Andy Abramson
Meta Watch
Meta Faces Antitrust Showdown Over Instagram and WhatsApp (Axios)—The FTC's landmark antitrust case against Meta kicks off in federal court, aiming to undo its purchases of Instagram and WhatsApp. The government alleges the acquisitions crushed competition and fortified Meta's dominance. Meta argues the deals were legal and pro-consumer. This trial could reshape Big Tech’s merger playbook and redefine antitrust enforcement. Stakes are high: success for the FTC may lead to forced divestitures. Read more here
Meta Pledges to Make AI Work for Europeans (Meta Newsroom)—Meta is aligning its AI practices with EU values ahead of the region’s sweeping AI Act. New transparency commitments, data protections, and user control options will be rolled out across its platforms. This is part of Meta’s effort to gain trust and ensure its AI services remain accessible in a tightly regulated environment. Read more here
Regulatory Watch
France Threatens to Pull Plug on Pornhub (Politico.eu)—France is ready to block Pornhub and other adult sites this summer unless they comply with strict age verification laws. The move follows a years-long battle between regulators and platforms over protecting minors from explicit content. France’s digital watchdog is leading enforcement, with support from the government. If companies don’t adopt verified age checks, ISPs could be ordered to restrict access — a first in Europe. The crackdown may spur a broader shift in digital content regulation across the EU. Read more here
OpenAI Watch
OpenAI Considers ID Verification for Access to Future Models (TechCrunch)—OpenAI is exploring requiring users to submit verified identification to access upcoming AI models via its API. The idea is to prevent misuse, particularly around misinformation and automated abuse. Critics warn this raises privacy and access equity concerns. OpenAI’s move could influence industry norms on user verification in generative AI, especially in sensitive or high-risk deployments. Read more here
Researchers Prove How to Make ChatGPT Consistently Toxic (TechCrunch)—A group of researchers has discovered how to coax ChatGPT into producing toxic content predictably, bypassing safety filters using structured prompts. The work highlights weaknesses in reinforcement learning safeguards and shows how alignment can be undone with precision. It’s a stark reminder that making AI “safe” is still a fragile balance. Read more here
AI Watch
We Must Slow the Race Toward God-like AI (Financial Times)—Ian Hogarth, AI investor and co-author of the State of AI Report, calls for urgent regulation and a global pause on the unchecked race toward artificial general intelligence. He argues that AGI, with its superintelligent, autonomous capabilities, poses risks on par with nuclear weapons. Through investor insights, safety research gaps, and behind-the-scenes anecdotes from leaders at DeepMind and OpenAI, Hogarth makes the case for a CERN-style intergovernmental oversight body. His message is clear: without transparency, alignment, and restraint, the AGI finish line may be our undoing. Read more here
AI Hallucinations Pose New Software Supply Chain Threat (BleepingComputer)—Security experts are raising alarms over AI-generated code “hallucinations” — fake dependencies that appear real. These artifacts are being blindly copied into projects, creating a new supply chain vulnerability. Attackers could exploit these phantom packages to inject malicious code. It’s a wake-up call for developers relying on AI copilots: validate before you copy-paste. Read more here
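What does "validate before you copy-paste" look like in practice? Here's a minimal sketch (mine, not from the article) using only the Python standard library: it parses an AI-suggested snippet and flags any top-level import that doesn't resolve in your environment — a cheap first tripwire for phantom dependencies, though a flagged name could also just be a package you haven't installed yet.

```python
import ast
import importlib.util

def find_unresolvable_imports(source: str) -> list[str]:
    """Flag top-level imports in a code snippet that don't resolve
    in the current environment. An unresolved name is either a typo,
    a package you haven't installed, or a dependency the AI invented
    outright -- investigate before you `pip install` it blindly."""
    roots: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                roots.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            roots.add(node.module.split(".")[0])
    # find_spec returns None when a top-level module can't be found.
    return sorted(r for r in roots if importlib.util.find_spec(r) is None)

# "phantom_ai_utils" is a made-up example of a hallucinated package.
snippet = "import json\nimport phantom_ai_utils\n"
print(find_unresolvable_imports(snippet))  # flags the phantom package
```

The real attack (often called "slopsquatting") is that someone pre-registers the hallucinated name on PyPI or npm, so "it installs fine" proves nothing — checking registry metadata, download history, and maintainers still matters.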
Money Watch
UK Startups Eye US Move as Funding Hits Post-Pandemic Low (Financial Times)—With UK tech investment at its lowest since 2020, a growing number of British startups are packing their bags — or at least their incorporation papers — for the US. In 2024, UK startups raised just £16.2bn, while Silicon Valley pulled in more than £65bn. Founders cite deeper capital pools, investor ambition, and tax incentives as key drivers. Delaware C-corps are becoming the default structure even for London-based teams. If the UK doesn’t close the venture capital gap soon, it risks an innovation brain drain — and losing its claim to Europe’s tech crown. Read more here
Mobile Watch
Global Smartphone Market Grows, Apple Claims Q1 Crown (Counterpoint Research)—The global smartphone market grew 3% year-on-year in Q1 2025, with Apple claiming the top spot for the first time in a Q1 period. The iPhone 16e, along with success in India and Japan, drove Apple’s rise. Yet, ongoing macro uncertainty and slower premium demand could hinder future growth. Brands are bracing for a turbulent second half of the year. Read more here
EV Watch
Alibaba Bets Big on Auto AI With Nio and BMW (SCMP)—Alibaba is pushing deeper into the automotive space by embedding its large language model, Qwen, into smart cockpits for Nio and infotainment systems for BMW. It’s part of a strategy to lead AI adoption in mobility, signaling a new front in the EV arms race. With generative AI onboard, your car may soon talk as smart as your phone. Read more here
Zoom Watch
Zoom Adds Perplexity API to Supercharge Its AI Companion (Perplexity)—Zoom just leveled up its AI Companion by integrating Perplexity's real-time web search API. The result? Users can now ask questions during meetings and get contextual, citation-backed answers without ever leaving Zoom. It's a play to keep users engaged and informed in-platform, and a glimpse into how AI copilots are becoming meeting mainstays. Productivity meets AI-enabled search, seamlessly. Read more here
Media Watch
NPR Walks Off Twitter After “Government-Funded Media” Label (NYTimes)—After being labeled “government-funded media” by Twitter in 2023, NPR chose to leave the platform entirely. Though the label was later softened, NPR said the tag misrepresented its editorial independence. The move marked one of the earliest high-profile media exits from Twitter under Musk’s leadership — and triggered broader conversations about journalistic credibility in the algorithm age. Read more here
Security Watch
Discord Leaker Worked at U.S. Military Base, Now Facing 15 Years (Forbes)—Jack Teixeira, a 21-year-old Air National Guardsman, was behind one of the largest intelligence breaches in recent memory — leaking classified documents via Discord. Working from a secure base in Massachusetts, Teixeira shared top-secret Pentagon files covering Ukraine and more. Arrested in 2023, he’s now serving a 15-year sentence. The case underscored the vulnerabilities in military information controls and the risks of insider threats in a digital-first era. Read more here
Hacker Watch
The Most Dangerous Hackers You’ve Never Heard Of (Wired)—A shadowy group known as APT31, linked to China’s Ministry of State Security, is now on the radar of Western governments for a string of aggressive cyberattacks. While lesser-known than Russia’s Fancy Bear or North Korea’s Lazarus Group, APT31 has quietly pursued election interference, industrial espionage, and surveillance operations across Europe and North America. What sets them apart is stealth: phishing campaigns so sophisticated they mimic MFA prompts and abuse legitimate services. As geopolitical cyberwarfare intensifies, this group is becoming one of the most dangerous digital threats operating in the dark. Read more here