Hello!
AI dominated the week again - but the more interesting story is what’s changing underneath the headlines. OpenAI loosened its exclusive cloud ties with Microsoft, GPT-5.5 narrowly took a benchmark lead over Claude Mythos Preview, and Next.js took another step toward reducing platform lock-in. On the infrastructure side, a compact NVIDIA cluster showed how much serious AI compute can now fit into a smaller footprint.
The common thread: flexibility is becoming a competitive advantage. The ability to switch providers, run across platforms, and adapt architecture faster is starting to matter as much as raw capability.
At Pagepro, we help enterprises make those shifts happen through Next.js and Sanity migrations built for scale. Pour a coffee and enjoy this week’s Frictionless!
In the Queue
Deepen Your Expertise

In the latest Syntax podcast, Scott and Wes chat with Tim Neutkens, Lead Dev of Next.js, and Jimmi Lai, Head of Next.js, about the new Adapters API.
This update means more hosting choices across platforms like Cloudflare and Netlify, breaking free from vendor lock-in.

Messaging apps are where the magic happens for conference organizers, but navigating between chat and content backends can be a nightmare. This Telegram agent bridges the gap and interacts with your content backend seamlessly.
This video walks through the three layers that make it work:
Sanity Content Agent for knowledge and permissions
Vercel AI SDK for streaming and conversation
Chat SDK for platform routing

Next.js just upped its game with native page transitions, saying goodbye to Framer Motion dependencies. In this video, you'll learn how to use the new View Transition API in Next.js with React's built-in ViewTransition component to add buttery-smooth 60fps page and route animations with zero JavaScript overhead.
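If you want to try it before watching, opting in is a one-line config change. A minimal sketch, assuming the current experimental setup (the flag name below comes from the Next.js canary docs and may change before it ships as stable):

```javascript
// next.config.js - opt in to the experimental View Transitions support
// NOTE: `viewTransition` is an experimental flag and may be renamed or
// removed in a future Next.js release.
module.exports = {
  experimental: {
    viewTransition: true,
  },
};
```

With the flag on, pages can wrap animating content in React's experimental component, currently exported as `unstable_ViewTransition` from `react`.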
Vercel has rolled out Native Deployment Checks for every deployment. Now, you can lint and typecheck alongside your build process, utilizing existing GitHub and Marketplace integrations. This keeps your codebase sharp and errors minimal.
Reduce Friction

One-size-fits-all enterprise software is no longer the default; companies are starting to build tailored systems with AI.
Enterprise software is ditching the cookie-cutter approach. Instead of settling for rigid SaaS, companies can now build or blend to fit their unique needs. Think of it as tailoring your tech stack like a suit - cut to fit without compromise.
Why it's happening:
Adaptability is king: custom solutions mean you can pivot faster when the market shifts.
Team collaboration gets a boost: building together unites goals and vision, aligning tech with strategy.

AI projects are stalling. Why? Nimisha Asthagiri of Thoughtworks reveals it’s not about speed, but about asking the right questions. Instead of slapping AI on old systems, success comes from reimagining possibilities.
1) Stop asking 'How do we go faster?' → Start asking 'What new things can we build?'
2) Fundamentals matter. Test-driven development and organizational literacy are making a comeback.

Walking into a team that didn't pick you isn't a comfy stroll. It's like being a stranger at a family dinner. The article details one engineering manager’s journey from an abandoned plan to finding the right approach.
AI Corner

AI code audits just got a serious upgrade. Claude Code's UltraReview tool performs full codebase audits like a senior engineer. In the demo, watch it spot bugs faster than you can say 'merge conflict.'
You can slash debugging time, freeing your devs for real challenges. It outpaces GitHub Copilot in detailed analysis, setting a new standard for AI in development.

Microsoft and OpenAI have rewritten one of tech’s most important partnerships. Under the new deal, Microsoft keeps access to OpenAI’s technology and remains its primary cloud partner, but exclusivity is gone - allowing OpenAI to offer models through other cloud providers.

OpenAI has launched GPT-5.5, and the headline is how tight the frontier race has become. According to VentureBeat, the model scored 82.7% on Terminal-Bench 2.0, narrowly edging Anthropic’s Claude Mythos Preview at 82.0% while clearly surpassing Claude Opus 4.7 (69.4%).
Just Cool

A small AI cluster no longer needs data-center power or enterprise budgets. ServeTheHome built an 8-node NVIDIA GB10 cluster with 1TB of shared memory, 160 Arm cores, and 400GbE networking - powerful enough to run massive local models like Kimi K2.5 and K2.6.
It’s a glimpse of how serious AI infrastructure is shrinking in size and complexity.

A popular open-source package with more than 1 million monthly downloads was compromised and used to steal user credentials, turning a trusted dependency into an attack channel overnight. The malicious update harvested sensitive data from developer environments after attackers exploited weaknesses in the project’s workflow.
Let’s Stay in Touch! 📨
Do you have any comments about this newsletter issue or questions you want to ask? Drop me a message or book a meeting.






