Hello!
OpenAI has just unleashed GPT-5.5 Instant as ChatGPT’s new default, promising a real-time revolution in how we interact with AI. Meanwhile, Next.js v16.2.6 is here, tackling multiple critical security vulnerabilities - an essential update for anyone concerned about their application's safety. And beware: TanStack's npm packages have been hit by the Mini Shai-Hulud supply-chain attack, putting a spotlight on the importance of securing your dependencies.
This week's stories underscore a pivotal theme: the need for rapid adaptability in an era of constant change. CTOs should keep an eye on evolving AI capabilities while ensuring robust security measures are in place to protect their tech stacks.
At Pagepro, we specialize in Next.js & Sanity migrations for enterprises.
Grab your coffee, settle in, and enjoy Frictionless!
In the Queue
Deepen Your Expertise
Next.js 16.2.6 is a small release with unusually high stakes. Vercel shipped the update with important security fixes addressing multiple vulnerabilities across Next.js 15 and 16, urging teams to upgrade immediately because earlier minor versions will not receive patches.

Vercel is turning firewall configuration into a prompt. Its WAF now lets developers create custom security rules using natural language, generating rate limits and traffic filters directly from plain-English descriptions inside the dashboard.
Security tooling is becoming far more accessible to non-specialists, lowering the barrier to configuring advanced protections, and infrastructure platforms are increasingly embedding AI directly into operational workflows, not just application development.

Next.js is becoming far less tied to a single hosting platform. In this podcast, Jimmy Lai from Vercel explains how the new Adapters API helps reduce long-standing self-hosting friction across:
- Cloudflare
- Netlify
- AWS Amplify
- other non-Vercel platforms
The API also lays the groundwork for features like partial pre-rendering and request-time feature flags.
Reduce Friction

A compromised npm token gave attackers publish access to TanStack packages, briefly turning trusted developer tools into a malware distribution channel. In this postmortem, the TanStack team details how the attack happened, how quickly it spread, and the steps they took to contain it.
The incident shows how fragile the JavaScript supply chain remains when a single maintainer credential can compromise widely used packages, and it highlights the growing need for stronger publisher security defaults like hardware-backed authentication, scoped permissions, and provenance verification.
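One concrete mitigation along those lines (our sketch, not something the postmortem prescribes) is moving publishes off developer machines entirely: publishing from CI with npm's `--provenance` flag attaches a signed attestation tying each release to the exact commit and workflow that built it, so a stolen token alone is no longer enough. A hypothetical GitHub Actions fragment:

```yaml
# Hypothetical workflow sketch - names and versions are ours, not TanStack's.
name: publish
on:
  push:
    tags: ["v*"]
permissions:
  contents: read
  id-token: write        # lets npm mint a provenance attestation via OIDC
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      # --provenance publishes a verifiable link from the package back to
      # this build, which registries and consumers can check
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Pair this with 2FA on writes (`npm profile enable-2fa auth-and-writes`) and scoped, short-lived tokens, and a single leaked credential stops being a full publish pipeline.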

The “agentic era” is changing what engineering teams optimize for. In this Stack Overflow piece, Braze CTO Jon Hyman explains how AI agents are pushing software development away from manual implementation and toward orchestration, verification, and systems thinking.

The 90-day disclosure policy was designed for a slower era of security research - before LLMs could help researchers and attackers find, analyze, and weaponize vulnerabilities in hours.
This piece argues that the old assumptions behind “responsible disclosure” are collapsing as AI compresses the time between patch release, exploit development, and active attack.
AI Corner
Coding agents are starting to change more than implementation speed - they’re reshaping how software gets improved after it ships. In this post, Ashpreet Bedi shares an agent platform designed to let coding agents:
- automatically pick up engineering tasks,
- run inside isolated sandboxes,
- review code,
- merge changes back into production workflows,
All this with minimal human intervention.

OpenAI is expanding its voice stack with a new set of API models built for real-time conversational systems. The release includes GPT-Realtime-2 for live voice interactions, GPT-Realtime-Translate for speech translation, and GPT-Realtime-Whisper for transcription and captioning.
The new models are designed to support:
- long-running voice conversations
- interruption handling during live dialogue
- real-time translation across 70+ languages
- live transcription and captioning
- lower-latency voice experiences in production apps

Anthropic is pushing Claude Managed Agents toward longer-running, more autonomous workflows. The latest update introduces “dreaming,” a research-preview feature that lets agents review past sessions to identify patterns and improve future behavior, alongside new capabilities for outcome verification, multi-agent orchestration, and webhooks.
The new features let developers build agents that can:
- learn across sessions
- delegate work to sub-agents
- verify outputs against quality thresholds
- parallelize complex workflows
- react to external events through webhooks

OpenAI is positioning GPT-5.5 Instant as a smarter, more dependable default model for ChatGPT, focused on clearer answers, lower hallucination rates, and better personalization. According to OpenAI, the model produces 52.5% fewer hallucinated claims on high-stakes prompts in areas like medicine, law, and finance, while also improving image analysis, STEM reasoning, and web search decisions.
Just Cool

What looked like a normal recruiting process turned into a real-world malware investigation. In this post, Andrii describes how a fake recruiter shared a Git repository through Google Drive that contained malicious Git hooks and obfuscated JavaScript designed to steal developer files and compromise machines.
The most unsettling part is how believable the setup was: realistic recruiter conversations, a legitimate-looking codebase, and a workflow that felt completely routine. If malicious code can hide inside ordinary developer processes this easily, it’s worth asking: how many tools in your workflow already execute with more trust than they should?
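If you ever have to open a repository like that for analysis, one minimal precaution (our sketch, not a step from the write-up) is to list and disable its hooks before running any Git command that might trigger them:

```shell
# Hypothetical inspection routine for a repo received from an untrusted
# source (e.g. a zipped checkout that already ships a .git directory).

# Hooks under .git/hooks run automatically on everyday actions (commit,
# checkout, merge...). List any non-sample files before doing anything else:
find .git/hooks -type f ! -name '*.sample'

# If anything shows up, neutralize hooks for this repo by pointing Git at a
# path that contains none:
git config core.hooksPath /dev/null
```

Note that a plain `git clone` never copies hooks from a remote; the risk is archives and shared drives that deliver a pre-populated `.git` directory, exactly the delivery method described here.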
Let’s Stay in Touch! 📨
Do you have any comments about this newsletter issue or questions you want to ask? Drop me a message or book a meeting.