🤖 Agentic Architecture & AI Orchestration
The transition from static LLM integration to production-ready AI agents requires a new tier of 'agentic middleware' focused on continuous evaluation and extensible integration kits.
- A Developer's Guide to Production-Ready AI Agents: Five comprehensive guides providing practical frameworks and code samples to transition agents from experimental "vibe checks" to scalable, real-world action.
- From "Vibe Checks" to Continuous Evaluation: Engineering Reliable AI Agents: A rigorous engineering discipline utilising ADK and Vertex AI evaluation to ensure agent reliability through data-driven assessment.
- The New ADK Integrations Ecosystem: The Agent Development Kit (ADK) now supports third-party tool connectivity for GitHub, Notion, and Hugging Face, enabling more capable autonomous workflows.
- Turning API Sprawl into an Agent-Ready Catalog: Strategies for using Apigee API Hub to make complex API specifications readable for AI agents, effectively indexing your internal ecosystem for autonomous discovery.
- Automating GCP Safely with Gemini CLI and MCP: Details on building secure automation workflows using the Developer Knowledge Model Context Protocol (MCP) server to power Gemini CLI skills.
- Troubleshooting with Cloud Logging MCP Servers: Fully-managed MCP servers allow agents to connect directly to live infrastructure logs and official documentation to automate error analysis.
- Fine-Tuning Gemma 3 with Cloud Run Jobs: A cost-efficient approach to fine-tuning large models using serverless NVIDIA RTX 6000 PRO GPUs that scale to zero post-task.
- Give Agentic Chatbots Fast Long-Term Memory: Solutions for addressing real-time context updates and efficient retrieval of historical interaction data for persistent agent memory.
- Pro-level image generation with Nano Banana 2: High-speed, enterprise-accessible image generation and editing now available on Vertex AI and the Gemini CLI.
- Vertex AI Agent Engine GA: Secure, governed agentic environments are now generally available for regulated industries.
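The shift from "vibe checks" to continuous evaluation described above boils down to scoring agent outputs against a fixed case set on every change. As a minimal sketch of that idea (the `EvalCase`, `keyword_score`, and `run_eval` names are hypothetical, and real pipelines would use the ADK and Vertex AI evaluation services rather than keyword matching):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]  # crude proxy for a real rubric or judge model

def keyword_score(response: str, case: EvalCase) -> float:
    """Fraction of expected keywords present in the agent's response."""
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in response.lower())
    return hits / len(case.expected_keywords)

def run_eval(agent: Callable[[str], str],
             cases: list[EvalCase],
             threshold: float = 0.8) -> dict:
    """Score every case and gate on the mean score, CI-style."""
    scores = [keyword_score(agent(c.prompt), c) for c in cases]
    mean = sum(scores) / len(scores)
    return {"mean_score": mean, "passed": mean >= threshold}
```

Running this on every commit, rather than eyeballing a handful of chats, is the data-driven assessment the guide argues for.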
⚡ Modernising the Serverless Edge
Serverless environments are evolving beyond isolated functions into unified global systems with native service mesh integration and hardware-optimised runtimes.
- Cloud Service Mesh & Cloud Run: Utilising the Service Routing API to enable client-side load balancing and regional failover for highly-available internal connectivity.
- 5 More Things for Cloud Run Day 1: Advanced techniques for local development and sophisticated service management for new Cloud Run practitioners.
- Modernise with the Universal OS-Only Runtime: The new `osonly24` runtime allows developers to bypass traditional container builds by deploying pre-compiled binaries (Go, Rust, Dart) directly.
- Multi-Region Cloud Run with Automated Failover: Preview of automated failover and failback for external traffic using native Cloud Run service health monitoring.
- Direct VPC Egress for 2nd Gen Functions: The ability to configure Direct VPC egress for second-generation Cloud Functions has reached General Availability.
- Eventarc Advanced: Centralised Policy Meets Distributed Logic: Restoring governance to microservices and AI agent-based eventing without sacrificing developer agility.
- Deploying LLM Inference on GKE: Tackling production-level hurdles such as VPC policies and organisational constraints during Kubernetes-based LLM deployment.
- H4D VMs: High Performance Computing GA: 5th Gen AMD EPYC-based VMs with Cloud RDMA networking are now GA for tightly-coupled, multi-node workloads.
- Dataflow support for C4A Arm Processors: Generally available support for Arm-based VMs in Dataflow, offering improved price-performance and power efficiency.
- Cloud Deploy: New Region Availability: Expansion of Cloud Deploy services to the `asia-southeast3` (Bangkok) region.
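The multi-region failover pattern above can be illustrated with a small client-side sketch: try regional endpoints in priority order and fall through on transport errors. This is an assumption-laden simplification (the `call_with_failover` helper is hypothetical; managed Cloud Run failover happens server-side via health monitoring, not in client code):

```python
from typing import Callable, Sequence

class AllRegionsFailed(Exception):
    """Raised when every regional endpoint rejected the request."""

def call_with_failover(regions: Sequence[str],
                       send: Callable[[str], str]) -> tuple[str, str]:
    """Try each region in priority order; return (region, response) on first success."""
    errors: dict[str, Exception] = {}
    for region in regions:
        try:
            return region, send(region)
        except Exception as exc:  # a real client would catch transport errors only
            errors[region] = exc
    raise AllRegionsFailed(errors)
```

The managed offering moves exactly this retry-and-reroute logic out of every client and into the platform.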
🔐 Identity Frontier & Zero Trust
Modern IAM is shifting toward 'Autonomous Trust,' where identity provides the perimeter and short-lived tokens eliminate the risk of static credential leaks.
- Okta Cross App Access (XAA): A new framework allowing AI agents to securely act on behalf of users across enterprise apps without credential exposure or excessive consent prompts.
- Okta Fine-Grained Authorisation (FGA): A globally replicated, scalable managed service that allows developers to manage permissions for millions of users and billions of resources via API.
- Connecting Azure DevOps to GCP with Workload Identity Federation: Securing multi-cloud CI/CD by leveraging short-lived OIDC tokens, removing the need for risky static service account keys.
- Okta Individual & Time-Bound Entitlements: Admins can now grant specific, expiring access to individual users without creating broad entitlement bundles, supporting a stricter Least Privilege model.
- Okta OIN API Service Integration Wizard: ISVs can now directly configure and test API service integrations within the Okta Integration Network, accelerating secure app-to-app connectivity.
- Saving $140K in Silently Ingested Logs: A FinOps guide to implementing exclusion filters and routing essential logs to GCS to avoid default ingestion costs.
- Popular IDE extension with 1.6M downloads is leaking Google Cloud credentials: Warning regarding a popular IDE extension storing tokens in plaintext; immediate revocation is recommended.
- VPC Service Controls Support: New preview support for VPC Service Controls to minimise data exfiltration risks from Google Security Operations.
- Okta FastPass: Passwordless Biometrics: Adoption of phishing-resistant authentication using on-device biometrics like FaceID and TouchID to modernise the sign-in experience.
- Security Command Center: App Hub Integration: Enhanced filtering allows security teams to view findings and compliance data specific to resources registered within App Hub.
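The common thread in the items above is replacing static credentials with short-lived, verifiable tokens. A toy HMAC-signed token broker shows why expiry changes the blast radius of a leak (the `TokenBroker` class is purely illustrative; Workload Identity Federation and Okta issue standards-based OIDC tokens, not this scheme):

```python
import hashlib
import hmac
import time

class TokenBroker:
    """Mints short-lived, HMAC-signed bearer tokens instead of static keys."""

    def __init__(self, signing_key: bytes, ttl_seconds: int = 300):
        self._key = signing_key
        self._ttl = ttl_seconds

    def mint(self, subject: str, now: float | None = None) -> str:
        now = time.time() if now is None else now
        expiry = int(now + self._ttl)
        payload = f"{subject}.{expiry}"
        sig = hmac.new(self._key, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    def verify(self, token: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        try:
            subject, expiry, sig = token.rsplit(".", 2)
        except ValueError:
            return False
        payload = f"{subject}.{expiry}"
        expected = hmac.new(self._key, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and now < int(expiry)
```

A leaked token from this broker is worthless minutes later; a leaked service account key, like those found in the plaintext-storing IDE extension, is valid until someone notices and revokes it.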
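The $140K log-savings item rests on one mechanism: an exclusion filter that drops noisy entries before they hit billable ingestion. A minimal in-memory sketch of that predicate-based filtering (the `excluded_bytes` helper and sample log shape are hypothetical; real Cloud Logging filters use the Logging query language on sinks):

```python
from typing import Callable

LogEntry = dict  # e.g. {"severity": "DEBUG", "payload": "..."}

def excluded_bytes(entries: list[LogEntry],
                   exclude: Callable[[LogEntry], bool]) -> tuple[list[LogEntry], int]:
    """Apply an exclusion predicate; return (kept entries, bytes saved)."""
    kept, saved = [], 0
    for entry in entries:
        if exclude(entry):
            saved += len(entry["payload"].encode())  # approximate billable size
        else:
            kept.append(entry)
    return kept, saved

# Example predicate: drop health-check DEBUG chatter, keep everything else.
drop_debug = lambda e: e["severity"] == "DEBUG"
```

Multiplying the saved bytes by the per-GiB ingestion rate is exactly the FinOps arithmetic the article walks through; essential-but-cold logs get routed to cheaper GCS instead of dropped.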
📊 High-Performance Data & Analytics
The convergence of relational and graph data models, alongside continuous query engines, is transforming databases into real-time context engines for AI.
- Uncovering Hidden Patterns with Spanner Graph: Leveraging new graph capabilities with a single DDL statement to transform relational data into traversable insights like popular product pairings.
- Serving Iceberg Lakehouses with Spanner Columnar Engine: A new preview feature allowing low-latency, real-time AI workloads to serve cold data directly from Iceberg lakehouses.
- BigQuery Continuous Queries to Spanner: Generally available support for streaming BigQuery data directly to Spanner in real time for immediate transactional reads.
- PayPal's Record Teradata Migration: Insights from one of the largest data migrations ever undertaken to provide the bedrock for financial service GenAI innovation.
- BigQuery: Reducing Spend by 40% with Slot Eaters: A guide to using advanced runtime metrics and partitioned join optimisation to identify and eliminate expensive query patterns.
- Connect Serverless Spark to Antigravity Notebooks: Configuring Spark runtime templates and executing PySpark code directly within an agentic development environment.
- AlloyDB AI Intelligent SQL Functions: New array-based functions for semantic ranking, forecasting, and generation directly within SQL workflows.
- Spanner: Managed Autoscaler & Hotspot Insights: Native autoscaling based on total CPU utilisation and new "unsplittable reason" metrics to identify schema anti-patterns.
- BigQuery Materialised Views Cost Reduction: Optimising dashboard performance by pre-calculating results and using BigQuery's intelligent query rerouting.
- NetApp Volumes: Flex Unified Large Volumes: Preview support for 20 PiB storage pools delivering up to 22 GiBps throughput for massive datasets.
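The Spanner Graph item's "popular product pairings" insight is, at heart, a co-occurrence count over order baskets. A plain-Python sketch of that computation (the `popular_pairs` function is illustrative only; in Spanner Graph the same question is answered by a graph traversal over relational data exposed via DDL):

```python
from collections import Counter
from itertools import combinations

def popular_pairs(orders: list[list[str]],
                  top_n: int = 3) -> list[tuple[tuple[str, str], int]]:
    """Count how often two distinct products co-occur in the same order."""
    pair_counts: Counter = Counter()
    for basket in orders:
        # sorted() + set() gives each unordered pair a canonical key, once per order
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts.most_common(top_n)
```

The graph model's advantage is doing this traversal at query time over live transactional data, rather than batch-materialising pair counts as this sketch does.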