
Accelerating Science with JAX - Simulations, Physics, and Beyond

This post explores how JAX is reshaping modern scientific computing by enabling high-performance, differentiable, and hardware-accelerated workflows across a wide range of applications. From modeling complex systems and solving differential equations to training large-scale geospatial and machine learning models, it introduces powerful libraries such as Diffrax, fdtdx, jax-md, Equinox, and Jeo. Learn how JAX supports scalable, composable research pipelines that span domains such as engineering, biology, Earth observation, and AI.

Wednesday, June 4, 2025

9 min read

Implementing Anthropic’s Agent Design Patterns with Google ADK

Agentic systems are rapidly becoming a core design pattern for LLM-powered applications, enabling dynamic reasoning, decision-making, and tool use. Inspired by Anthropic's influential Building Effective Agents article, this post demonstrates how to implement the agent design patterns it proposes, such as prompt chaining, routing, and parallelization, using Google's open-source Agent Development Kit (ADK). The guide provides practical, hands-on examples that illustrate how these patterns work and where they can be effectively applied.
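To make the patterns concrete, here is a minimal, framework-free sketch of prompt chaining, the simplest of the patterns listed above: each step's output becomes the next step's input. The `fake_llm` function and the step names are illustrative stand-ins, not part of the ADK API; in the post itself, real ADK agents play these roles.

```python
from typing import Callable, List

# Hypothetical stand-in for a model call; a real agent would invoke an LLM.
def fake_llm(prompt: str) -> str:
    return f"response({prompt})"

def prompt_chain(steps: List[Callable[[str], str]], user_input: str) -> str:
    """Run each step on the previous step's output (prompt chaining)."""
    result = user_input
    for step in steps:
        result = step(result)
    return result

# Each step wraps the model with a task-specific prompt template.
outline = lambda text: fake_llm(f"Outline: {text}")
draft = lambda text: fake_llm(f"Draft from outline: {text}")

final = prompt_chain([outline, draft], "a post about agents")
```

Routing and parallelization follow the same shape: routing picks one step based on the input, while parallelization fans the input out to several steps and merges their outputs.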

Tuesday, April 29, 2025

26 min read

Getting Started with Ray on Google Cloud Platform

As AI and machine learning workloads continue to grow in scale and complexity, the need for flexible and efficient distributed computing frameworks becomes increasingly important. Ray is an open-source framework built to simplify the development and execution of distributed applications using familiar Python syntax. This post introduces how to get started with Ray on Google Cloud Platform, covering the fundamentals of Ray’s distributed architecture, core components, and scaling strategies. You’ll learn how to deploy and manage Ray clusters on Vertex AI, configure autoscaling, and run distributed Python and machine learning workloads with practical code examples.

Monday, March 31, 2025

20 min read

Building Trustworthy RAG Systems with In-Text Citations

Retrieval-Augmented Generation (RAG) has revolutionized how we build question-answering and content creation systems. By combining the power of large language models (LLMs) with external knowledge retrieval, RAG systems can generate more accurate, informative, and up-to-date responses. However, a critical aspect is often overlooked: trustworthiness. This is where citations come in. Without citations, a RAG system is a "black box." This post explains the importance of citations in RAG systems and provides sample implementations using Google's Generative AI SDK, LangChain, and LlamaIndex, with detailed code walkthroughs.
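The core idea can be sketched without any SDK: pair each generated sentence with the retrieved chunk that supports it, then emit numbered in-text markers plus a source list. The `Chunk` class and `format_with_citations` helper below are illustrative names, assuming the RAG pipeline has already matched sentences to supporting chunks; the post's SDK-based implementations handle that matching for you.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Chunk:
    source: str  # e.g. a document title or URL
    text: str

def format_with_citations(pairs: List[Tuple[str, Chunk]]) -> str:
    """Append a numbered citation marker to each sentence and list sources.

    `pairs` holds (sentence, supporting_chunk) tuples produced upstream.
    """
    sources: List[str] = []
    body: List[str] = []
    for sentence, chunk in pairs:
        if chunk.source not in sources:
            sources.append(chunk.source)   # first mention gets the next number
        idx = sources.index(chunk.source) + 1
        body.append(f"{sentence} [{idx}]")
    refs = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return " ".join(body) + "\n\nSources:\n" + refs

answer = format_with_citations([
    ("Paris is the capital of France.", Chunk("World Atlas", "...")),
    ("It hosted the 2024 Olympics.", Chunk("Olympics archive", "...")),
])
```

Because every claim carries a marker that resolves to a retrieved source, a reader can verify each sentence instead of trusting the model blindly.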

Monday, March 24, 2025

17 min read

Python for Data Science Series - Exploring the Syntax

In the last post, we discussed the importance of programming in the data science context and why Python is considered one of the top languages used by data scientists. In this week's post, we will explore the syntax of Python and create a simple program that uses Google Cloud Vision API to detect faces in an image.
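As a small taste of the syntax covered in that post, here is a short, self-contained snippet showing functions, comprehensions, and f-strings. The face-detection data is mocked: a real run calls the Cloud Vision API with credentials, which the full post walks through, so `count_faces` and the `sample` list here are illustrative only.

```python
# Count detections whose confidence clears a threshold, using a
# generator expression inside sum() -- a common Python idiom.
def count_faces(detections, threshold=0.8):
    """Return how many mock detections meet the confidence threshold."""
    return sum(1 for d in detections if d["confidence"] >= threshold)

# Mocked results standing in for a Cloud Vision API response.
sample = [{"confidence": 0.95}, {"confidence": 0.40}, {"confidence": 0.88}]

print(f"Faces found: {count_faces(sample)}")
```

Notice how little ceremony is needed: no type declarations, no semicolons, and string formatting built directly into the language with f-strings.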

Tuesday, October 25, 2022

20 min read