Announcing Data API Client v2
A complete TypeScript rewrite with drop-in ORM support, full mysql2/pg compatibility layers, and smarter parsing for Aurora Serverless v2's Data API.

I started building the Data API Client because I was frustrated. Working with the Amazon Aurora Serverless Data API felt clunky and awkward. Too much boilerplate, too many type annotations, and not enough focus on the way developers actually build applications. I wanted something that let me write queries the way I always had, without fighting the API.
V1 solved some of those problems. It wrapped the low-level calls and let you work with native JavaScript types instead of nested objects. It integrated named parameters and made transaction handling much easier. But the more I used it, the more I saw how much further it needed to go.
V2 is that step forward. It’s a complete rewrite that’s faster, smarter, fully typed, and designed to fit naturally into modern serverless applications. It now plays nicely with your favorite ORMs, handles Aurora’s scale-to-zero behavior automatically, and even lets you drop it in as a replacement for libraries like `mysql2` or `pg`.
Built for Modern TypeScript
The entire v2 library is written in TypeScript. Every function, parameter, and return type is strongly typed, so you always know what to expect.
```ts
import dataApiClient from "data-api-client";
import type { DataAPIClientConfig, QueryResult } from "data-api-client/types";

interface User {
  id: number;
  name: string;
  email: string;
  tags: string[];
}

const client = dataApiClient({
  secretArn: "arn:aws:secretsmanager:...",
  resourceArn: "arn:aws:rds:...",
  database: "mydb",
  engine: "pg",
});

const result: QueryResult<User> = await client.query<User>(
  "SELECT * FROM users WHERE id = :id",
  { id: 123 }
);
```
No more guessing which properties exist or how the results are shaped. The compiler now tells you.
A Smaller, Faster Core
Data API Client v2 now uses the AWS SDK v3 as a peer dependency. That change alone brings smaller bundle sizes, faster Lambda cold starts, and better tree-shaking. It also aligns with modern async/await patterns, which makes code cleaner and easier to reason about.
If you’re deploying to Lambda (Node.js 18+), the SDK is already part of the runtime, so you don’t even have to install it.
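If you deploy anywhere else, you declare the SDK yourself. A hypothetical dependency layout (the `@aws-sdk/client-rds-data` package is the SDK v3 Data API client; the version ranges are illustrative):

```json
{
  "dependencies": {
    "data-api-client": "^2.0.0",
    "@aws-sdk/client-rds-data": "^3.0.0"
  }
}
```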
Drop-In Compatibility
One of the most requested features was the ability to use Data API Client as a direct replacement for popular database clients. V2 adds compatibility layers for both `mysql2/promise` and `pg`, so you can often swap them out without touching the rest of your code.
PostgreSQL example
```ts
import { createPgClient } from "data-api-client/compat/pg";

const client = createPgClient({
  resourceArn: "arn:aws:rds:...",
  secretArn: "arn:aws:secretsmanager:...",
  database: "myDatabase",
});

const result = await client.query("SELECT * FROM users WHERE id = $1", [123]);
console.log(result.rows);
```
MySQL example
```ts
import { createMySQLConnection } from "data-api-client/compat/mysql2";

const connection = createMySQLConnection({
  resourceArn: "arn:aws:rds:...",
  secretArn: "arn:aws:secretsmanager:...",
  database: "myDatabase",
});

const [rows] = await connection.query("SELECT * FROM users WHERE id = ?", [123]);
```
Named placeholders are supported too:
```ts
// Enable named placeholders for cleaner query syntax
const connection = createMySQLConnection({
  resourceArn: "arn:aws:rds:...",
  secretArn: "arn:aws:secretsmanager:...",
  database: "myDatabase",
  namedPlaceholders: true, // Like mysql2's namedPlaceholders option
});

// Use :name syntax with object parameters
const [users] = await connection.query(
  "SELECT * FROM users WHERE name = :name AND age > :age",
  { name: "Alice", age: 25 }
);

// Works with INSERT, UPDATE, DELETE too
await connection.query("UPDATE users SET age = :newAge WHERE id = :id", {
  id: 123,
  newAge: 30,
});
```
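Under the hood, a compatibility layer has to translate `mysql2`-style placeholders into the Data API's named-parameter format. Here is a minimal sketch of the positional-placeholder case (a hypothetical helper to illustrate the idea, not the library's actual code):

```typescript
// Hypothetical sketch: rewrite mysql2-style positional '?' placeholders into
// Data API ':pN' named parameters, returning the new SQL plus a parameter map.
// Simplified: it does not skip '?' characters inside string literals.
function toNamedParams(
  sql: string,
  values: unknown[]
): { sql: string; params: Record<string, unknown> } {
  const params: Record<string, unknown> = {};
  let i = 0;
  const rewritten = sql.replace(/\?/g, () => {
    const name = `p${i}`;
    params[name] = values[i++]; // bind values in order of appearance
    return `:${name}`;
  });
  return { sql: rewritten, params };
}
```

The rewritten SQL and parameter map can then be passed straight to the Data API's named-parameter interface.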
Full ORM Support
These compatibility layers unlock seamless integration with modern ORMs and query builders like Drizzle and Kysely.
Here’s Drizzle with MySQL:
```ts
import { drizzle } from "drizzle-orm/mysql2";
import { eq } from "drizzle-orm";
import { mysqlTable, varchar, int } from "drizzle-orm/mysql-core";
import { createMySQLPool } from "data-api-client/compat/mysql2";

const pool = createMySQLPool({ ... });
const db = drizzle(pool as any);

const users = mysqlTable("users", {
  id: int("id").primaryKey().autoincrement(),
  name: varchar("name", { length: 255 }).notNull(),
  email: varchar("email", { length: 255 }).notNull(),
});

const result = await db.select().from(users).where(eq(users.id, 123));
```
Kysely works just as well and remains fully type-safe.
Solving the Scale-to-Zero Problem
Aurora Serverless v2 can pause when idle to save costs. That’s a powerful feature, but it also means the first query after a pause fails with a `DatabaseResumingException`. Most developers either write their own retry logic, accept the failure, or, worse, leave their clusters running 24/7, which defeats the purpose.
V2 handles this automatically. When the database is waking up, the client retries intelligently, with tuned delays based on real-world behavior. Most clusters are ready in 15–20 seconds, and you don’t have to write a single line of retry logic.
```ts
const client = dataApiClient({ ... });
const result = await client.query("SELECT * FROM users");
```
And here’s the key point: this isn’t just for direct `dataApiClient` queries. The automatic wake-up handling is built into everything: the raw client, the `mysql2` and `pg` compatibility layers, and any ORM you connect through them. Whether you’re running a single query, a transaction, or a complex set of operations through Drizzle or Kysely, the retry logic is always there, quietly doing its job.
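For illustration, the wake-up handling amounts to a retry loop along these lines (a simplified sketch, not v2's actual implementation; the real delays are tuned to observed resume times):

```typescript
// Simplified sketch of wake-up retry logic: re-run the operation while the
// database reports it is resuming, until a deadline is reached.
async function withWakeupRetry<T>(
  fn: () => Promise<T>,
  maxWaitMs = 30_000,
  delayMs = 1_000
): Promise<T> {
  const deadline = Date.now() + maxWaitMs;
  for (;;) {
    try {
      return await fn();
    } catch (err: any) {
      // Only retry the Data API's "database is resuming" error, and only
      // while we are still inside the wait window.
      if (err?.name !== "DatabaseResumingException" || Date.now() >= deadline) {
        throw err;
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```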
Better PostgreSQL Support
V2 now automatically converts PostgreSQL arrays into native JavaScript arrays:
```ts
const result = await client.query("SELECT tags FROM products WHERE id = :id", {
  id: 123,
});

console.log(result.records[0].tags); // ['new', 'featured', 'sale']
```
It works for all array types (including nested ones) so you can use them naturally without manual parsing.
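For context, PostgreSQL returns arrays over the wire as text literals like `{new,featured,sale}`, so "native arrays" means parsing that literal. A simplified sketch of the idea (not v2's actual parser; this version does not handle commas inside quoted elements or nested arrays):

```typescript
// Simplified sketch: parse a Postgres text array literal into a JS array.
// Handles NULL elements and basic double-quoted elements with escapes.
function parsePgTextArray(literal: string): (string | null)[] {
  const inner = literal.slice(1, -1); // strip the surrounding { }
  if (inner === "") return [];
  return inner.split(",").map((el) => {
    if (el === "NULL") return null;
    if (el.startsWith('"') && el.endsWith('"')) {
      // Unwrap quotes and unescape \" and \\ sequences
      return el.slice(1, -1).replace(/\\(.)/g, "$1");
    }
    return el;
  });
}
```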
Doesn’t the Data API Already Support a JSON Response Format?
It does. The AWS Data API's built-in JSON format option (via `formatRecordsAs: 'JSON'`) is great for simple use cases and a nice step forward. It converts most column types into native JavaScript values and removes some of the boilerplate that used to make working with the Data API painful. But it stops at the surface.
The parser in v2 goes much further. It preserves database-specific type information, handles complex types like PostgreSQL arrays and binary data intelligently, supports configurable date deserialization, and gives you richer metadata about query results. You also have control over output format: fully hydrated objects or raw arrays for performance-critical paths. In other words, the built-in JSON support is a convenience feature; v2’s parsing is designed for production systems that need flexibility, fidelity, and control.
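As one example of what configurable date deserialization means: the Data API hands back timestamps as strings, and a parser can either pass them through or promote them to `Date` objects. A hypothetical sketch of that choice (the names are illustrative, not v2's actual API):

```typescript
// Hypothetical sketch: a deserialization mode decides whether a Data API
// timestamp string is returned as-is or converted to a native Date.
type DateMode = "string" | "date";

function deserializeTimestamp(value: string, mode: DateMode): string | Date {
  if (mode === "string") return value;
  // Data API timestamps look like "2024-01-15 10:30:00" (UTC);
  // normalize to ISO-8601 before constructing the Date.
  return new Date(value.replace(" ", "T") + "Z");
}
```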
Tested for the Real World
My 25+ years of building software has taught me to take testing seriously. This release includes unit tests for the core features, integration tests against real Aurora clusters, ORM tests with Drizzle and Kysely, and wake-up handling tests against paused databases. Over 390 integration tests run as part of the build to make sure everything works as expected in production.
A Complete Example
Here’s a simple Lambda function that uses Drizzle with Data API Client v2:
```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, text } from "drizzle-orm/pg-core";
import { createPgClient } from "data-api-client/compat/pg";

const users = pgTable("users", {
  id: serial("id").primaryKey(),
  email: text("email").notNull(),
  name: text("name").notNull(),
});

const client = createPgClient({
  resourceArn: process.env.DB_RESOURCE_ARN!,
  secretArn: process.env.DB_SECRET_ARN!,
  database: "myapp",
});

const db = drizzle(client as any);

export const handler = async (event: any) => {
  const newUser = await db
    .insert(users)
    .values({
      email: event.email,
      name: event.name,
    })
    .returning();

  const activeUsers = await db.select().from(users);

  return { statusCode: 200, body: JSON.stringify({ newUser, activeUsers }) };
};
```
This code is type-safe, works with scale-to-zero databases, requires no VPC, and is ready for production.
Looking Ahead
I’m already exploring Prisma and TypeORM adapters, built-in query performance metrics, and optional query caching. The goal is to make Data API Client the simplest and most powerful way to work with Aurora Serverless v2, regardless of the stack you’re using.
A Few Final Thoughts
I built the first version of this library because I needed it. I built v2 because I wanted it to become something even bigger. Hopefully this helps close the gap between the flexibility of serverless and the power of relational databases.
There’s still more to do. I want this project to help developers rethink how they connect to data in a world where databases don’t have to be always-on. I want it to make writing queries feel natural again, whether you’re building with SQL directly, composing queries in Kysely, or relying on an ORM like Drizzle. And I want the infrastructure — the retries, the wake-ups, the scaling — to fade into the background so you can focus entirely on the application in front of you.
V2 isn’t the finish line. It’s a foundation for everything that comes next. And I’m excited to keep building on it with your feedback, your use cases, and your ideas.
Let me know what you think: https://github.com/jeremydaly/data-api-client