What I Learned at Work this Week: JDBI

Mike Diaz
5 min read · Jan 15, 2023


Photo by Mohamed Almari: https://www.pexels.com/photo/silhouette-of-golden-gate-bridge-during-golden-hour-1485894/

Last week, I wrote about my experience using gRPC to process an API call that would make a SQL update. Since we were focusing on the gRPC part, we didn’t closely examine how we were using Java to make the update. This week, I had to debug an issue with that part of the code, so this weekend, we’re looking at JDBI.

JDBI

According to the docs, JDBI is a SQL convenience library for Java. It exposes two API styles that we can use to execute SQL: a fluent style and a SQL Object style. Since I already had code that was using JDBI, my goal was to figure out which API we were invoking and read the docs with that in mind. Of course, neither of the examples matched exactly what I was doing, but the fluent API came closer. Here’s the example from jdbi.org:

// using in-memory H2 database
DataSource ds = JdbcConnectionPool.create("jdbc:h2:mem:test",
        "username",
        "password");
DBI dbi = new DBI(ds);
Handle h = dbi.open();
h.execute("create table something (id int primary key, name varchar(100))");

h.execute("insert into something (id, name) values (?, ?)", 1, "Brian");

String name = h.createQuery("select name from something where id = :id")
        .bind("id", 1)
        .map(StringColumnMapper.INSTANCE)
        .first();

assertThat(name, equalTo("Brian"));

h.close();

This seems pretty straightforward. We:

  1. Initialize a “connection pool” and then pass that into our DBI (database interface), establishing the source we’re going to write to.
  2. Create a handle, which executes SQL.
  3. Run various methods of our handle. We can pass SQL as a direct string, pass multiple arguments to make variables dynamic, or even use createQuery together with bind, map, and first to return the result of our query.
  4. Clean up by closing the handle, which releases its connection.
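The open/execute/close lifecycle in steps 1 through 4 is really just manual resource management. As a plain-Java sketch (no JDBI here; FakeHandle and its methods are made up for illustration), the same shape can be written with try-with-resources, which guarantees the cleanup in step 4 even when a statement throws:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a JDBI Handle, just to show the lifecycle.
class FakeHandle implements AutoCloseable {
    private final List<String> executed = new ArrayList<>();
    boolean closed = false;

    void execute(String sql) { executed.add(sql); }  // step 3: run SQL
    List<String> statements() { return executed; }

    @Override
    public void close() { closed = true; }           // step 4: clean up
}

public class LifecycleSketch {
    public static void main(String[] args) {
        FakeHandle h = new FakeHandle();             // steps 1-2: acquire
        try (FakeHandle managed = h) {
            managed.execute("create table something (id int primary key)");
            managed.execute("insert into something (id) values (1)");
        } // close() runs here, exception or not
        System.out.println(h.closed);                // true
        System.out.println(h.statements().size());   // 2
    }
}
```

The documentation example closes the handle by hand instead; we’ll see below that the newer JDBI API takes this bookkeeping off our plate entirely.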

I’ve got something similar, so let’s examine the differences.

The DBI

The example code has four distinct steps, but mine (which I know is working) only has two:

public OptOutConfigRepository(@Qualifier("solutionsDBJdbi") Jdbi jdbi) {
    this.jdbi = jdbi;
}

public Optional<Integer> insertNewOptOutConfig(OptOutConfig optOutConfig) {
    try {
        return jdbi.withHandle(
            handle ->
                handle.createUpdate(INSERT_OPT_OUT_CONFIG)
                    .bindBean(optOutConfig)
                    .executeAndReturnGeneratedKeys("opt_out_id")
                    .mapTo(Integer.class)
                    .findFirst());
    } catch (JdbiException | NoSuchElementException e) {
        String message = "Error inserting opt out config.";
        LOG.error(message, e);
        throw new RepositoryException(message);
    }
}
  1. Define the Repository, whose constructor accepts a Jdbi instance as an argument.
  2. Define a method which executes a query using withHandle and catches an error if necessary (it was often necessary).

In the docs, we saw that a DBI should be instantiated with a connection pool, so where is that happening? I noticed that my OptOutConfigRepository constructor had a @Qualifier annotation that I had obviously copied from a different example, because I didn’t know what it did. When I looked it up on Baeldung, I found that it’s used to disambiguate between two similar Spring beans. Seeing that we were specifically calling out that this constructor would receive a bean called solutionsDBJdbi, I could look that up and find…

@Profile("!test")
@Bean
@Autowired
public Jdbi solutionsDBJdbi(@Qualifier("solutionsDBDataSource") DataSource postgresDataSource) {
    return Jdbi.create(postgresDataSource);
}

This isn’t my code, but I realized that I was referencing it when I used JDBI to access the Solutions database (solutionsDb). Here we can see the create method from the JDBI documentation being used to pass in a connection pool, postgresDataSource. The @Qualifier annotation is used in Spring to specify which bean we’re importing: we have defined multiple DataSources in our project, but this method should expect the one called solutionsDBDataSource. Now we know that the Jdbi instance passed into our Repository is constructed the same way the fluent API docs describe.

withHandle and bindBean

The next difference I found was that the documentation used handle and I was using withHandle. Good news there: withHandle is actually a more current option than what I had originally read. According to this part of the JDBI documentation, withHandle takes a callback and provides a fully managed handle that is correctly closed, with all related resources released, once the callback returns. This explains why my code didn’t have a close invocation. If you’re not sure what is meant by “callback,” check out the syntax here:

return jdbi.withHandle(
    handle ->
        handle.createUpdate(INSERT_OPT_OUT_CONFIG)
            .bindBean(optOutConfig)
            .executeAndReturnGeneratedKeys("opt_out_id")
            .mapTo(Integer.class)
            .findFirst());

The argument is a lambda — the callback — that names handle as its parameter and then uses createUpdate to execute SQL. This is where we see the next difference: bindBean vs bind.
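The callback pattern itself can be sketched in plain Java. This is a hypothetical reduction, not JDBI’s real implementation: the point is that withHandle owns the handle’s whole lifecycle, invokes our lambda with it, and closes it on the way out, which is why the repository code never calls close():

```java
import java.util.function.Function;

// Hypothetical sketch of the withHandle callback pattern; not JDBI's code.
class Handle implements AutoCloseable {
    boolean closed = false;
    int execute(String sql) { return 1; } // pretend one row was affected
    @Override public void close() { closed = true; }
}

public class WithHandleSketch {
    // The caller never opens or closes anything; the lambda receives a
    // managed handle, and close() runs even if the lambda throws.
    static <R> R withHandle(Function<Handle, R> callback) {
        try (Handle h = new Handle()) {
            return callback.apply(h);
        }
    }

    public static void main(String[] args) {
        int rows = withHandle(h -> h.execute("insert into something ..."));
        System.out.println(rows); // 1
    }
}
```

The real withHandle is generic in the same way: whatever the lambda returns (here, the Optional from findFirst) becomes the return value of the whole call.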

We use bind to attach variables to our queries. For example, the string might say:

INSERT INTO opt_out_configs (is_active, company_id)
VALUES (:isActive, :companyId)

And we have to bind isActive and companyId to the query, which we do as part of our handle chain. As we saw in the example, we can use bind and pass the parameter name and value as arguments. With bind, we can only attach one variable at a time, so if our query is large, it could get tedious. Instead, we can use bindBean to bind all of our values at once by passing a bean — a plain Java object with getters — as defined by our OptOutConfig class.
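Conceptually, bindBean scans the query for :named parameters and looks up a matching getter on the bean for each one. Here’s a toy sketch of that idea — emphatically not JDBI’s real implementation, and the Config bean is invented for illustration:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch of the idea behind bindBean: find :named parameters in the
// SQL, then reflect a matching getter off the bean for each one.
public class BindBeanSketch {
    private static final Pattern PARAM = Pattern.compile(":(\\w+)");

    static Map<String, Object> bindBean(String sql, Object bean) throws Exception {
        Map<String, Object> bound = new HashMap<>();
        Matcher m = PARAM.matcher(sql);
        while (m.find()) {
            String name = m.group(1); // e.g. "companyId"
            // JavaBean convention: property companyId -> getter getCompanyId()
            // (real bean introspection also accepts isX() for booleans)
            String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            Method g = bean.getClass().getMethod(getter); // throws if no such property
            bound.put(name, g.invoke(bean));
        }
        return bound;
    }

    // A sample bean whose properties line up with the query's parameters.
    public static class Config {
        public boolean getIsActive() { return true; }
        public String getCompanyId() { return "acme"; }
    }

    public static void main(String[] args) throws Exception {
        String sql = "INSERT INTO opt_out_configs (is_active, company_id) "
                + "VALUES (:isActive, :companyId)";
        System.out.println(bindBean(sql, new Config()));
    }
}
```

Note what happens when a parameter has no matching property: the lookup throws. That is the same failure mode the next section runs into.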

One important detail I learned is that the properties in my class must match the named parameters in my query. So, for example, if my class looks like this:

public class OptOutConfig {
    private final boolean optOutIsActive = true;
    private String companyId;

    public boolean getOptOutIsActive() { return optOutIsActive; }
    public String getCompanyId() { return companyId; }
}

bindBean will expose properties called optOutIsActive and companyId. But my query binds :isActive, not :optOutIsActive, so the bean won’t match and JDBI will throw an error.

After bindBean, the remaining chained methods are intuitive:

  • executeAndReturnGeneratedKeys: once the update runs, we want to return the newly generated opt_out_id key.
  • mapTo: We transform this key into an Integer.
  • findFirst: We return the first result, in case the update creates multiple rows. Looking at it now, this doesn’t really make sense because my query should always create exactly one new row. I probably copied this from code where it was needed, but when I go back to work next week, I’ll consider a refactor.
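The tail of that chain behaves like an ordinary Java stream pipeline, which is a handy mental model. In this sketch the raw generated keys are stand-ins for a result set (the list of strings is invented for illustration): mapTo plays the role of map, and findFirst yields an Optional that is empty when no key came back:

```java
import java.util.List;
import java.util.Optional;

// Sketch of the tail of the chain: mapTo(Integer.class) behaves like a
// map step, and findFirst yields an Optional -- empty when no row/key exists.
public class ChainTailSketch {
    static Optional<Integer> firstGeneratedKey(List<String> rawKeys) {
        return rawKeys.stream()
                .map(Integer::valueOf)   // mapTo(Integer.class)
                .findFirst();            // Optional<Integer>
    }

    public static void main(String[] args) {
        System.out.println(firstGeneratedKey(List.of("42"))); // Optional[42]
        System.out.println(firstGeneratedKey(List.of()));     // Optional.empty
    }
}
```

This also shows why the repository method returns Optional&lt;Integer&gt;: the empty case is what the NoSuchElementException branch of the catch block is guarding against.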

Building Blocks

Today, we went from a very basic, possibly outdated documentation example to a slightly more modern version that I wrote (well, pattern-matched). My version still has very simple functionality, but through small steps like these, we can increase the depth of our knowledge. As always, I’m now more confident that I’ll be able to handle more and more complex implementations of Java APIs.
