dependabot[bot] opened a new pull request, #2768:
URL: https://github.com/apache/camel-kamelets/pull/2768

   Bumps 
[com.databricks:databricks-jdbc](https://github.com/databricks/databricks-jdbc) 
from 3.2.1 to 3.3.1.
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/databricks/databricks-jdbc/releases">com.databricks:databricks-jdbc's releases</a>.</em></p>
   <blockquote>
   <h2>Release Databricks OSS JDBC driver version 3.3.1</h2>
   <h3>Added</h3>
   <ul>
   <li>Added <code>DatabaseMetaData.getProcedures()</code> and 
<code>DatabaseMetaData.getProcedureColumns()</code> to discover stored 
procedures and their parameters. Queries 
<code>information_schema.routines</code> and 
<code>information_schema.parameters</code> using parameterized SQL for both SEA 
and Thrift transports.</li>
   <li>Added connection property <code>OAuthWebServerTimeout</code> to 
configure the OAuth browser authentication timeout for U2M (user-to-machine) 
flows; the previously hardcoded 1-hour timeout is replaced by a 120-second 
default.</li>
   <li>Added connection property <code>UseQueryForMetadata</code> to use SQL 
SHOW commands instead of Thrift RPCs for metadata operations (getCatalogs, 
getSchemas, getTables, getColumns, getFunctions). This fixes incorrect wildcard 
matching where <code>_</code> was treated as a single-character wildcard in 
Thrift metadata pattern filters.</li>
   <li>Added connection property <code>TreatMetadataCatalogNameAsPattern</code> 
to control whether catalog names are treated as patterns in Thrift metadata 
RPCs. When disabled (the default), unescaped <code>_</code> in catalog names is 
escaped to prevent single-character wildcard matching. This aligns with the 
JDBC spec, which treats catalogName as an identifier rather than a pattern.</li>
   </ul>
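The new connection properties above can be supplied through an ordinary JDBC `Properties` object. A minimal sketch follows; the property names come from the release notes, but the `"1"`/`"0"` value format and the commented-out JDBC URL are illustrative assumptions, not taken from the driver's documentation:

```java
import java.util.Properties;

public class DatabricksConnProps {
    /** Assembles the 3.3.1 connection properties named in the release notes. */
    public static Properties newProps() {
        Properties props = new Properties();
        // OAuth browser timeout for U2M flows, in seconds (new default: 120)
        props.setProperty("OAuthWebServerTimeout", "120");
        // Prefer SQL SHOW commands over Thrift RPCs for metadata operations
        props.setProperty("UseQueryForMetadata", "1");
        // Default: treat catalogName as an identifier, not a pattern
        props.setProperty("TreatMetadataCatalogNameAsPattern", "0");
        return props;
    }

    public static void main(String[] args) {
        // A real connection would then be opened with something like:
        // DriverManager.getConnection("jdbc:databricks://<host>:443/default", newProps());
        System.out.println(newProps().getProperty("OAuthWebServerTimeout"));
    }
}
```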
   <h3>Updated</h3>
   <ul>
   <li>Bumped <code>com.fasterxml.jackson.core:jackson-core</code> from 2.18.3 
to 2.18.6.</li>
   <li>Fat jar now routes SDK and Apache HTTP client logs through Java Util 
Logging (JUL), removing the need for external logging libraries.</li>
   <li>Added Apache Arrow on-heap memory management for processing Arrow query 
results. Previously, Arrow result processing was unusable on JDK 16+ without 
passing the <code>--add-opens=java.base/java.nio=ALL-UNNAMED</code> JVM 
argument, due to stricter encapsulation of internal APIs. With this change, no 
JVM argument is required: the driver automatically falls back to an on-heap 
memory path that uses standard JVM heap allocation instead of direct memory 
access.</li>
   <li>Log timestamps now explicitly display timezone.</li>
   <li><strong>[Breaking Change]</strong> 
<code>PreparedStatement.setTimestamp(int, Timestamp, Calendar)</code> now 
properly applies Calendar timezone conversion using the LocalDateTime pattern 
(in line with <code>getTimestamp</code>). Previously, the Calendar parameter 
was ignored.</li>
   <li><code>DatabaseMetaData.getColumns()</code> with null catalog parameter 
now retrieves columns from all available catalogs when using SQL Execution 
API.</li>
   </ul>
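The [Breaking Change] entry means the Calendar argument now actually shifts the wall-clock value the driver transmits. A hypothetical helper sketching the LocalDateTime-style conversion semantics (this is not the driver's own code):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.Calendar;
import java.util.TimeZone;

public class CalendarTimestampSketch {
    /**
     * Interprets a Timestamp (an absolute instant) as a wall-clock
     * LocalDateTime in the Calendar's time zone -- the value a
     * Calendar-aware setTimestamp would now effectively send.
     */
    public static LocalDateTime toSessionLocal(Timestamp ts, Calendar cal) {
        return ts.toInstant().atZone(cal.getTimeZone().toZoneId()).toLocalDateTime();
    }

    public static void main(String[] args) {
        Timestamp epoch = Timestamp.from(java.time.Instant.EPOCH);
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        Calendar ist = Calendar.getInstance(TimeZone.getTimeZone("Asia/Kolkata"));
        // The same instant yields different wall-clock values per Calendar zone:
        System.out.println(toSessionLocal(epoch, utc)); // 1970-01-01T00:00
        System.out.println(toSessionLocal(epoch, ist)); // 1970-01-01T05:30
    }
}
```

Before this release, the ignored Calendar meant both calls above would have behaved identically.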
   <h3>Fixed</h3>
   <ul>
   <li>Fixed statement timeout handling when the server returns 
<code>TIMEDOUT_STATE</code> directly in the <code>ExecuteStatement</code> 
response (e.g. a query queued under load): the driver now throws 
<code>SQLTimeoutException</code> instead of 
<code>DatabricksHttpException</code>.</li>
   <li>Fixed Thrift polling infinite loop when server restarts invalidate 
operation handles, and added configurable timeout 
(<code>MetadataOperationTimeout</code>, default 300s) with sleep between polls 
for metadata operations.</li>
   <li>Fixed <code>DatabricksParameterMetaData.countParameters</code> and 
<code>DatabricksStatement.trimCommentsAndWhitespaces</code> by introducing a 
<code>SqlCommentParser</code> utility class.</li>
   <li>Fixed <code>rollback()</code> to throw <code>SQLException</code> when 
called in auto-commit mode (no active transaction), aligning with JDBC spec. 
Previously it silently sent a ROLLBACK command to the server.</li>
   <li>Fixed <code>fetchAutoCommitStateFromServer()</code> to accept both 
<code>&quot;1&quot;</code>/<code>&quot;0&quot;</code> and 
<code>&quot;true&quot;</code>/<code>&quot;false&quot;</code> responses from 
<code>SET AUTOCOMMIT</code> query, since different server implementations 
return different formats.</li>
   <li>Fixed socket leak in SDK HTTP client that prevented CRaC checkpointing. 
The SDK's connection pool was not shut down on <code>connection.close()</code>, 
leaving TCP sockets open.</li>
   <li>Fixed <code>IdleConnectionEvictor</code> thread leak in long-running 
services. The feature-flags context shared per host was ref-counted incorrectly 
and held a stale connection UUID after the owning connection closed; on the 
next 15-minute refresh it silently recreated an HTTP client (and its evictor 
thread) that was never cleaned up. Connection UUIDs are now tracked 
idempotently and the stored connection context is updated when the owning 
connection closes.</li>
   <li>Fixed Date fields within complex types (ARRAY, STRUCT, MAP) being 
returned as epoch day integers instead of proper date values.</li>
   <li>Fixed <code>DatabaseMetaData.getColumns()</code> returning the column 
type name in <code>COLUMN_DEF</code> for columns with no default value. 
<code>COLUMN_DEF</code> now correctly returns <code>null</code> per the JDBC 
specification.</li>
   <li>Coalesced concurrent expired cloud fetch link refreshes into a single 
batch FetchResults RPC to prevent thread pool exhaustion under high 
concurrency.</li>
   </ul>
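The <code>rollback()</code> fix brings the driver in line with the JDBC requirement that rolling back while auto-commit is enabled is an error, since no transaction is active. A minimal sketch of that spec-mandated guard (a hypothetical helper, not the driver's actual implementation):

```java
import java.sql.SQLException;

public class RollbackGuard {
    /**
     * JDBC spec: calling rollback() while auto-commit is enabled must throw,
     * because there is no active transaction to roll back. Before this fix,
     * the driver silently sent a ROLLBACK command to the server instead.
     */
    public static void checkRollback(boolean autoCommit) throws SQLException {
        if (autoCommit) {
            throw new SQLException("rollback() called while connection is in auto-commit mode");
        }
        // ...otherwise the driver proceeds to send ROLLBACK to the server
    }

    public static void main(String[] args) throws SQLException {
        checkRollback(false); // fine: a transaction may be active
        try {
            checkRollback(true);
        } catch (SQLException e) {
            System.out.println("threw as the spec requires: " + e.getMessage());
        }
    }
}
```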
   </blockquote>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/databricks/databricks-jdbc/blob/main/CHANGELOG.md">com.databricks:databricks-jdbc's changelog</a>.</em></p>
   <blockquote>
   <h2>[v3.3.1] - 2026-03-17</h2>
   <h3>Added</h3>
   <ul>
   <li>Added <code>DatabaseMetaData.getProcedures()</code> and 
<code>DatabaseMetaData.getProcedureColumns()</code> to discover stored 
procedures and their parameters. Queries 
<code>information_schema.routines</code> and 
<code>information_schema.parameters</code> using parameterized SQL for both SEA 
and Thrift transports.</li>
   <li>Added connection property <code>OAuthWebServerTimeout</code> to 
configure the OAuth browser authentication timeout for U2M (user-to-machine) 
flows; the previously hardcoded 1-hour timeout is replaced by a 120-second 
default.</li>
   <li>Added connection property <code>UseQueryForMetadata</code> to use SQL 
SHOW commands instead of Thrift RPCs for metadata operations (getCatalogs, 
getSchemas, getTables, getColumns, getFunctions). This fixes incorrect wildcard 
matching where <code>_</code> was treated as a single-character wildcard in 
Thrift metadata pattern filters.</li>
   <li>Added connection property <code>TreatMetadataCatalogNameAsPattern</code> 
to control whether catalog names are treated as patterns in Thrift metadata 
RPCs. When disabled (the default), unescaped <code>_</code> in catalog names is 
escaped to prevent single-character wildcard matching. This aligns with the 
JDBC spec, which treats catalogName as an identifier rather than a pattern.</li>
   </ul>
   <h3>Updated</h3>
   <ul>
   <li>Bumped <code>com.fasterxml.jackson.core:jackson-core</code> from 2.18.3 
to 2.18.6.</li>
   <li>Fat jar now routes SDK and Apache HTTP client logs through Java Util 
Logging (JUL), removing the need for external logging libraries.</li>
   <li>Added Apache Arrow on-heap memory management for processing Arrow query 
results. Previously, Arrow result processing was unusable on JDK 16+ without 
passing the <code>--add-opens=java.base/java.nio=ALL-UNNAMED</code> JVM 
argument, due to stricter encapsulation of internal APIs. With this change, no 
JVM argument is required: the driver automatically falls back to an on-heap 
memory path that uses standard JVM heap allocation instead of direct memory 
access.</li>
   <li>Log timestamps now explicitly display timezone.</li>
   <li><strong>[Breaking Change]</strong> 
<code>PreparedStatement.setTimestamp(int, Timestamp, Calendar)</code> now 
properly applies Calendar timezone conversion using the LocalDateTime pattern 
(in line with <code>getTimestamp</code>). Previously, the Calendar parameter 
was ignored.</li>
   <li><code>DatabaseMetaData.getColumns()</code> with null catalog parameter 
now retrieves columns from all available catalogs when using SQL Execution 
API.</li>
   </ul>
   <h3>Fixed</h3>
   <ul>
   <li>Fixed statement timeout handling when the server returns 
<code>TIMEDOUT_STATE</code> directly in the <code>ExecuteStatement</code> 
response (e.g. a query queued under load): the driver now throws 
<code>SQLTimeoutException</code> instead of 
<code>DatabricksHttpException</code>.</li>
   <li>Fixed Thrift polling infinite loop when server restarts invalidate 
operation handles, and added configurable timeout 
(<code>MetadataOperationTimeout</code>, default 300s) with sleep between polls 
for metadata operations.</li>
   <li>Fixed <code>DatabricksParameterMetaData.countParameters</code> and 
<code>DatabricksStatement.trimCommentsAndWhitespaces</code> by introducing a 
<code>SqlCommentParser</code> utility class.</li>
   <li>Fixed <code>rollback()</code> to throw <code>SQLException</code> when 
called in auto-commit mode (no active transaction), aligning with JDBC spec. 
Previously it silently sent a ROLLBACK command to the server.</li>
   <li>Fixed <code>fetchAutoCommitStateFromServer()</code> to accept both 
<code>&quot;1&quot;</code>/<code>&quot;0&quot;</code> and 
<code>&quot;true&quot;</code>/<code>&quot;false&quot;</code> responses from 
<code>SET AUTOCOMMIT</code> query, since different server implementations 
return different formats.</li>
   <li>Fixed socket leak in SDK HTTP client that prevented CRaC checkpointing. 
The SDK's connection pool was not shut down on <code>connection.close()</code>, 
leaving TCP sockets open.</li>
   <li>Fixed <code>IdleConnectionEvictor</code> thread leak in long-running 
services. The feature-flags context shared per host was ref-counted incorrectly 
and held a stale connection UUID after the owning connection closed; on the 
next 15-minute refresh it silently recreated an HTTP client (and its evictor 
thread) that was never cleaned up. Connection UUIDs are now tracked 
idempotently and the stored connection context is updated when the owning 
connection closes.</li>
   <li>Fixed Date fields within complex types (ARRAY, STRUCT, MAP) being 
returned as epoch day integers instead of proper date values.</li>
   <li>Fixed <code>DatabaseMetaData.getColumns()</code> returning the column 
type name in <code>COLUMN_DEF</code> for columns with no default value. 
<code>COLUMN_DEF</code> now correctly returns <code>null</code> per the JDBC 
specification.</li>
   <li>Coalesced concurrent expired cloud fetch link refreshes into a single 
batch FetchResults RPC to prevent thread pool exhaustion under high 
concurrency.</li>
   </ul>
   </blockquote>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/c6ab4de4ba8e5ff04cb290bcfbc477fb6caa1b87"><code>c6ab4de</code></a> Skip OWASP dependency check in release build step (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1286">#1286</a>)</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/aa2904251882e248387d1c2b604ef043d5e4b572"><code>aa29042</code></a> Cut new release 3.3.1 (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1281">#1281</a>)</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/28d6a134ad8506ded049bf5641792c4d06ed037a"><code>28d6a13</code></a> [PECOBLR-1746] Implementing support for listing procedures (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1238">#1238</a>)</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/dd0c7b92a214fa9a47258987b65027872b780fb0"><code>dd0c7b9</code></a> Fix IdleConnectionEvictor thread leak in long-running services (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1271">#1271</a>)</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/03ca3c106558e1dd7fa489192d94c55beb6e6027"><code>03ca3c1</code></a> [ES-1717770] Fix TIMEDOUT_STATE not recognized as error on interactive cluste...</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/b069b4e911b072f1523fa922d717a27a937d1b6f"><code>b069b4e</code></a> [ES-1765150] Coalesce concurrent expired-link refreshes into single batch RPC...</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/c7246422860760586dad94fadb9a60c2dca0abaa"><code>c724642</code></a> [ES-1774740] Fix Thrift polling infinite loop on invalid operation handle (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1">#1</a>...</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/7741c3c681dd13df1d7c43a4a8520e4631cf0968"><code>7741c3c</code></a> Add /sync-jdk8-branch slash command + update the jdk8 action workflow (<a href="https://redirect.github.com/databricks/databricks-jdbc/issues/1274">#1274</a>)</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/3ef86030329e93d2a0450b4b11c8e3b725521285"><code>3ef8603</code></a> Fix DatabaseMetaData.getColumns() returning type name in COLUMN_DEF for colum...</li>
   <li><a href="https://github.com/databricks/databricks-jdbc/commit/93cf3047880a4f795c832764330693898ba1d83e"><code>93cf304</code></a> Fix the JDK8 github action + make local build developer friendly + fix covera...</li>
   <li>Additional commits viewable in <a href="https://github.com/databricks/databricks-jdbc/compare/v3.2.1...v3.3.1">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.databricks:databricks-jdbc&package-manager=maven&previous-version=3.2.1&new-version=3.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
