This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/arrow-adbc.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 48a73d3 publish documentation
48a73d3 is described below
commit 48a73d3b3f686b6d2ef3737c22182154d0dad1c7
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Apr 28 16:25:55 2023 +0000
publish documentation
---
main/_sources/driver/go/flight_sql.rst.txt | 2 +-
main/_sources/driver/go/snowflake.rst.txt | 325 ++++++++++++++++
main/driver/go/flight_sql.html | 2 +-
main/driver/go/{flight_sql.html => snowflake.html} | 424 ++++++++++++---------
main/objects.inv | Bin 7467 -> 7485 bytes
main/python/api/adbc_driver_flightsql.html | 2 +-
main/searchindex.js | 2 +-
7 files changed, 577 insertions(+), 180 deletions(-)
diff --git a/main/_sources/driver/go/flight_sql.rst.txt b/main/_sources/driver/go/flight_sql.rst.txt
index a54f371..2b487e3 100644
--- a/main/_sources/driver/go/flight_sql.rst.txt
+++ b/main/_sources/driver/go/flight_sql.rst.txt
@@ -175,7 +175,7 @@ of the partitions.
The queue size can be changed by setting an option on the
:cpp:class:`AdbcStatement`:
-``adbc.flight.sql.rpc.queue_size``
+``adbc.rpc.result_queue_size``
The number of batches to queue per partition. Defaults to 5.
Metadata
diff --git a/main/_sources/driver/go/snowflake.rst.txt b/main/_sources/driver/go/snowflake.rst.txt
new file mode 100644
index 0000000..3a233b5
--- /dev/null
+++ b/main/_sources/driver/go/snowflake.rst.txt
@@ -0,0 +1,325 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements. See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership. The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License. You may obtain a copy of the License at
+..
+.. http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied. See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+================
+Snowflake Driver
+================
+
+The Snowflake Driver provides access to Snowflake Database Warehouses.
+
+Installation
+============
+
+The Snowflake Driver is shipped as a standalone library.
+
+.. tab-set::
+
+ .. tab-item:: Go
+ :sync: go
+
+ .. code-block:: shell
+
+ go get github.com/apache/arrow-adbc/go/adbc/driver/snowflake
+
+Usage
+=====
+
+To connect to a Snowflake database, supply the "uri" parameter when
+constructing the :cpp:class:`AdbcDatabase`.
+
+.. tab-set::
+
+ .. tab-item:: C++
+ :sync: cpp
+
+ .. code-block:: cpp
+
+ #include "adbc.h"
+
+ // Ignoring error handling
+ struct AdbcDatabase database;
+ AdbcDatabaseNew(&database, nullptr);
+ AdbcDatabaseSetOption(&database, "driver", "adbc_driver_snowflake",
nullptr);
+ AdbcDatabaseSetOption(&database, "uri", "<snowflake uri>", nullptr);
+ AdbcDatabaseInit(&database, nullptr);
+
+URI Format
+----------
+
+The Snowflake URI should use one of the following formats:
+
+- ``user[:password]@account/database/schema[?param1=value1&paramN=valueN]``
+- ``user[:password]@account/database[?param1=value1&paramN=valueN]``
+- ``user[:password]@host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]``
+- ``host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]``
+
+Alternatively, instead of providing a full URI, the configuration can
+be supplied entirely through the other available options, or through some
+combination of the URI and other options. If a URI is provided, it is parsed
+first, and any explicitly provided options override anything parsed from the URI.
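+
+As a sketch of this behavior (ignoring error handling; the URI and warehouse
+name are placeholders), an explicit option can be combined with a URI and will
+take precedence over the corresponding URI component:
+
+.. code-block:: cpp
+
+   struct AdbcDatabase database;
+   AdbcDatabaseNew(&database, nullptr);
+   AdbcDatabaseSetOption(&database, "driver", "adbc_driver_snowflake", nullptr);
+   AdbcDatabaseSetOption(&database, "uri", "<snowflake uri>", nullptr);
+   // Overrides any warehouse parsed from the URI.
+   AdbcDatabaseSetOption(&database, "adbc.snowflake.sql.warehouse", "<warehouse>", nullptr);
+   AdbcDatabaseInit(&database, nullptr);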
+
+Supported Features
+==================
+
+The Snowflake driver generally supports features defined in the ADBC API
+specification 1.0.0, as well as some additional, custom options.
+
+Authentication
+--------------
+
+Snowflake requires some form of authentication to be enabled. By default,
+the driver will attempt username/password authentication. The username and
+password can be provided in the URI or via the ``username`` and ``password``
+options to the :cpp:class:`AdbcDatabase`.
+
+Alternatively, other types of authentication can be specified and customized;
+see "Client Options" below.
+
+Bulk Ingestion
+--------------
+
+Bulk ingestion is supported. The mapping from Arrow types to Snowflake types
+is provided below.
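+
+Bulk ingestion uses the standard ADBC ingestion options on a statement. A rough
+sketch (error handling omitted; ``connection`` is an initialized
+:cpp:class:`AdbcConnection`, ``stream`` is an ``ArrowArrayStream`` of the data
+to load, and the table name is a placeholder):
+
+.. code-block:: cpp
+
+   struct AdbcStatement statement;
+   AdbcStatementNew(&connection, &statement, nullptr);
+   // Standard ADBC bulk ingestion options from adbc.h
+   AdbcStatementSetOption(&statement, "adbc.ingest.target_table", "MY_TABLE", nullptr);
+   AdbcStatementSetOption(&statement, "adbc.ingest.mode", "adbc.ingest.mode.create", nullptr);
+   AdbcStatementBindStream(&statement, &stream, nullptr);
+   AdbcStatementExecuteQuery(&statement, nullptr, nullptr, nullptr);
+   AdbcStatementRelease(&statement, nullptr);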
+
+Partitioned Result Sets
+-----------------------
+
+Partitioned result sets are not currently supported.
+
+Performance
+-----------
+
+Formal benchmarking is forthcoming. Snowflake does provide an Arrow native
+format for requesting results, but bulk ingestion is still currently executed
+using the REST API. As described in the `Snowflake Documentation
+<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#hdr-Batch_Inserts_and_Binding_Parameters>`_,
+the driver will attempt to improve performance by streaming the data
+(without creating files on the local machine) to a temporary stage for ingestion
+when the number of values exceeds some threshold.
+
+In order for the driver to leverage this temporary stage, the user must have
+the ``CREATE STAGE`` privilege on the schema. If the user does not have this
+privilege, the driver will fall back to sending the data with the query
+to the Snowflake database.
+
+In addition, the current database and schema for the session must be set. If
+these are not set, the ``CREATE TEMPORARY STAGE`` command executed by the driver
+can fail with the following error:
+
+.. code-block::
+
+   CREATE TEMPORARY STAGE SYSTEM$BIND file_format=(type=csv field_optionally_enclosed_by='"')
+   CANNOT perform CREATE STAGE. This session does not have a current schema. Call 'USE SCHEMA' or use a qualified name.
+
+Results are also potentially fetched in parallel from multiple endpoints.
+A limited number of batches are queued per endpoint, though data is always
+returned to the client in the order of the endpoints.
+
+The queue size can be changed by setting an option on the
+:cpp:class:`AdbcStatement`:
+
+``adbc.rpc.result_queue_size``
+ The number of batches to queue per endpoint. Defaults to 5.
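+
+For example (ignoring error handling), to raise the queue size on an existing
+statement:
+
+.. code-block:: cpp
+
+   AdbcStatementSetOption(&statement, "adbc.rpc.result_queue_size", "10", nullptr);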
+
+Transactions
+------------
+
+Transactions are supported. Keep in mind that Snowflake transactions will
+implicitly commit if any DDL statements are run, such as ``CREATE TABLE``.
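+
+Transactions follow the standard ADBC pattern: disable autocommit on the
+connection, then commit or roll back explicitly. A sketch (error handling
+omitted; ``connection`` is an initialized :cpp:class:`AdbcConnection`):
+
+.. code-block:: cpp
+
+   // Turning off autocommit begins a transaction.
+   AdbcConnectionSetOption(&connection, "adbc.connection.autocommit", "false", nullptr);
+   // ... execute statements on this connection ...
+   AdbcConnectionCommit(&connection, nullptr);  // or AdbcConnectionRollback(&connection, nullptr);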
+
+Client Options
+--------------
+
+The options used for creating a Snowflake database connection can be customized.
+These options map 1:1 with the Snowflake `Config object
+<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#Config>`_. An example of
+setting several of these options is shown after the list.
+
+``adbc.snowflake.sql.db``
+ The database this session should default to using.
+
+``adbc.snowflake.sql.schema``
+ The schema this session should default to using.
+
+``adbc.snowflake.sql.warehouse``
+ The warehouse this session should default to using.
+
+``adbc.snowflake.sql.role``
+ The role that should be used for authentication.
+
+``adbc.snowflake.sql.region``
+ The Snowflake region to use for constructing the connection URI.
+
+``adbc.snowflake.sql.account``
+ The Snowflake account that should be used for authentication and building the
+ connection URI.
+
+``adbc.snowflake.sql.uri.protocol``
+ This should be either ``http`` or ``https``.
+
+``adbc.snowflake.sql.uri.port``
+ The port to use for constructing the URI for connection.
+
+``adbc.snowflake.sql.uri.host``
+ The explicit host to use for constructing the URL to connect to.
+
+``adbc.snowflake.sql.auth_type``
+ Allows specifying alternate types of authentication; the allowed values are:
+
+ - ``auth_snowflake``: General username/password authentication (the default).
+ - ``auth_oauth``: Use OAuth authentication for the Snowflake connection.
+ - ``auth_ext_browser``: Use an external browser to access a FED and perform SSO authentication.
+ - ``auth_okta``: Use a native Okta URL to perform SSO authentication with Okta.
+ - ``auth_jwt``: Use a provided JWT to perform authentication.
+ - ``auth_mfa``: Use a username and password with MFA.
+
+``adbc.snowflake.sql.client_option.auth_token``
+ If using OAuth or another form of authentication, this option is how you can
+ explicitly specify the token to be used for the connection.
+
+``adbc.snowflake.sql.client_option.okta_url``
+ If using ``auth_okta``, this option is required in order to specify the
+ Okta URL to connect to for SSO authentication.
+
+``adbc.snowflake.sql.client_option.login_timeout``
+ Specify login retry timeout *excluding* network roundtrip and reading http responses.
+ Value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+ such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+ the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.request_timeout``
+ Specify request retry timeout *excluding* network roundtrip and reading http responses.
+ Value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+ such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+ the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.jwt_expire_timeout``
+ JWT expiration will occur after this timeout.
+ Value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+ such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+ the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.client_timeout``
+ Specify timeout for network roundtrip and reading http responses.
+ Value should be formatted as described `here <https://pkg.go.dev/time#ParseDuration>`_,
+ such as ``300ms``, ``1.5s`` or ``1m30s``. Even though negative values are accepted,
+ the absolute value of such a duration will be used.
+
+``adbc.snowflake.sql.client_option.app_name``
+ Allows specifying the Application Name to Snowflake for the connection.
+
+``adbc.snowflake.sql.client_option.tls_skip_verify``
+ Disable verification of the server's TLS certificate. Value should be ``true``
+ or ``false``.
+
+``adbc.snowflake.sql.client_option.ocsp_fail_open_mode``
+ Control the fail open mode for OCSP. Default is ``true``. Value should
+ be either ``true`` or ``false``.
+
+``adbc.snowflake.sql.client_option.keep_session_alive``
+ Enable the session to persist even after the connection is closed. Value
+ should be either ``true`` or ``false``.
+
+``adbc.snowflake.sql.client_option.jwt_private_key``
+ Specify the RSA private key which should be used to sign the JWT for
+ authentication. This should be a path to a file containing a PKCS1
+ private key to be read in and parsed. Commonly encoded in PEM blocks
+ of type "RSA PRIVATE KEY".
+
+``adbc.snowflake.sql.client_option.disable_telemetry``
+ The Snowflake driver gathers telemetry information, which can be
+ disabled by setting this to ``true``. Value should be either ``true``
+ or ``false``.
+
+``adbc.snowflake.sql.client_option.tracing``
+ Set the logging level.
+
+``adbc.snowflake.sql.client_option.cache_mfa_token``
+ When ``true``, the MFA token is cached in the credential manager. Defaults
+ to ``true`` on Windows/OSX, ``false`` on Linux.
+
+``adbc.snowflake.sql.client_option.store_temp_creds``
+ When ``true``, the ID token is cached in the credential manager. Defaults
+ to ``true`` on Windows/OSX, ``false`` on Linux.
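+
+As a sketch (ignoring error handling; the account, user, and key path are
+placeholders), key-pair/JWT authentication combined with a login timeout could
+be configured as follows:
+
+.. code-block:: cpp
+
+   struct AdbcDatabase database;
+   AdbcDatabaseNew(&database, nullptr);
+   AdbcDatabaseSetOption(&database, "driver", "adbc_driver_snowflake", nullptr);
+   AdbcDatabaseSetOption(&database, "username", "<user>", nullptr);
+   AdbcDatabaseSetOption(&database, "adbc.snowflake.sql.account", "<account>", nullptr);
+   AdbcDatabaseSetOption(&database, "adbc.snowflake.sql.auth_type", "auth_jwt", nullptr);
+   AdbcDatabaseSetOption(&database, "adbc.snowflake.sql.client_option.jwt_private_key",
+                         "<path to PKCS1 private key>", nullptr);
+   AdbcDatabaseSetOption(&database, "adbc.snowflake.sql.client_option.login_timeout",
+                         "30s", nullptr);
+   AdbcDatabaseInit(&database, nullptr);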
+
+
+Metadata
+--------
+
+When calling :cpp:func:`AdbcConnectionGetTableSchema`, the returned Arrow Schema
+will contain metadata on each field:
+
+``DATA_TYPE``
+ This will be a string containing the raw Snowflake data type of this column.
+
+``PRIMARY_KEY``
+ This will be either ``Y`` or ``N`` to indicate whether a column is a primary key.
+
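+For example, a sketch of retrieving such a schema (error handling omitted;
+``connection`` is an initialized :cpp:class:`AdbcConnection` and the
+catalog/schema/table names are placeholders):
+
+.. code-block:: cpp
+
+   struct ArrowSchema schema;
+   AdbcConnectionGetTableSchema(&connection, "<database>", "<schema>", "MY_TABLE",
+                                &schema, nullptr);
+   // Each schema.children[i]->metadata buffer carries the DATA_TYPE and
+   // PRIMARY_KEY keys described above.
+   schema.release(&schema);
+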
+In addition, the schema on the stream of results from a query will contain
+the following metadata keys on each field:
+
+``logicalType``
+ The Snowflake logical type of this column. Will be one of ``fixed``,
+ ``real``, ``text``, ``date``, ``variant``, ``timestamp_ltz``, ``timestamp_ntz``,
+ ``timestamp_tz``, ``object``, ``array``, ``binary``, ``time``, ``boolean``.
+
+``precision``
+ An integer representing the Snowflake precision of the field.
+
+``scale``
+ An integer representing the Snowflake scale of the values in this field.
+
+``charLength``
+ If a text field, this will be equivalent to the ``VARCHAR(#)`` parameter ``#``.
+
+``byteLength``
+ Will contain the length, in bytes, of the raw data sent back from Snowflake
+ regardless of the type of the field in Arrow.
+
+Type Support
+------------
+
+Because Snowflake types do not necessarily match up 1-to-1 with Arrow types,
+the following is what should be expected when requesting data. Any conversions
+indicated are done to ensure consistency of the stream of record batches.
+
++----------------+---------------+-----------------------------------------+
+| Snowflake Type | Arrow Type    | Notes                                   |
++----------------+---------------+-----------------------------------------+
+| Integral Types | Int64         | All integral types in Snowflake are     |
+|                |               | stored as 64-bit integers.              |
++----------------+---------------+-----------------------------------------+
+| Float/Double   | Float64       | Snowflake does not distinguish between  |
+|                |               | float or double. All are 64-bit values. |
++----------------+---------------+-----------------------------------------+
+| Decimal/Numeric| Int64/Float64 | If Scale == 0 then Int64 is used, else  |
+|                |               | Float64 is returned.                    |
++----------------+---------------+-----------------------------------------+
+| Time           | Time64(ns)    | For ingestion, Time32 will also work.   |
++----------------+---------------+-----------------------------------------+
+| Date           | Date32        | For ingestion, Date64 will also work.   |
++----------------+---------------+-----------------------------------------+
+| Timestamp_LTZ  | Timestamp(ns) | Local time zone will be used.           |
+| Timestamp_NTZ  |               | No timezone specified in Arrow type info|
+| Timestamp_TZ   |               | Values will be converted to UTC.        |
++----------------+---------------+-----------------------------------------+
+| Variant        | String        | Snowflake does not provide nested type  |
+| Object         |               | information. So each value will be a    |
+| Array          |               | string, similar to JSON, which can be   |
+|                |               | parsed. The ``logicalType`` metadata key|
+|                |               | will contain the Snowflake field type.  |
++----------------+---------------+-----------------------------------------+
+| Geography      | String        | There is no canonical Arrow type for    |
+| Geometry       |               | these, and Snowflake returns them as    |
+|                |               | strings.                                |
++----------------+---------------+-----------------------------------------+
diff --git a/main/driver/go/flight_sql.html b/main/driver/go/flight_sql.html
index 41774c1..7ea0f1e 100644
--- a/main/driver/go/flight_sql.html
+++ b/main/driver/go/flight_sql.html
@@ -469,7 +469,7 @@ of the partitions.</p>
<p>The queue size can be changed by setting an option on the
<a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv413AdbcStatement"
title="AdbcStatement"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcStatement</span></code></a>:</p>
<dl class="simple">
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.queue_size</span></code></dt><dd><p>The number
of batches to queue per partition. Defaults to 5.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.rpc.result_queue_size</span></code></dt><dd><p>The number of
batches to queue per partition. Defaults to 5.</p>
</dd>
</dl>
</section>
diff --git a/main/driver/go/flight_sql.html b/main/driver/go/snowflake.html
similarity index 54%
copy from main/driver/go/flight_sql.html
copy to main/driver/go/snowflake.html
index 41774c1..93723ff 100644
--- a/main/driver/go/flight_sql.html
+++ b/main/driver/go/snowflake.html
@@ -5,10 +5,10 @@
<head><meta charset="utf-8"/>
<meta name="viewport" content="width=device-width,initial-scale=1"/>
<meta name="color-scheme" content="light dark"><meta name="generator"
content="Docutils 0.19: https://docutils.sourceforge.io/" />
-<link rel="index" title="Index" href="../../genindex.html" /><link
rel="search" title="Search" href="../../search.html" /><link rel="next"
title="Java" href="../java/index.html" /><link rel="prev" title="Go"
href="index.html" />
+<link rel="index" title="Index" href="../../genindex.html" /><link
rel="search" title="Search" href="../../search.html" />
<!-- Generated with Sphinx 5.3.0 and Furo 2023.03.27 -->
- <title>Flight SQL Driver - ADBC 0.4.0 (dev) documentation</title>
+ <title>Snowflake Driver - ADBC 0.4.0 (dev) documentation</title>
<link rel="stylesheet" type="text/css" href="../../_static/pygments.css"
/>
<link rel="stylesheet" type="text/css"
href="../../_static/styles/furo.css?digest=fad236701ea90a88636c2a8c73b44ae642ed2a53"
/>
<link rel="stylesheet" type="text/css" href="../../_static/copybutton.css"
/>
@@ -226,7 +226,7 @@
<li class="toctree-l1"><a class="reference internal"
href="../../faq.html">Frequently Asked Questions (FAQ)</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Drivers</span></p>
-<ul class="current">
+<ul>
<li class="toctree-l1"><a class="reference internal"
href="../installation.html">Installation</a></li>
<li class="toctree-l1"><a class="reference internal"
href="../status.html">Driver Feature Support/Implementation Status</a></li>
<li class="toctree-l1 has-children"><a class="reference internal"
href="../cpp/index.html">C/C++/Python</a><input class="toctree-checkbox"
id="toctree-checkbox-1" name="toctree-checkbox-1" role="switch"
type="checkbox"/><label for="toctree-checkbox-1"><div
class="visually-hidden">Toggle child pages in navigation</div><i
class="icon"><svg><use href="#svg-arrow-right"></use></svg></i></label><ul>
@@ -234,8 +234,8 @@
<li class="toctree-l2"><a class="reference internal"
href="../cpp/sqlite.html">SQLite Driver</a></li>
</ul>
</li>
-<li class="toctree-l1 current has-children"><a class="reference internal"
href="index.html">Go</a><input checked="" class="toctree-checkbox"
id="toctree-checkbox-2" name="toctree-checkbox-2" role="switch"
type="checkbox"/><label for="toctree-checkbox-2"><div
class="visually-hidden">Toggle child pages in navigation</div><i
class="icon"><svg><use href="#svg-arrow-right"></use></svg></i></label><ul
class="current">
-<li class="toctree-l2 current current-page"><a class="current reference
internal" href="#">Flight SQL Driver</a></li>
+<li class="toctree-l1 has-children"><a class="reference internal"
href="index.html">Go</a><input class="toctree-checkbox" id="toctree-checkbox-2"
name="toctree-checkbox-2" role="switch" type="checkbox"/><label
for="toctree-checkbox-2"><div class="visually-hidden">Toggle child pages in
navigation</div><i class="icon"><svg><use
href="#svg-arrow-right"></use></svg></i></label><ul>
+<li class="toctree-l2"><a class="reference internal"
href="flight_sql.html">Flight SQL Driver</a></li>
</ul>
</li>
<li class="toctree-l1 has-children"><a class="reference internal"
href="../java/index.html">Java</a><input class="toctree-checkbox"
id="toctree-checkbox-3" name="toctree-checkbox-3" role="switch"
type="checkbox"/><label for="toctree-checkbox-3"><div
class="visually-hidden">Toggle child pages in navigation</div><i
class="icon"><svg><use href="#svg-arrow-right"></use></svg></i></label><ul>
@@ -304,7 +304,7 @@
</a>
<div class="content-icon-container">
<div class="edit-this-page">
- <a class="muted-link"
href="https://github.com/apache/arrow-adbc/edit/main/docs/source/driver/go/flight_sql.rst"
title="Edit this page">
+ <a class="muted-link"
href="https://github.com/apache/arrow-adbc/edit/main/docs/source/driver/go/snowflake.rst"
title="Edit this page">
<svg aria-hidden="true" viewBox="0 0 24 24" stroke-width="1.5"
stroke="currentColor" fill="none" stroke-linecap="round"
stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<path d="M4 20h4l10.5 -10.5a1.5 1.5 0 0 0 -4 -4l-10.5 10.5v4" />
@@ -326,25 +326,17 @@
</label>
</div>
<article role="main">
- <section id="flight-sql-driver">
-<h1>Flight SQL Driver<a class="headerlink" href="#flight-sql-driver"
title="Permalink to this heading">#</a></h1>
-<p>The Flight SQL Driver provides access to any database implementing a
-<a class="reference external"
href="https://arrow.apache.org/docs/format/FlightSql.html" title="(in Apache
Arrow v11.0.0)"><span>Arrow Flight SQL</span></a> compatible endpoint.</p>
+ <section id="snowflake-driver">
+<h1>Snowflake Driver<a class="headerlink" href="#snowflake-driver"
title="Permalink to this heading">#</a></h1>
+<p>The Snowflake Driver provides access to Snowflake Database Warehouses.</p>
<section id="installation">
<h2>Installation<a class="headerlink" href="#installation" title="Permalink to
this heading">#</a></h2>
-<p>The Flight SQL driver is shipped as a standalone library.</p>
+<p>The Snowflake Driver is shipped as a standalone library</p>
<div class="sd-tab-set docutils">
<input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio">
</input><label class="sd-tab-label" data-sync-id="go" for="sd-tab-item-0">
Go</label><div class="sd-tab-content docutils">
-<div class="highlight-shell notranslate"><div
class="highlight"><pre><span></span>go<span class="w"> </span>get<span
class="w"> </span>github.com/apache/arrow-adbc/go
-</pre></div>
-</div>
-</div>
-<input id="sd-tab-item-1" name="sd-tab-set-0" type="radio">
-</input><label class="sd-tab-label" data-sync-id="python" for="sd-tab-item-1">
-Python</label><div class="sd-tab-content docutils">
-<div class="highlight-shell notranslate"><div
class="highlight"><pre><span></span>pip<span class="w"> </span>install<span
class="w"> </span>adbc_driver_sqlite
+<div class="highlight-shell notranslate"><div
class="highlight"><pre><span></span>go<span class="w"> </span>get<span
class="w"> </span>github.com/apache/arrow-adbc/go/adbc/driver/snowflake
</pre></div>
</div>
</div>
@@ -352,176 +344,272 @@ Python</label><div class="sd-tab-content docutils">
</section>
<section id="usage">
<h2>Usage<a class="headerlink" href="#usage" title="Permalink to this
heading">#</a></h2>
-<p>To connect to a database, supply the “uri” parameter when constructing
-the <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv412AdbcDatabase" title="AdbcDatabase"><code
class="xref cpp cpp-class docutils literal notranslate"><span
class="pre">AdbcDatabase</span></code></a>.</p>
+<p>To connect to a Snowflake database you can supply the “uri” parameter when
+constructing the :cpp:<code class="xref py py-class docutils literal
notranslate"><span class="pre">AdbcDatabase</span></code>.</p>
<div class="sd-tab-set docutils">
-<input checked="checked" id="sd-tab-item-2" name="sd-tab-set-1" type="radio">
-</input><label class="sd-tab-label" data-sync-id="cpp" for="sd-tab-item-2">
+<input checked="checked" id="sd-tab-item-1" name="sd-tab-set-1" type="radio">
+</input><label class="sd-tab-label" data-sync-id="cpp" for="sd-tab-item-1">
C++</label><div class="sd-tab-content docutils">
<div class="highlight-cpp notranslate"><div
class="highlight"><pre><span></span><span class="cp">#include</span><span
class="w"> </span><span class="cpf">"adbc.h"</span>
<span class="c1">// Ignoring error handling</span>
<span class="k">struct</span><span class="w"> </span><span
class="nc">AdbcDatabase</span><span class="w"> </span><span
class="n">database</span><span class="p">;</span>
<span class="n">AdbcDatabaseNew</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
-<span class="n">AdbcDatabaseSetOption</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span
class="s">"driver"</span><span class="p">,</span><span class="w">
</span><span class="s">"adbc_driver_flightsql"</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
-<span class="n">AdbcDatabaseSetOption</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span
class="s">"uri"</span><span class="p">,</span><span class="w">
</span><span class="s">"grpc://localhost:8080"</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
+<span class="n">AdbcDatabaseSetOption</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span
class="s">"driver"</span><span class="p">,</span><span class="w">
</span><span class="s">"adbc_driver_snowflake"</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
+<span class="n">AdbcDatabaseSetOption</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span
class="s">"uri"</span><span class="p">,</span><span class="w">
</span><span class="s">"<snowflake uri>"</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
<span class="n">AdbcDatabaseInit</span><span class="p">(</span><span
class="o">&</span><span class="n">database</span><span
class="p">,</span><span class="w"> </span><span class="k">nullptr</span><span
class="p">);</span>
</pre></div>
</div>
</div>
-<input id="sd-tab-item-3" name="sd-tab-set-1" type="radio">
-</input><label class="sd-tab-label" data-sync-id="python" for="sd-tab-item-3">
-Python</label><div class="sd-tab-content docutils">
-<div class="highlight-python notranslate"><div
class="highlight"><pre><span></span><span class="kn">import</span> <span
class="nn">adbc_driver_flightsql.dbapi</span>
-
-<span class="k">with</span> <span class="n">adbc_driver_flightsql</span><span
class="o">.</span><span class="n">dbapi</span><span class="o">.</span><span
class="n">connect</span><span class="p">(</span><span
class="s2">"grpc://localhost:8080"</span><span class="p">)</span>
<span class="k">as</span> <span class="n">conn</span><span class="p">:</span>
- <span class="k">pass</span>
-</pre></div>
-</div>
-</div>
</div>
+<section id="uri-format">
+<h3>URI Format<a class="headerlink" href="#uri-format" title="Permalink to
this heading">#</a></h3>
+<p>The Snowflake URI should be of one of the following formats:</p>
+<ul class="simple">
+<li><p><code class="docutils literal notranslate"><span
class="pre">user[:password]@account/database/schema[?param1=value1&paramN=valueN]</span></code></p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">user[:password]@account/database[?param1=value1&paramN=valueN]</span></code></p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">user[:password]@host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]</span></code></p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">host:port/database/schema?account=user_account[&param1=value1&paramN=valueN]</span></code></p></li>
+</ul>
+<p>Alternately, instead of providing a full URI, the configuration can
+be entirely supplied using the other available options or some combination
+of the URI and other options. If a URI is provided, it will be parsed first
+and any explicit options provided will override anything parsed from the
URI.</p>
+</section>
</section>
<section id="supported-features">
<h2>Supported Features<a class="headerlink" href="#supported-features"
title="Permalink to this heading">#</a></h2>
-<p>The Flight SQL driver generally supports features defined in the ADBC
-API specification 1.0.0, as well as some additional, custom options.</p>
+<p>The Snowflake driver generally supports features defined in the ADBC API
+specification 1.0.0, as well as some additional, custom options.</p>
<section id="authentication">
<h3>Authentication<a class="headerlink" href="#authentication"
title="Permalink to this heading">#</a></h3>
-<p>The driver does no authentication by default. The driver implements a
-few optional authentication schemes:</p>
-<ul>
-<li><p>Mutual TLS (mTLS): see “Client Options” below.</p></li>
-<li><p>An HTTP-style scheme mimicking the Arrow Flight SQL JDBC driver.</p>
-<p>Set the options <code class="docutils literal notranslate"><span
class="pre">username</span></code> and <code class="docutils literal
notranslate"><span class="pre">password</span></code> on the
-<a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv412AdbcDatabase" title="AdbcDatabase"><code
class="xref cpp cpp-class docutils literal notranslate"><span
class="pre">AdbcDatabase</span></code></a>. Alternatively, set the option
-<code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.authorization_header</span></code> for full
control.</p>
-<p>The client provides credentials sending an <code class="docutils literal
notranslate"><span class="pre">authorization</span></code> from
-client to server. The server then responds with an
-<code class="docutils literal notranslate"><span
class="pre">authorization</span></code> header on the first request. The value
of this
-header will then be sent back as the <code class="docutils literal
notranslate"><span class="pre">authorization</span></code> header on all
-future requests.</p>
-</li>
-</ul>
+<p>Snowflake requires some form of authentication to be enabled. By default
+it will attempt to use Username/Password authentication. The username and
+password can be provided in the URI or via the <code class="docutils literal
notranslate"><span class="pre">username</span></code> and <code class="docutils
literal notranslate"><span class="pre">password</span></code>
+options to the <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv412AdbcDatabase" title="AdbcDatabase"><code
class="xref cpp cpp-class docutils literal notranslate"><span
class="pre">AdbcDatabase</span></code></a>.</p>
+<p>Alternately, other types of authentication can be specified and customized.
+See “Client Options” below.</p>
</section>
<section id="bulk-ingestion">
<h3>Bulk Ingestion<a class="headerlink" href="#bulk-ingestion"
title="Permalink to this heading">#</a></h3>
-<p>Flight SQL does not have a dedicated API for bulk ingestion of Arrow
-data into a given table. The driver does not currently implement bulk
-ingestion as a result.</p>
+<p>Bulk ingestion is supported. The mapping from Arrow types to Snowflake types
+is provided below.</p>
+</section>
+<section id="partitioned-result-sets">
+<h3>Partitioned Result Sets<a class="headerlink"
href="#partitioned-result-sets" title="Permalink to this heading">#</a></h3>
+<p>Partitioned result sets are not currently supported.</p>
+</section>
+<section id="performance">
+<h3>Performance<a class="headerlink" href="#performance" title="Permalink to
this heading">#</a></h3>
+<p>Formal benchmarking is forthcoming. Snowflake does provide an Arrow native
+format for requesting results, but bulk ingestion is still currently executed
+using the REST API. As described in the <cite>Snowflake Documentation
+<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#hdr-Batch_Inserts_and_Binding_Parameters></cite>
+the driver will potentially attempt to improve performance by streaming the
data
+(without creating files on the local machine) to a temporary stage for
ingestion
+if the number of values exceeds some threshold.</p>
+<p>In order for the driver to leverage this temporary stage, the user must have
+the <code class="docutils literal notranslate"><span class="pre">CREATE</span>
<span class="pre">STAGE</span></code> privilege on the schema. If the user does
not have this
+privilege, the driver will fall back to sending the data with the query
+to the snowflake database.</p>
+<p>In addition, the current database and schema for the session must be set. If
+these are not set, the <code class="docutils literal notranslate"><span
class="pre">CREATE</span> <span class="pre">TEMPORARY</span> <span
class="pre">STAGE</span></code> command executed by the driver
+can fail with the following error:</p>
+<p>In addition, results are potentially fetched in parallel from multiple
endpoints.
+A limited number of batches are queued per endpoint, though data is always
+returned to the client in the order of the endpoints.</p>
+<p>The queue size can be changed by setting an option on the <a
class="reference internal" href="../../cpp/api/adbc.html#_CPPv413AdbcStatement"
title="AdbcStatement"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcStatement</span></code></a>:</p>
+<dl class="simple">
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.rpc.result_queue_size</span></code></dt><dd><p>The number of
batches to queue per endpoint. Defaults to 5.</p>
+</dd>
+</dl>
+</section>
+<section id="transactions">
+<h3>Transactions<a class="headerlink" href="#transactions" title="Permalink to
this heading">#</a></h3>
+<p>Transactions are supported. Keep in mind that Snowflake transactions will
+implicitly commit if any DDL statements are run, such as <code class="docutils
literal notranslate"><span class="pre">CREATE</span> <span
class="pre">TABLE</span></code>.</p>
</section>
<section id="client-options">
<h3>Client Options<a class="headerlink" href="#client-options"
title="Permalink to this heading">#</a></h3>
-<p>The options used for creating the Flight RPC client can be customized.
-These options map 1:1 with the options in FlightClientOptions:</p>
+<p>The options used for creating a Snowflake Database connection can be
customized.
+These options map 1:1 with the Snowflake <cite>Config object
<https://pkg.go.dev/github.com/snowflakedb/gosnowflake#Config></cite>.</p>
<dl class="simple">
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.mtls_cert_chain</span></code></dt><dd><p>The
certificate chain to use for mTLS.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.db</span></code></dt><dd><p>The database this
session should default to using.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.mtls_private_key</span></code></dt><dd><p>The
private key to use for mTLS.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.schema</span></code></dt><dd><p>The schema this
session should default to using.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.tls_override_hostname</span></code></dt><dd><p>Override
the hostname used to verify the server’s TLS certificate.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.warehouse</span></code></dt><dd><p>The warehouse
this session should default to using.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.tls_skip_verify</span></code></dt><dd><p>Disable
verification of the server’s TLS certificate. Value
-should be <code class="docutils literal notranslate"><span
class="pre">true</span></code> or <code class="docutils literal
notranslate"><span class="pre">false</span></code>.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.role</span></code></dt><dd><p>The role that
should be used for authentication.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.tls_root_certs</span></code></dt><dd><p>Override
the root certificates used to validate the server’s TLS
-certificate.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.region</span></code></dt><dd><p>The Snowflake
region to use for constructing the connection URI.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.with_block</span></code></dt><dd><p>Whether
connections should wait until connections are established,
-or connect lazily when used. The latter is gRPC’s default
-behavior, but the driver defaults to eager connection to surface
-errors earlier. Value should be <code class="docutils literal
notranslate"><span class="pre">true</span></code> or <code class="docutils
literal notranslate"><span class="pre">false</span></code>.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.account</span></code></dt><dd><p>The Snowflake
account that should be used for authentication and building the
+connection URI.</p>
</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.client_option.with_max_msg_size</span></code></dt><dd><p>The
maximum message size to accept from the server. The driver
-defaults to 16 MiB since Flight services tend to return larger
-reponse payloads. Should be a positive integer number of bytes.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.uri.protocol</span></code></dt><dd><p>This
should be either <cite>http</cite> or <cite>https</cite>.</p>
</dd>
-</dl>
-</section>
-<section id="custom-call-headers">
-<h3>Custom Call Headers<a class="headerlink" href="#custom-call-headers"
title="Permalink to this heading">#</a></h3>
-<p>Custom HTTP headers can be attached to requests via options that apply
-to <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv412AdbcDatabase" title="AdbcDatabase"><code
class="xref cpp cpp-class docutils literal notranslate"><span
class="pre">AdbcDatabase</span></code></a>, <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv414AdbcConnection"
title="AdbcConnection"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcConnection</span></code></a>, and
-<a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv413AdbcStatement"
title="AdbcStatement"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcStatement</span></code></a>.</p>
-<dl>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.call_header.<HEADER</span> <span
class="pre">NAME></span></code></dt><dd><p>Add the header <code
class="docutils literal notranslate"><span class="pre"><HEADER</span> <span
class="pre">NAME></span></code> to outgoing requests with the given
-value.</p>
-<div class="admonition warning">
-<p class="admonition-title">Warning</p>
-<p>Header names must be in all lowercase.</p>
-</div>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.uri.port</span></code></dt><dd><p>The port to
use for constructing the URI for connection.</p>
</dd>
-</dl>
-</section>
-<section id="distributed-result-sets">
-<h3>Distributed Result Sets<a class="headerlink"
href="#distributed-result-sets" title="Permalink to this heading">#</a></h3>
-<p>The driver will fetch all partitions (FlightEndpoints) returned by the
-server, in an unspecified order (note that Flight SQL itself does not
-define an ordering on these partitions). If an endpoint has no
-locations, the data will be fetched using the original server
-connection. Else, the driver will try each location given, in order,
-until a request succeeds. If the connection or request fails, it will
-try the next location.</p>
-<p>The driver does not currently cache or pool these secondary
-connections. It also does not retry connections or requests.</p>
-<p>All partitions are fetched in parallel. A limited number of batches
-are queued per partition. Data is returned to the client in the order
-of the partitions.</p>
-<p>The queue size can be changed by setting an option on the
-<a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv413AdbcStatement"
title="AdbcStatement"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcStatement</span></code></a>:</p>
-<dl class="simple">
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.queue_size</span></code></dt><dd><p>The number
of batches to queue per partition. Defaults to 5.</p>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.uri.host</span></code></dt><dd><p>The explicit
host to use for constructing the URL to connect to.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.auth_type</span></code></dt><dd><p>Allows
specifying alternate types of authentication, the allowed values are:</p>
+<ul class="simple">
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_snowflake</span></code>: General username/password
authentication (this is the default)</p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_oauth</span></code>: Use OAuth authentication for the
snowflake connection.</p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_ext_browser</span></code>: Use an external browser to access a
FED and perform SSO auth.</p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_okta</span></code>: Use a native Okta URL to perform SSO
authentication using Okta</p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_jwt</span></code>: Use a provided JWT to perform
authentication.</p></li>
+<li><p><code class="docutils literal notranslate"><span
class="pre">auth_mfa</span></code>: Use a username and password with
MFA.</p></li>
+</ul>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.auth_token</span></code></dt><dd><p>If
using OAuth or another form of authentication, this option is how you can
+explicitly specify the token to be used for connection.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.okta_url</span></code></dt><dd><p>If
using <code class="docutils literal notranslate"><span
class="pre">auth_okta</span></code>, this option is required in order to
specify the
+Okta URL to connect to for SSO authentication.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.login_timeout</span></code></dt><dd><p>Specify
login retry timeout <em>excluding</em> network roundtrip and reading http
responses.
+Value should be formatted as described <cite>here
<https://pkg.go.dev/time#ParseDuration></cite>,
+such as <code class="docutils literal notranslate"><span
class="pre">300ms</span></code>, <code class="docutils literal
notranslate"><span class="pre">1.5s</span></code> or <code class="docutils
literal notranslate"><span class="pre">1m30s</span></code>. Even though
negative values are accepted,
+the absolute value of such a duration will be used.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.request_timeout</span></code></dt><dd><p>Specify
request retry timeout <em>excluding</em> network roundtrip and reading http
responses.
+Value should be formatted as described <cite>here
<https://pkg.go.dev/time#ParseDuration></cite>,
+such as <code class="docutils literal notranslate"><span
class="pre">300ms</span></code>, <code class="docutils literal
notranslate"><span class="pre">1.5s</span></code> or <code class="docutils
literal notranslate"><span class="pre">1m30s</span></code>. Even though
negative values are accepted,
+the absolute value of such a duration will be used.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.jwt_expire_timeout</span></code></dt><dd><p>JWT
expiration will occur after this timeout.
+Value should be formatted as described <cite>here
<https://pkg.go.dev/time#ParseDuration></cite>,
+such as <code class="docutils literal notranslate"><span
class="pre">300ms</span></code>, <code class="docutils literal
notranslate"><span class="pre">1.5s</span></code> or <code class="docutils
literal notranslate"><span class="pre">1m30s</span></code>. Even though
negative values are accepted,
+the absolute value of such a duration will be used.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.client_timeout</span></code></dt><dd><p>Specify
timeout for network roundtrip and reading http responses.
+Value should be formatted as described <cite>here
<https://pkg.go.dev/time#ParseDuration></cite>,
+such as <code class="docutils literal notranslate"><span
class="pre">300ms</span></code>, <code class="docutils literal
notranslate"><span class="pre">1.5s</span></code> or <code class="docutils
literal notranslate"><span class="pre">1m30s</span></code>. Even though
negative values are accepted,
+the absolute value of such a duration will be used.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.app_name</span></code></dt><dd><p>Allows
specifying the Application Name to Snowflake for the connection.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.tls_skip_verify</span></code></dt><dd><p>Disable
verification of the server’s TLS certificate. Value should be <code
class="docutils literal notranslate"><span class="pre">true</span></code>
+or <code class="docutils literal notranslate"><span
class="pre">false</span></code>.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.ocsp_fail_open_mode</span></code></dt><dd><p>Control
the fail open mode for OCSP. Default is <code class="docutils literal
notranslate"><span class="pre">true</span></code>. Value should
+be either <code class="docutils literal notranslate"><span
class="pre">true</span></code> or <code class="docutils literal
notranslate"><span class="pre">false</span></code>.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.keep_session_alive</span></code></dt><dd><p>Enable
the session to persist even after the connection is closed. Value
+should be either <code class="docutils literal notranslate"><span
class="pre">true</span></code> or <code class="docutils literal
notranslate"><span class="pre">false</span></code>.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.jwt_private_key</span></code></dt><dd><p>Specify
the RSA private key which should be used to sign the JWT for
+authentication. This should be a path to a file containing a PKCS1
+private key to be read in and parsed. Commonly encoded in PEM blocks
+of type “RSA PRIVATE KEY”.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.disable_telemetry</span></code></dt><dd><p>The
Snowflake driver allows for telemetry information which can be
+disabled by setting this to <code class="docutils literal notranslate"><span
class="pre">true</span></code>. Value should be either <code class="docutils
literal notranslate"><span class="pre">true</span></code>
+or <code class="docutils literal notranslate"><span
class="pre">false</span></code>.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.tracing</span></code></dt><dd><p>Set
the logging level</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.cache_mfa_token</span></code></dt><dd><p>When
<code class="docutils literal notranslate"><span
class="pre">true</span></code>, the MFA token is cached in the credential
manager. Defaults
+to <code class="docutils literal notranslate"><span
class="pre">true</span></code> on Windows/OSX, <code class="docutils literal
notranslate"><span class="pre">false</span></code> on Linux.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">adbc.snowflake.sql.client_option.store_temp_creds</span></code></dt><dd><p>When
<code class="docutils literal notranslate"><span
class="pre">true</span></code>, the ID token is cached in the credential
manager. Defaults
+to <code class="docutils literal notranslate"><span
class="pre">true</span></code> on Windows/OSX, <code class="docutils literal
notranslate"><span class="pre">false</span></code> on Linux.</p>
</dd>
</dl>
</section>
<section id="metadata">
<h3>Metadata<a class="headerlink" href="#metadata" title="Permalink to this
heading">#</a></h3>
-<p>The driver currently will not populate column constraint info (foreign
-keys, primary keys, etc.) in <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv424AdbcConnectionGetObjectsP14AdbcConnectioniPKcPKcPKcPPKcPKcP16ArrowArrayStreamP9AdbcError"
title="AdbcConnectionGetObjects"><code class="xref cpp cpp-func docutils
literal notranslate"><span
class="pre">AdbcConnectionGetObjects()</span></code></a>.
-Also, catalog filters are evaluated as simple string matches, not
-<code class="docutils literal notranslate"><span
class="pre">LIKE</span></code>-style patterns.</p>
-</section>
-<section id="partitioned-result-sets">
-<h3>Partitioned Result Sets<a class="headerlink"
href="#partitioned-result-sets" title="Permalink to this heading">#</a></h3>
-<p>The Flight SQL driver supports ADBC’s partitioned result sets. When
-requested, each partition of a result set contains a serialized
-FlightInfo, containing one of the FlightEndpoints of the original
-response. Clients who may wish to introspect the partition can do so
-by deserializing the contained FlightInfo from the ADBC partitions.
-(For example, a client that wishes to distribute work across multiple
-workers or machines may want to try to take advantage of locality
-information that ADBC does not have.)</p>
-</section>
-<section id="timeouts">
-<h3>Timeouts<a class="headerlink" href="#timeouts" title="Permalink to this
heading">#</a></h3>
-<p>By default, timeouts are not used for RPC calls. They can be set via
-special options on <a class="reference internal"
href="../../cpp/api/adbc.html#_CPPv414AdbcConnection"
title="AdbcConnection"><code class="xref cpp cpp-class docutils literal
notranslate"><span class="pre">AdbcConnection</span></code></a>. In general,
it is
-best practice to set timeouts to avoid unexpectedly getting stuck.
-The options are as follows:</p>
-<dl>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.timeout_seconds.fetch</span></code></dt><dd><p>A
timeout (in floating-point seconds) for any API calls that fetch
-data. This corresponds to Flight <code class="docutils literal
notranslate"><span class="pre">DoGet</span></code> calls.</p>
-<p>For example, this controls the timeout of the underlying Flight
-calls that fetch more data as a result set is consumed.</p>
-</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.timeout_seconds.query</span></code></dt><dd><p>A
timeout (in floating-point seconds) for any API calls that
-execute a query. This corresponds to Flight <code class="docutils literal
notranslate"><span class="pre">GetFlightInfo</span></code>
-calls.</p>
-<p>For example, this controls the timeout of the underlying Flight
-calls that implement <code class="xref py py-func docutils literal
notranslate"><span class="pre">AdbcStatementExecuteQuery()</span></code>.</p>
-</dd>
-<dt><code class="docutils literal notranslate"><span
class="pre">adbc.flight.sql.rpc.timeout_seconds.update</span></code></dt><dd><p>A
timeout (in floating-point seconds) for any API calls that
-upload data or perform other updates.</p>
-<p>For example, this controls the timeout of the underlying Flight
-calls that implement bulk ingestion, or transaction support.</p>
+<p>When calling <a href="#id1"><span class="problematic"
id="id2">:cpp:`AdbcConnectionGetTableSchema`</span></a>, the returned Arrow
Schema
+will contain metadata on each field:</p>
+<dl class="simple">
+<dt><code class="docutils literal notranslate"><span
class="pre">DATA_TYPE</span></code></dt><dd><p>This will be a string containing
the raw Snowflake data type of this column</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">PRIMARY_KEY</span></code></dt><dd><p>This will be either <code
class="docutils literal notranslate"><span class="pre">Y</span></code> or <code
class="docutils literal notranslate"><span class="pre">N</span></code> to
indicate a column is a primary key.</p>
+</dd>
+</dl>
+<p>In addition, the schema on the stream of results from a query will contain
+the following metadata keys on each field:</p>
+<dl class="simple">
+<dt><code class="docutils literal notranslate"><span
class="pre">logicalType</span></code></dt><dd><p>The Snowflake logical type of
this column. Will be one of <code class="docutils literal notranslate"><span
class="pre">fixed</span></code>,
+<code class="docutils literal notranslate"><span
class="pre">real</span></code>, <code class="docutils literal
notranslate"><span class="pre">text</span></code>, <code class="docutils
literal notranslate"><span class="pre">date</span></code>, <code
class="docutils literal notranslate"><span class="pre">variant</span></code>,
<code class="docutils literal notranslate"><span
class="pre">timestamp_ltz</span></code>, <code class="docutils literal
notranslate"><span class="pre">timestamp_ntz< [...]
+<code class="docutils literal notranslate"><span
class="pre">timestamp_tz</span></code>, <code class="docutils literal
notranslate"><span class="pre">object</span></code>, <code class="docutils
literal notranslate"><span class="pre">array</span></code>, <code
class="docutils literal notranslate"><span class="pre">binary</span></code>,
<code class="docutils literal notranslate"><span
class="pre">time</span></code>, <code class="docutils literal
notranslate"><span class="pre">boolean</span [...]
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">precision</span></code></dt><dd><p>An integer representing the
Snowflake precision of the field.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">scale</span></code></dt><dd><p>An integer representing the
Snowflake scale of the values in this field.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">charLength</span></code></dt><dd><p>If a text field, this will be
equivalent to the <code class="docutils literal notranslate"><span
class="pre">VARCHAR(#)</span></code> parameter <code class="docutils literal
notranslate"><span class="pre">#</span></code>.</p>
+</dd>
+<dt><code class="docutils literal notranslate"><span
class="pre">byteLength</span></code></dt><dd><p>Will contain the length, in
bytes, of the raw data sent back from Snowflake
+regardless of the type of the field in Arrow.</p>
</dd>
</dl>
</section>
-<section id="transactions">
-<h3>Transactions<a class="headerlink" href="#transactions" title="Permalink to
this heading">#</a></h3>
-<p>The driver supports transactions. It will first check the server’s
-SqlInfo to determine whether this is supported. Otherwise,
-transaction-related ADBC APIs will return
-<a class="reference internal"
href="../../cpp/api/adbc.html#c.ADBC_STATUS_NOT_IMPLEMENTED"
title="ADBC_STATUS_NOT_IMPLEMENTED"><code class="xref c c-type docutils literal
notranslate"><span
class="pre">ADBC_STATUS_NOT_IMPLEMENTED</span></code></a>.</p>
+<section id="type-support">
+<h3>Type Support<a class="headerlink" href="#type-support" title="Permalink to
this heading">#</a></h3>
+<p>Because Snowflake types do not necessary match up 1-to-1 with Arrow types
+the following is what should be expected when requesting data. Any conversions
+indicated are done to ensure consistency of the stream of record batches.</p>
+<div class="table-wrapper docutils container">
+<table class="docutils align-default">
+<tbody>
+<tr class="row-odd"><td><p>Snowflake Type</p></td>
+<td><p>Arrow Type</p></td>
+<td><p>Notes</p></td>
+</tr>
+<tr class="row-even"><td><p>Integral Types</p></td>
+<td><p>Int64</p></td>
+<td><p>All integral types in snowflake are
+stored as 64-bit integers.</p></td>
+</tr>
+<tr class="row-odd"><td><p>Float/Double</p></td>
+<td><p>Float64</p></td>
+<td><p>Snowflake does not distinguish between
+float or double. All are 64-bit values</p></td>
+</tr>
+<tr class="row-even"><td><p>Decimal/Numeric</p></td>
+<td><p>Int64/Float64</p></td>
+<td><p>If Scale == 0 then Int64 is used, else
+Float64 is returned.</p></td>
+</tr>
+<tr class="row-odd"><td><p>Time</p></td>
+<td><p>Time64(ns)</p></td>
+<td><p>For ingestion, time32 will also work</p></td>
+</tr>
+<tr class="row-even"><td><p>Date</p></td>
+<td><p>Date32</p></td>
+<td><p>For ingestion, Date64 will also work</p></td>
+</tr>
+<tr class="row-odd"><td><p>Timestamp_LTZ
+Timestamp_NTZ
+Timestamp_TZ</p></td>
+<td><p>Timestamp(ns)</p></td>
+<td><p>Local time zone will be used.
+No timezone specified in Arrow type info
+Values will be converted to UTC</p></td>
+</tr>
+<tr class="row-even"><td><p>Variant
+Object
+Array</p></td>
+<td><p>String</p></td>
+<td><p>Snowflake does not provide nested type
+information. So each value will be a
+string, similar to JSON, which can be
+parsed. The <code class="docutils literal notranslate"><span
class="pre">logicalType</span></code> metadata key
+will contain the snowflake field type.</p></td>
+</tr>
+<tr class="row-odd"><td><p>Geography
+Geometry</p></td>
+<td><p>String</p></td>
+<td><p>There is no canonical Arrow type for
+these and snowflake returns them as
+strings.</p></td>
+</tr>
+</tbody>
+</table>
+</div>
</section>
</section>
</section>
@@ -531,26 +619,8 @@ transaction-related ADBC APIs will return
<footer>
<div class="related-pages">
- <a class="next-page" href="../java/index.html">
- <div class="page-info">
- <div class="context">
- <span>Next</span>
- </div>
- <div class="title">Java</div>
- </div>
- <svg class="furo-related-icon"><use
href="#svg-arrow-right"></use></svg>
- </a>
- <a class="prev-page" href="index.html">
- <svg class="furo-related-icon"><use
href="#svg-arrow-right"></use></svg>
- <div class="page-info">
- <div class="context">
- <span>Previous</span>
- </div>
-
- <div class="title">Go</div>
-
- </div>
- </a>
+
+
</div>
<div class="bottom-of-page">
<div class="left-details">
@@ -581,19 +651,21 @@ transaction-related ADBC APIs will return
<div class="toc-tree-container">
<div class="toc-tree">
<ul>
-<li><a class="reference internal" href="#">Flight SQL Driver</a><ul>
+<li><a class="reference internal" href="#">Snowflake Driver</a><ul>
<li><a class="reference internal" href="#installation">Installation</a></li>
-<li><a class="reference internal" href="#usage">Usage</a></li>
+<li><a class="reference internal" href="#usage">Usage</a><ul>
+<li><a class="reference internal" href="#uri-format">URI Format</a></li>
+</ul>
+</li>
<li><a class="reference internal" href="#supported-features">Supported
Features</a><ul>
<li><a class="reference internal"
href="#authentication">Authentication</a></li>
<li><a class="reference internal" href="#bulk-ingestion">Bulk
Ingestion</a></li>
-<li><a class="reference internal" href="#client-options">Client
Options</a></li>
-<li><a class="reference internal" href="#custom-call-headers">Custom Call
Headers</a></li>
-<li><a class="reference internal" href="#distributed-result-sets">Distributed
Result Sets</a></li>
-<li><a class="reference internal" href="#metadata">Metadata</a></li>
<li><a class="reference internal" href="#partitioned-result-sets">Partitioned
Result Sets</a></li>
-<li><a class="reference internal" href="#timeouts">Timeouts</a></li>
+<li><a class="reference internal" href="#performance">Performance</a></li>
<li><a class="reference internal" href="#transactions">Transactions</a></li>
+<li><a class="reference internal" href="#client-options">Client
Options</a></li>
+<li><a class="reference internal" href="#metadata">Metadata</a></li>
+<li><a class="reference internal" href="#type-support">Type Support</a></li>
</ul>
</li>
</ul>
diff --git a/main/objects.inv b/main/objects.inv
index 034e3e7..8a4e5b6 100644
Binary files a/main/objects.inv and b/main/objects.inv differ
diff --git a/main/python/api/adbc_driver_flightsql.html
b/main/python/api/adbc_driver_flightsql.html
index db77ecf..47f13a4 100644
--- a/main/python/api/adbc_driver_flightsql.html
+++ b/main/python/api/adbc_driver_flightsql.html
@@ -463,7 +463,7 @@ floating-point seconds).</p>
<p>Statement options specific to the Flight SQL driver.</p>
<dl class="py attribute">
<dt class="sig sig-object py"
id="adbc_driver_flightsql.StatementOptions.QUEUE_SIZE">
-<span class="sig-name descname"><span class="pre">QUEUE_SIZE</span></span><em
class="property"><span class="w"> </span><span class="p"><span
class="pre">=</span></span><span class="w"> </span><span
class="pre">'adbc.flight.sql.rpc.queue_size'</span></em><a class="headerlink"
href="#adbc_driver_flightsql.StatementOptions.QUEUE_SIZE" title="Permalink to
this definition">#</a></dt>
+<span class="sig-name descname"><span class="pre">QUEUE_SIZE</span></span><em
class="property"><span class="w"> </span><span class="p"><span
class="pre">=</span></span><span class="w"> </span><span
class="pre">'adbc.rpc.result_queue_size'</span></em><a class="headerlink"
href="#adbc_driver_flightsql.StatementOptions.QUEUE_SIZE" title="Permalink to
this definition">#</a></dt>
<dd><p>The number of batches to queue per partition. Defaults to 5.</p>
<p>This controls how much we read ahead on result sets.</p>
</dd></dl>
diff --git a/main/searchindex.js b/main/searchindex.js
index a70a9b8..e10e915 100644
--- a/main/searchindex.js
+++ b/main/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"docnames": ["cpp/api/adbc", "cpp/api/adbc_driver_manager",
"cpp/api/index", "cpp/concurrency", "cpp/driver_manager", "cpp/index",
"development/contributing", "development/nightly", "development/releasing",
"driver/cpp/index", "driver/cpp/postgresql", "driver/cpp/sqlite",
"driver/go/flight_sql", "driver/go/index", "driver/installation",
"driver/java/flight_sql", "driver/java/index", "driver/java/jdbc",
"driver/status", "faq", "format/comparison", "format/specification", [...]
\ No newline at end of file
+Search.setIndex({"docnames": ["cpp/api/adbc", "cpp/api/adbc_driver_manager",
"cpp/api/index", "cpp/concurrency", "cpp/driver_manager", "cpp/index",
"development/contributing", "development/nightly", "development/releasing",
"driver/cpp/index", "driver/cpp/postgresql", "driver/cpp/sqlite",
"driver/go/flight_sql", "driver/go/index", "driver/go/snowflake",
"driver/installation", "driver/java/flight_sql", "driver/java/index",
"driver/java/jdbc", "driver/status", "faq", "format/comparison", " [...]
\ No newline at end of file