This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/arrow-datafusion.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 3767afa3c1 Publish built docs triggered by bd1c76c7c8e91463daf56c576094e12e4705c8cd
3767afa3c1 is described below

commit 3767afa3c1e55f998d38d10b4ee6927cfb6894a1
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Feb 13 14:32:54 2024 +0000

    Publish built docs triggered by bd1c76c7c8e91463daf56c576094e12e4705c8cd
---
 _sources/user-guide/cli.md.txt | 14 ++++++++++++--
 searchindex.js                 |  2 +-
 user-guide/cli.html            | 12 ++++++++++--
 3 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/_sources/user-guide/cli.md.txt b/_sources/user-guide/cli.md.txt
index a8a9d6f212..a94e2427ea 100644
--- a/_sources/user-guide/cli.md.txt
+++ b/_sources/user-guide/cli.md.txt
@@ -194,8 +194,9 @@ DataFusion CLI v16.0.0
 2 rows in set. Query took 0.007 seconds.
 ```
 
-You can also query directly from the remote location via HTTP(S) without
-registering the location as a table
+You can also query directly from any remote location supported by DataFusion without
+registering the location as a table.
+For example, to read from a remote parquet file via HTTP(S) you can use the following:
 
 ```sql
 select count(*) from 'https://datasets.clickhouse.com/hits_compatible/athena_partitioned/hits_1.parquet'
@@ -207,6 +208,15 @@ select count(*) from 'https://datasets.clickhouse.com/hits_compatible/athena_par
 1 row in set. Query took 0.595 seconds.
 ```
 
+To read from AWS S3 or GCS, use `s3` or `gs` as the protocol prefix. For example, the following reads a file
+in an S3 bucket named `my-data-bucket`. Note that this is not a real file location, so the query
+will fail; you need to use your own file location in S3. You also need to set the relevant access credentials
+as environment variables (e.g. for AWS S3 you need at least `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`).
+
+```sql
+select count(*) from 's3://my-data-bucket/athena_partitioned/hits.parquet'
+```
+
 ## Creating External Tables
 
 It is also possible to create a table backed by files by explicitly
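The credential setup described in the added doc text can be sketched as a shell session. This is a hedged illustration, not part of the committed docs: the key values below are placeholders, not real credentials, and `my-data-bucket` is the hypothetical bucket name from the docs.

```shell
# Placeholder credentials -- substitute your own IAM access key pair.
export AWS_ACCESS_KEY_ID='AKIAEXAMPLE'
export AWS_SECRET_ACCESS_KEY='example-secret'

# Depending on the bucket, a region setting may also be needed.
export AWS_REGION='us-east-1'

# With the variables exported, datafusion-cli can resolve s3:// URLs, e.g.:
# datafusion-cli -c "select count(*) from 's3://my-data-bucket/athena_partitioned/hits.parquet'"
```

Because the credentials are read from the environment, they never appear in the SQL itself.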
diff --git a/searchindex.js b/searchindex.js
index 792942b52f..b1e3cbc8ae 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"docnames": ["contributor-guide/architecture", 
"contributor-guide/communication", "contributor-guide/index", 
"contributor-guide/quarterly_roadmap", "contributor-guide/roadmap", 
"contributor-guide/specification/index", 
"contributor-guide/specification/invariants", 
"contributor-guide/specification/output-field-name-semantic", "index", 
"library-user-guide/adding-udfs", "library-user-guide/building-logical-plans", 
"library-user-guide/catalogs", "library-user-guide/custom-tab [...]
\ No newline at end of file
+Search.setIndex({"docnames": ["contributor-guide/architecture", 
"contributor-guide/communication", "contributor-guide/index", 
"contributor-guide/quarterly_roadmap", "contributor-guide/roadmap", 
"contributor-guide/specification/index", 
"contributor-guide/specification/invariants", 
"contributor-guide/specification/output-field-name-semantic", "index", 
"library-user-guide/adding-udfs", "library-user-guide/building-logical-plans", 
"library-user-guide/catalogs", "library-user-guide/custom-tab [...]
\ No newline at end of file
diff --git a/user-guide/cli.html b/user-guide/cli.html
index 6ff7d944ab..41812f7f96 100644
--- a/user-guide/cli.html
+++ b/user-guide/cli.html
@@ -653,8 +653,9 @@ DataFusion<span class="w"> </span>CLI<span class="w"> </span>v16.0.0
 <span class="m">2</span><span class="w"> </span>rows<span class="w"> </span><span class="k">in</span><span class="w"> </span>set.<span class="w"> </span>Query<span class="w"> </span>took<span class="w"> </span><span class="m">0</span>.007<span class="w"> </span>seconds.
 </pre></div>
 </div>
-<p>You can also query directly from the remote location via HTTP(S) without
-registering the location as a table</p>
+<p>You can also query directly from any remote location supported by DataFusion without
+registering the location as a table.
+For example, to read from a remote parquet file via HTTP(S) you can use the following:</p>
 <div class="highlight-sql notranslate"><div class="highlight"><pre><span></span><span class="k">select</span><span class="w"> </span><span class="k">count</span><span class="p">(</span><span class="o">*</span><span class="p">)</span><span class="w"> </span><span class="k">from</span><span class="w"> </span><span class="s1">&#39;https://datasets.clickhouse.com/hits_compatible/athena_partitioned/hits_1.parquet&#39;</span>
 <span class="o">+</span><span class="c1">----------+</span>
 <span class="o">|</span><span class="w"> </span><span class="k">COUNT</span><span class="p">(</span><span class="o">*</span><span class="p">)</span><span class="w"> </span><span class="o">|</span>
@@ -664,6 +665,13 @@ registering the location as a table</p>
 <span class="mi">1</span><span class="w"> </span><span class="k">row</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">set</span><span class="p">.</span><span class="w"> </span><span class="n">Query</span><span class="w"> </span><span class="n">took</span><span class="w"> </span><span class="mi">0</span><span class="p">.</span><span class="mi">595</span><span class="w"> </span><span class="n">seconds</span><span class="p">.</span>
 </pre></div>
 </div>
+<p>To read from AWS S3 or GCS, use <code class="docutils literal notranslate"><span class="pre">s3</span></code> or <code class="docutils literal notranslate"><span class="pre">gs</span></code> as the protocol prefix. For example, the following reads a file
+in an S3 bucket named <code class="docutils literal notranslate"><span class="pre">my-data-bucket</span></code>. Note that this is not a real file location, so the query
+will fail; you need to use your own file location in S3. You also need to set the relevant access credentials
+as environment variables (e.g. for AWS S3 you need at least <code class="docutils literal notranslate"><span class="pre">AWS_ACCESS_KEY_ID</span></code> and <code class="docutils literal notranslate"><span class="pre">AWS_SECRET_ACCESS_KEY</span></code>).</p>
+<div class="highlight-sql notranslate"><div class="highlight"><pre><span></span><span class="k">select</span><span class="w"> </span><span class="k">count</span><span class="p">(</span><span class="o">*</span><span class="p">)</span><span class="w"> </span><span class="k">from</span><span class="w"> </span><span class="s1">&#39;s3://my-data-bucket/athena_partitioned/hits.parquet&#39;</span>
+</pre></div>
+</div>
 </section>
 <section id="creating-external-tables">
 <h2>Creating External Tables<a class="headerlink" href="#creating-external-tables" title="Link to this heading">¶</a></h2>
