This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/orc.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new abdf9ea  ORC-1071: Update adopters page (#985)
abdf9ea is described below

commit abdf9ea27a80abeb6c4f300ce5a4f0a03abe4615
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Thu Dec 30 18:34:58 2021 -0800

    ORC-1071: Update adopters page (#985)
---
 docs/adopters.html | 48 +++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 41 insertions(+), 7 deletions(-)

diff --git a/docs/adopters.html b/docs/adopters.html
index 0d09752..b812e4a 100644
--- a/docs/adopters.html
+++ b/docs/adopters.html
@@ -831,6 +831,33 @@ but with the ORC 1.1.0 release it is now easier than ever without pulling in
 Hive’s exec jar and all of its dependencies. OrcStruct now also implements
 WritableComparable and can be serialized through the MapReduce shuffle.</p>
 
+<h3 id="apache-spark"><a href="https://spark.apache.org/">Apache Spark</a></h3>
+
+<p>Apache Spark has
+<a href="https://databricks.com/blog/2015/07/16/joint-blog-post-bringing-orc-support-into-apache-spark.html">added support</a>
+for reading and writing ORC files with column projection and
+predicate push down.</p>
+
+<h3 id="apache-arrow"><a href="https://arrow.apache.org/">Apache Arrow</a></h3>
+
+<p>Apache Arrow supports reading and writing the
+<a href="https://arrow.apache.org/docs/index.html?highlight=orc#apache-arrow">ORC file format</a>.</p>
+
+<h3 id="apache-flink"><a href="https://flink.apache.org/">Apache Flink</a></h3>
+
+<p>Apache Flink supports the
+<a href="https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/table/formats/orc/">ORC format in the Table API</a>
+for reading and writing ORC files.</p>
+
+<h3 id="apache-iceberg"><a href="https://iceberg.apache.org/">Apache Iceberg</a></h3>
+
+<p>Apache Iceberg supports the
+<a href="https://iceberg.apache.org/#spec/#orc">ORC spec</a> to store table data in ORC files.</p>
+
+<h3 id="apache-druid"><a href="https://druid.apache.org/">Apache Druid</a></h3>
+
+<p>Apache Druid provides an
+<a href="https://druid.apache.org/docs/0.22.1/development/extensions-core/orc.html#orc-extension">ORC extension</a>
+to ingest and understand the Apache ORC data format.</p>
+
 <h3 id="apache-hive"><a href="https://hive.apache.org/">Apache Hive</a></h3>
 
 <p>Apache Hive was the original use case and home for ORC.  ORC’s strong
@@ -839,6 +866,12 @@ down, and vectorization support make Hive <a href="https://hortonworks.com/blog/
 better</a>
 than any other format for your data.</p>
 
+<h3 id="apache-gobblin"><a href="https://gobblin.apache.org/">Apache Gobblin</a></h3>
+
+<p>Apache Gobblin supports
+<a href="https://gobblin.apache.org/docs/case-studies/Writing-ORC-Data/">writing data to ORC files</a>
+by leveraging Apache Hive’s SerDe library.</p>
+
 <h3 id="apache-nifi"><a href="https://nifi.apache.org/">Apache Nifi</a></h3>
 
 <p>Apache Nifi is <a href="https://issues.apache.org/jira/browse/NIFI-1663">adding
@@ -850,13 +883,6 @@ ORC files.</p>
 <p>Apache Pig added support for reading and writing ORC files in <a href="https://hortonworks.com/blog/announcing-apache-pig-0-14-0/">Pig
 0.14.0</a>.</p>
 
-<h3 id="apache-spark"><a href="https://spark.apache.org/">Apache Spark</a></h3>
-
-<p>Apache Spark has
-<a href="https://databricks.com/blog/2015/07/16/joint-blog-post-bringing-orc-support-into-apache-spark.html">added support</a>
-for reading and writing ORC files with support for column project and
-predicate push down.</p>
-
 <h3 id="eel"><a href="https://github.com/51zero/eel-sdk">EEL</a></h3>
 
 <p>EEL is a Scala BigData API that supports reading and writing data for
@@ -875,6 +901,14 @@ or directly into Hive tables backed by an ORC file format.</p>
 <p>With more than 300 PB of data, Facebook was an <a href="https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/">early
 adopter of
 ORC</a> and quickly put it into production.</p>
 
+<h3 id="linkedin"><a href="https://linkedin.com">LinkedIn</a></h3>
+
+<p>LinkedIn uses
+<a href="https://engineering.linkedin.com/blog/2021/fastingest-low-latency-gobblin">the ORC file format</a>
+with the Apache Iceberg metadata catalog and Apache Gobblin to provide its data
+customers with high query performance.</p>
+
 <h3 id="trino-formerly-presto-sql"><a href="https://trino.io/">Trino (formerly Presto SQL)</a></h3>
 
 <p>The Trino team has done a lot of work <a href="https://code.facebook.com/posts/370832626374903/even-faster-data-at-the-speed-of-presto-orc/">integrating
