This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/arrow-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 834cb4c24ab Updating built site
834cb4c24ab is described below

commit 834cb4c24abfc70b069aa0d4d34b354e27c6ceaa
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Mon Feb 9 08:37:24 2026 +0000

    Updating built site
---
 .../2026/02/09/arrow-security-model}/index.html    |  97 ++++---
 blog/index.html                                    |  28 ++
 feed.xml                                           | 290 ++-------------------
 release/index.html                                 |   4 +-
 security/index.html                                |  13 +-
 5 files changed, 118 insertions(+), 314 deletions(-)

diff --git a/security/index.html b/blog/2026/02/09/arrow-security-model/index.html
similarity index 71%
copy from security/index.html
copy to blog/2026/02/09/arrow-security-model/index.html
index 4d2a2aba060..14597cee58a 100644
--- a/security/index.html
+++ b/blog/2026/02/09/arrow-security-model/index.html
@@ -6,25 +6,27 @@
     <meta name="viewport" content="width=device-width, initial-scale=1">
     <!-- The above meta tags *must* come first in the head; any other head 
content must come *after* these tags -->
     
-    <title>Security | Apache Arrow</title>
+    <title>Introducing a Security Model for Arrow | Apache Arrow</title>
     
 
     <!-- Begin Jekyll SEO tag v2.8.0 -->
 <meta name="generator" content="Jekyll v4.4.1" />
-<meta property="og:title" content="Security" />
+<meta property="og:title" content="Introducing a Security Model for Arrow" />
+<meta name="author" content="pmc" />
 <meta property="og:locale" content="en_US" />
-<meta name="description" content="Security" />
-<meta property="og:description" content="Security" />
-<link rel="canonical" href="https://arrow.apache.org/security/"; />
-<meta property="og:url" content="https://arrow.apache.org/security/"; />
+<meta name="description" content="We are thrilled to announce the official 
publication of a Security Model for Apache Arrow. The Arrow security model 
covers a core subset of the Arrow specifications: the Arrow Columnar Format, 
the Arrow C Data Interface and the Arrow IPC Format. It sets expectations and 
gives guidelines for handling data coming from untrusted sources. The 
specifications covered by the Arrow security model are building blocks for all 
the other Arrow specifications, such a [...]
+<meta property="og:description" content="We are thrilled to announce the 
official publication of a Security Model for Apache Arrow. The Arrow security 
model covers a core subset of the Arrow specifications: the Arrow Columnar 
Format, the Arrow C Data Interface and the Arrow IPC Format. It sets 
expectations and gives guidelines for handling data coming from untrusted 
sources. The specifications covered by the Arrow security model are building 
blocks for all the other Arrow specifications, [...]
+<link rel="canonical" 
href="https://arrow.apache.org/blog/2026/02/09/arrow-security-model/"; />
+<meta property="og:url" 
content="https://arrow.apache.org/blog/2026/02/09/arrow-security-model/"; />
 <meta property="og:site_name" content="Apache Arrow" />
 <meta property="og:image" 
content="https://arrow.apache.org/img/arrow-logo_horizontal_black-txt_white-bg.png";
 />
-<meta property="og:type" content="website" />
+<meta property="og:type" content="article" />
+<meta property="article:published_time" content="2026-02-09T00:00:00-05:00" />
 <meta name="twitter:card" content="summary_large_image" />
 <meta property="twitter:image" 
content="https://arrow.apache.org/img/arrow-logo_horizontal_black-txt_white-bg.png";
 />
-<meta property="twitter:title" content="Security" />
+<meta property="twitter:title" content="Introducing a Security Model for 
Arrow" />
 <script type="application/ld+json">
-{"@context":"https://schema.org","@type":"WebPage","description":"Security","headline":"Security","image":"https://arrow.apache.org/img/arrow-logo_horizontal_black-txt_white-bg.png","publisher":{"@type":"Organization","logo":{"@type":"ImageObject","url":"https://arrow.apache.org/img/logo.png"}},"url":"https://arrow.apache.org/security/"}</script>
+{"@context":"https://schema.org","@type":"BlogPosting","author":{"@type":"Person","name":"pmc"},"dateModified":"2026-02-09T00:00:00-05:00","datePublished":"2026-02-09T00:00:00-05:00","description":"We
 are thrilled to announce the official publication of a Security Model for 
Apache Arrow. The Arrow security model covers a core subset of the Arrow 
specifications: the Arrow Columnar Format, the Arrow C Data Interface and the 
Arrow IPC Format. It sets expectations and gives guidelines for ha [...]
 <!-- End Jekyll SEO tag -->
 
 
@@ -238,38 +240,53 @@
   </header>
 
   <div class="container p-4 pt-5">
-    <main role="main" class="pb-5">
-      <h1>Reporting Security Issues</h1>
-<p>Apache Arrow uses the standard process outlined by the <a 
href="https://www.apache.org/security/"; target="_blank" rel="noopener">Apache 
Security Team</a> for reporting vulnerabilities. Note that vulnerabilities 
should not be publicly disclosed until the project has responded.</p>
-<p>To report a possible security vulnerability, please email <a 
href="mailto:[email protected]";>[email protected]</a>.</p>
-<hr class="my-5">
-<h3>
-<a href="https://www.cve.org/CVERecord?id=CVE-2023-47248"; target="_blank" 
rel="noopener">CVE-2023-47248</a>: Arbitrary code execution when loading a 
malicious data file in PyArrow</h3>
-<p><strong>Severity</strong>: Critical</p>
-<p><strong>Vendor</strong>: The Apache Software Foundation</p>
-<p><strong>Versions affected</strong>: 0.14.0 to 14.0.0</p>
-<p><strong>Description</strong>: Deserialization of untrusted data in IPC and 
Parquet readers
-in PyArrow versions 0.14.0 to 14.0.0 allows arbitrary code execution.
-An application is vulnerable if it reads Arrow IPC, Feather or Parquet data
-from untrusted sources (for example user-supplied input files).</p>
-<p><strong>Mitigation</strong>: Upgrade to version 14.0.1 or greater. If not 
possible, use the
-provided <a href="https://pypi.org/project/pyarrow-hotfix/"; target="_blank" 
rel="noopener">hotfix package</a>.</p>
-<h3>
-<a href="https://www.cve.org/CVERecord?id=CVE-2019-12408"; target="_blank" 
rel="noopener">CVE-2019-12408</a>: Uninitialized Memory in C++ ArrayBuilder</h3>
-<p><strong>Severity</strong>: High</p>
-<p><strong>Vendor</strong>: The Apache Software Foundation</p>
-<p><strong>Versions affected</strong>: 0.14.x</p>
-<p><strong>Description</strong>: It was discovered that the C++ implementation 
(which underlies the R, Python and Ruby implementations) of Apache Arrow 0.14.0 
to 0.14.1 had an uninitialized memory bug when building arrays with null values 
in some cases. This can lead to uninitialized memory being unintentionally 
shared if Arrow Arrays are transmitted over the wire (for instance with Flight) 
or persisted in the streaming IPC and file formats.</p>
-<p><strong>Mitigation</strong>: Upgrade to version 0.15.1 or greater.</p>
-<h3>
-<a href="https://www.cve.org/CVERecord?id=CVE-2019-12410"; target="_blank" 
rel="noopener">CVE-2019-12410</a>: Uninitialized Memory in C++ Reading from 
Parquet</h3>
-<p><strong>Severity</strong>: High</p>
-<p><strong>Vendor</strong>: The Apache Software Foundation</p>
-<p><strong>Versions affected</strong>: 0.12.0 - 0.14.1</p>
-<p><strong>Description</strong>: While investigating UBSAN errors in <a 
href="https://github.com/apache/arrow/pull/5365"; target="_blank" 
rel="noopener">ARROW-6549</a> it was discovered Apache Arrow versions 0.12.0 to 
0.14.1 left memory Array data uninitialized when reading RLE null data from 
parquet. This affected the C++, Python, Ruby, and R implementations. The 
uninitialized memory could potentially be shared if Arrow data are transmitted over the 
wire (for instance with Flight) or persisted in t [...]
-<p><strong>Mitigation</strong>: Upgrade to version 0.15.1 or greater.</p>
+    <div class="col-md-8 mx-auto">
+      <main role="main" class="pb-5">
+        
+<h1>
+  Introducing a Security Model for Arrow
+</h1>
+<hr class="mt-4 mb-3">
 
-    </main>
+
+
+<p class="mb-4 pb-1">
+  <span class="badge badge-secondary">Published</span>
+  <span class="published mr-3">
+    09 Feb 2026
+  </span>
+  <br>
+  <span class="badge badge-secondary">By</span>
+  
+    <a class="mr-3" href="https://arrow.apache.org";>The Apache Arrow PMC (pmc) 
</a>
+  
+
+  
+</p>
+
+
+        <!--
+
+-->
+<p>We are thrilled to announce the official publication of a
+<a href="https://arrow.apache.org/docs/dev/format/Security.html";>Security 
Model</a> for Apache Arrow.</p>
+<p>The Arrow security model covers a core subset of the Arrow specifications:
+the <a href="https://arrow.apache.org/docs/dev/format/Columnar.html";>Arrow 
Columnar Format</a>,
+the <a 
href="https://arrow.apache.org/docs/dev/format/CDataInterface.html";>Arrow C 
Data Interface</a> and the
+<a 
href="https://arrow.apache.org/docs/dev/format/Columnar.html#serialization-and-interprocess-communication-ipc";>Arrow
 IPC Format</a>.
+It sets expectations and gives guidelines for handling data coming from
+untrusted sources.</p>
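+<p>As a rough illustration of these guidelines (a hedged sketch, not part of
+the security model itself), a Rust consumer might decode untrusted IPC bytes
+defensively along these lines, assuming the <code>arrow-ipc</code>
+<code>FileReader</code> and <code>ArrayData::validate_full</code> APIs:</p>
+<pre><code class="language-rust">use std::fs::File;
+use std::io::BufReader;
+use arrow_ipc::reader::FileReader;
+
+fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {
+    // Treat the file as untrusted: propagate errors instead of panicking.
+    let file = File::open("untrusted.arrow")?;
+    let reader = FileReader::try_new(BufReader::new(file), None)?;
+    for batch in reader {
+        let batch = batch?; // structurally invalid messages error out here
+        // Optional deep validation before handing the data onward.
+        for column in batch.columns() {
+            column.to_data().validate_full()?;
+        }
+    }
+    Ok(())
+}
+</code></pre>
+<p>The point is to treat decoding failures as expected, recoverable errors
+rather than trusting the input.</p>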
+<p>The specifications covered by the Arrow security model are building blocks 
for
+all the other Arrow specifications, such as Flight and ADBC.</p>
+<p>The ideas underlying the Arrow security model were informally shared between
+Arrow maintainers and have informed decisions for years, but they were left
+undocumented until now.</p>
+<p>Implementation-specific security considerations, such as proper API usage 
and
+runtime safety guarantees, will later be covered in the documentation of the
+respective implementations.</p>
+
+      </main>
+    </div>
 
     <hr>
 <footer class="footer">
diff --git a/blog/index.html b/blog/index.html
index c572914d2b8..26fb5604086 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -248,6 +248,34 @@
 
 
   
+  <p>
+    </p>
+<h3>
+      <a href="/blog/2026/02/09/arrow-security-model/">Introducing a Security 
Model for Arrow</a>
+    </h3>
+    
+    <p>
+    <span class="blog-list-date">
+      9 February 2026
+    </span>
+    </p>
+    
+We are thrilled to announce the official publication of a
+Security Model for Apache Arrow.
+The Arrow security model covers a core subset of the Arrow specifications:
+the Arrow Columnar Format,
+the Arrow C Data Interface and the
+Arrow IPC Format.
+It sets expectations and gives guidelines for handling data coming from
+untrusted sources.
+The speci...
+     
+    <a href="/blog/2026/02/09/arrow-security-model/">Read More →</a>
+
+  
+  
+
+  
   <p>
     </p>
 <h3>
diff --git a/feed.xml b/feed.xml
index 813f977675f..caa6af54107 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1,4 +1,22 @@
-<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.4.1">Jekyll</generator><link 
href="https://arrow.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://arrow.apache.org/"; rel="alternate" type="text/html" 
/><updated>2026-02-07T07:17:37-05:00</updated><id>https://arrow.apache.org/feed.xml</id><title
 type="html">Apache Arrow</title><subtitle>Apache Arrow is the universal 
columnar fo [...]
+<?xml version="1.0" encoding="utf-8"?><feed 
xmlns="http://www.w3.org/2005/Atom"; ><generator uri="https://jekyllrb.com/"; 
version="4.4.1">Jekyll</generator><link 
href="https://arrow.apache.org/feed.xml"; rel="self" type="application/atom+xml" 
/><link href="https://arrow.apache.org/"; rel="alternate" type="text/html" 
/><updated>2026-02-09T03:33:22-05:00</updated><id>https://arrow.apache.org/feed.xml</id><title
 type="html">Apache Arrow</title><subtitle>Apache Arrow is the universal 
columnar fo [...]
+
+-->
+<p>We are thrilled to announce the official publication of a
+<a href="https://arrow.apache.org/docs/dev/format/Security.html";>Security 
Model</a> for Apache Arrow.</p>
+<p>The Arrow security model covers a core subset of the Arrow specifications:
+the <a href="https://arrow.apache.org/docs/dev/format/Columnar.html";>Arrow 
Columnar Format</a>,
+the <a 
href="https://arrow.apache.org/docs/dev/format/CDataInterface.html";>Arrow C 
Data Interface</a> and the
+<a 
href="https://arrow.apache.org/docs/dev/format/Columnar.html#serialization-and-interprocess-communication-ipc";>Arrow
 IPC Format</a>.
+It sets expectations and gives guidelines for handling data coming from
+untrusted sources.</p>
+<p>The specifications covered by the Arrow security model are building blocks 
for
+all the other Arrow specifications, such as Flight and ADBC.</p>
+<p>The ideas underlying the Arrow security model were informally shared between
+Arrow maintainers and have informed decisions for years, but they were left
+undocumented until now.</p>
+<p>Implementation-specific security considerations, such as proper API usage 
and
+runtime safety guarantees, will later be covered in the documentation of the
+respective 
implementations.</p>]]></content><author><name>pmc</name></author><category 
term="arrow" /><category term="security" /><summary type="html"><![CDATA[We are 
thrilled to announce the official publication of a Security Model for Apache 
Arrow. The Arrow security model covers a core subset of the Arrow 
specifications: the Arrow Columnar Format, the Arrow C Data Interface and the 
Arrow IPC Format. It sets expectations and gives guidelines for handling data 
coming from untrusted sour [...]
 
 -->
 <p>The Apache Arrow team is pleased to announce the v18.5.1 release of Apache 
Arrow Go.
@@ -1115,272 +1133,4 @@ changelog</a>.</li>
 <li>For notes on the latest release of the <a 
href="https://github.com/apache/arrow-dotnet";>.NET
 implementation</a>, see the latest <a 
href="https://github.com/apache/arrow-dotnet/releases";>Arrow  .NET 
changelog</a>.</li>
 <li>For notes on the latest release of the <a 
href="https://github.com/apache/arrow-swift";>Swift implementation</a>, see the 
latest <a href="https://github.com/apache/arrow-swift/releases";>Arrow Swift 
changelog</a>.</li>
-</ul>]]></content><author><name>pmc</name></author><category term="release" 
/><summary type="html"><![CDATA[The Apache Arrow team is pleased to announce 
the 22.0.0 release. This release covers over 3 months of development work and 
includes 213 resolved issues on 255 distinct commits from 60 distinct 
contributors. See the Install Page to learn how to get the libraries for your 
platform. The release notes below are not exhaustive and only expose selected 
highlights of the release. Many oth [...]
-
--->
-<p><a href="https://crates.io/crates/arrow-avro";><code>arrow-avro</code></a>, 
a newly rewritten Rust crate that reads and writes <a 
href="https://avro.apache.org/";>Apache Avro</a> data directly as Arrow 
<code>RecordBatch</code>es, is now available. It supports <a 
href="https://avro.apache.org/docs/1.11.1/specification/#object-container-files";>Avro
 Object Container Files</a> (OCF), <a 
href="https://avro.apache.org/docs/1.11.1/specification/#single-object-encoding";>Single‑Object
 Encoding</ [...]
-<h2>Motivation</h2>
-<p>Apache Avro’s row‑oriented design is effective for encoding one record at a 
time, while Apache Arrow’s columnar layout is optimized for vectorized 
analytics. A major challenge lies in converting between these formats without 
reintroducing row‑wise overhead. Decoding Avro a row at a time and then 
building Arrow arrays incurs extra allocations and cache‑unfriendly access (the 
very costs Arrow is designed to avoid). In the real world, this overhead 
commonly shows up in analytical hot pat [...]
-<h3>Why not use the existing <code>apache-avro</code> crate?</h3>
-<p>Rust already has a mature, general‑purpose Avro crate, <a 
href="https://crates.io/crates/apache-avro";>apache-avro</a>. It reads and 
writes Avro records as Avro value types and provides Object Container File 
readers and writers. What it does not do is decode directly into Arrow arrays, 
so any Arrow integration must materialize rows and then build columns.</p>
-<p>What’s needed is a complementary approach that decodes column‑by‑column 
straight into Arrow builders and emits <code>RecordBatch</code>es. This would 
enable projection pushdown while keeping execution vectorized end to end. For 
projects such as <a href="https://datafusion.apache.org/";>Apache 
DataFusion</a>, access to a mature, upstream Arrow‑native reader and writer 
would help simplify the code path and reduce duplication.</p>
-<p>Modern pipelines heighten this need because <a 
href="https://www.confluent.io/blog/avro-kafka-data/";>Avro is also used on the 
wire</a>, not just in files. Kafka ecosystems commonly use Confluent’s Schema 
Registry framing, and many services adopt the Avro Single‑Object Encoding 
format. An approach that enables decoding straight into Arrow batches (rather 
than through per‑row values) would let downstream compute remain vectorized at 
streaming rates.</p>
-<h3>Why this matters</h3>
-<p>Apache Avro is a first‑class format across stream processors and cloud 
services:</p>
-<ul>
-<li>Confluent Schema Registry supports <a 
href="https://docs.confluent.io/platform/current/schema-registry/fundamentals/serdes-develop/serdes-avro.html";>Avro
 across multiple languages and tooling</a>.</li>
-<li>Apache Flink exposes an <a 
href="https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/table/formats/avro-confluent/";><code>avro-confluent</code>
 format for Kafka</a>.</li>
-<li>AWS Lambda <a 
href="https://aws.amazon.com/about-aws/whats-new/2025/06/aws-lambda-native-support-avro-protobuf-kafka-events/";>(June
 2025) added native handling for Avro‑formatted Kafka events</a> with Glue and 
Confluent Schema Registry integrations.</li>
-<li>Azure Event Hubs provides a <a 
href="https://learn.microsoft.com/en-us/azure/event-hubs/schema-registry-overview";>Schema
 Registry with Avro support</a> for Kafka‑compatible clients.</li>
-</ul>
-<p>In short: Arrow users encounter Avro both on disk (OCF) and on the wire 
(SOE). An Arrow‑first, vectorized reader/writer for OCF, SOE, and Confluent 
framing removes a pervasive bottleneck and keeps pipelines columnar 
end‑to‑end.</p>
-<h2>Introducing <code>arrow-avro</code></h2>
-<p><a 
href="https://github.com/apache/arrow-rs/tree/main/arrow-avro";><code>arrow-avro</code></a>
 is a high-performance Rust crate that converts between Avro and Arrow with a 
column‑first, batch‑oriented design. On the read side, it decodes Avro Object 
Container Files (OCF), Single‑Object Encoding (SOE), and the Confluent Schema 
Registry wire format directly into Arrow <code>RecordBatch</code>es. Meanwhile, 
the write path provides formats for encoding to OCF and SOE as well.</p>
-<p>The crate exposes two primary read APIs: a high-level <code>Reader</code> 
for OCF inputs and a low-level <code>Decoder</code> for streaming SOE frames. 
For SOE and Confluent/Apicurio frames, a <code>SchemaStore</code> is provided 
that resolves fingerprints or schema IDs to full Avro writer schemas, enabling 
schema evolution while keeping the decode path vectorized.</p>
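-<p>For the common OCF case, a minimal read loop might look like the sketch
-below (hedged: builder knobs such as <code>with_batch_size</code> and the
-iterator behavior of <code>Reader</code> are assumed from the documentation
-above):</p>
-<pre><code class="language-rust">use std::fs::File;
-use std::io::BufReader;
-use arrow_avro::reader::ReaderBuilder;
-
-fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {
-    // The writer schema comes from the OCF header; no SchemaStore is needed.
-    let file = BufReader::new(File::open("example.avro")?);
-    let reader = ReaderBuilder::new()
-        .with_batch_size(8192)
-        .build(file)?;
-    for batch in reader {
-        let batch = batch?;
-        println!("rows={}", batch.num_rows());
-    }
-    Ok(())
-}
-</code></pre>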
-<p>On the write side, <code>AvroWriter</code> produces OCF (including 
container‑level compression), while <code>AvroStreamWriter</code> produces 
framed Avro messages for Single‑Object or Confluent/Apicurio encodings, as 
configured via the <code>WriterBuilder::with_fingerprint_strategy(...)</code> 
knob.</p>
-<p>Configuration is intentionally minimal but practical. For instance, the 
<code>ReaderBuilder</code> exposes knobs covering both batch file ingestion and 
streaming systems without forcing format‑specific code paths.</p>
-<h3>How this mirrors Parquet in Arrow‑rs</h3>
-<p>If you have used Parquet with Arrow‑rs, you already know the pattern. The 
<code>parquet</code> crate exposes a <a 
href="https://docs.rs/parquet/latest/parquet/arrow/index.html";>parquet::arrow 
module</a> that reads and writes Arrow <code>RecordBatch</code>es directly. 
Most users reach for <code>ParquetRecordBatchReaderBuilder</code> when reading 
and <code>ArrowWriter</code> when writing. You choose columns up front, set a 
batch size, and the reader gives you Arrow batches that flow str [...]
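-<p>For comparison, the <code>parquet::arrow</code> read path described above
-looks roughly like this sketch using
-<code>ParquetRecordBatchReaderBuilder</code>:</p>
-<pre><code class="language-rust">use std::fs::File;
-use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
-
-fn main() -&gt; Result&lt;(), Box&lt;dyn std::error::Error&gt;&gt; {
-    let file = File::open("example.parquet")?;
-    // Pick a batch size up front; column projection could be added here too.
-    let reader = ParquetRecordBatchReaderBuilder::try_new(file)?
-        .with_batch_size(8192)
-        .build()?;
-    for batch in reader {
-        println!("rows={}", batch?.num_rows());
-    }
-    Ok(())
-}
-</code></pre>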
-<p><code>arrow‑avro</code> brings that same bridge to Avro. You get a single 
<code>ReaderBuilder</code> that can produce a <code>Reader</code> for OCF, or a 
streaming <code>Decoder</code> for on‑the‑wire frames. Both return Arrow 
<code>RecordBatch</code>es, which means engines can keep projection and 
filtering close to the reader and avoid building rows only to reassemble them 
back into columns later. For evolving streams, a small <code>SchemaStore</code> 
resolves fingerprints or ids bef [...]
-<p>The reason this pattern matters is straightforward. Arrow’s columnar format 
is designed for vectorized work and good cache locality. When a format reader 
produces Arrow batches directly, copies and branchy per‑row work are minimized, 
keeping downstream operators fast. That is the same story that made 
<code>parquet::arrow</code> popular in Rust, and it is what 
<code>arrow‑avro</code> now enables for Avro.</p>
-<h2>Architecture &amp; Technical Overview</h2>
-<div style="display: flex; gap: 16px; justify-content: center; align-items: 
flex-start; padding: 20px 15px;">
-<img src="/img/introducing-arrow-avro/arrow-avro-architecture.svg"
-        width="100%"
-        alt="High-level `arrow-avro` architecture"
-        style="background:#fff">
-</div>
-<p>At a high level, <a 
href="https://arrow.apache.org/rust/arrow_avro/index.html";>arrow-avro</a> 
splits cleanly into read and write paths built around Arrow 
<code>RecordBatch</code>es. The read side turns Avro (OCF files or framed byte 
streams) into batched Arrow arrays, while the write side takes Arrow batches 
and produces OCF files or streaming frames. When using an 
<code>AvroStreamWriter</code>, the framing (SOE or Confluent) is part of the 
stream output based on the configured finger [...]
-<p>On the <a 
href="https://arrow.apache.org/rust/arrow_avro/reader/index.html";>read</a> 
path, everything starts with the <a 
href="https://arrow.apache.org/rust/arrow_avro/reader/struct.ReaderBuilder.html";>ReaderBuilder</a>.
 A single builder can create a <a 
href="https://arrow.apache.org/rust/arrow_avro/reader/struct.Reader.html";>Reader</a>
 for Object Container Files (OCF) or a streaming <a 
href="https://arrow.apache.org/rust/arrow_avro/reader/struct.Decoder.html";>Decoder</a>
 for SOE/Conf [...]
-<p>When reading an OCF, the <code>Reader</code> parses a header and then 
iterates over blocks of encoded data. The header contains a metadata map with 
the embedded Avro schema and optional compression (i.e., <code>deflate</code>, 
<code>snappy</code>, <code>zstd</code>, <code>bzip2</code>, <code>xz</code>), 
plus a 16‑byte sync marker used to delimit blocks. Each subsequent OCF block 
then carries a row count and the encoded payload. The parsed OCF header and 
block structures are also encod [...]
-<p>On the <a 
href="https://arrow.apache.org/rust/arrow_avro/writer/index.html";>write</a> 
path, the <a 
href="https://arrow.apache.org/rust/arrow_avro/writer/struct.WriterBuilder.html";>WriterBuilder</a>
 produces either an <a 
href="https://arrow.apache.org/rust/arrow_avro/writer/type.AvroWriter.html";>AvroWriter</a>
 (OCF) or an <a 
href="https://arrow.apache.org/rust/arrow_avro/writer/type.AvroStreamWriter.html";>AvroStreamWriter</a>
 (SOE/Message). The <code>with_compression(...)</code> knob i [...]
-<p>Schema handling is centralized in the <a 
href="https://arrow.apache.org/rust/arrow_avro/schema/index.html";>schema</a> 
module. <a 
href="https://arrow.apache.org/rust/arrow_avro/schema/struct.AvroSchema.html";>AvroSchema</a>
 wraps a valid Avro Schema JSON string, supports computing a 
<code>Fingerprint</code>, and can be loaded into a <a 
href="https://arrow.apache.org/rust/arrow_avro/schema/struct.SchemaStore.html";>SchemaStore</a>
 as a writer schema. At runtime, the <code>Reader</code>/<c [...]
-<p>At the heart of <code>arrow-avro</code> is a type‑mapping 
<code>Codec</code> that the library uses to construct both encoders and 
decoders. The <code>Codec</code> captures, for every Avro field, how it maps to 
Arrow and how it should be encoded or decoded. The <code>Reader</code> logic 
builds a <code>Codec</code> per <em>(writer, reader)</em> schema pair, which 
the decoder later uses to vectorize parsing of Avro values directly into the 
correct Arrow builders. The <code>Writer</code>  [...]
-<p>Finally, by keeping container and stream framing (OCF vs. SOE) separate 
from encoding and decoding, the crate composes naturally with the rest of 
Arrow‑rs: you read or write Arrow <code>RecordBatch</code>es, pick OCF or SOE 
streams as needed, and wire up fingerprints only when you're on a streaming 
path. This results in a compact API surface that covers both batch files and 
high‑throughput streams without sacrificing columnar, vectorized execution.</p>
-<h2>Examples</h2>
-<h3>Decoding a Confluent-framed Kafka Stream</h3>
-<div class="language-rust highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code data-lang="rust"><span class="k">use</span> <span 
class="nn">arrow_avro</span><span class="p">::</span><span 
class="nn">reader</span><span class="p">::</span><span 
class="n">ReaderBuilder</span><span class="p">;</span>
-<span class="k">use</span> <span class="nn">arrow_avro</span><span 
class="p">::</span><span class="nn">schema</span><span class="p">::{</span>
-    <span class="n">SchemaStore</span><span class="p">,</span> <span 
class="n">AvroSchema</span><span class="p">,</span> <span 
class="n">Fingerprint</span><span class="p">,</span> <span 
class="n">FingerprintAlgorithm</span><span class="p">,</span> <span 
class="n">CONFLUENT_MAGIC</span>
-<span class="p">};</span>
-
-<span class="k">fn</span> <span class="nf">main</span><span 
class="p">()</span> <span class="k">-&gt;</span> <span 
class="nb">Result</span><span class="o">&lt;</span><span class="p">(),</span> 
<span class="nb">Box</span><span class="o">&lt;</span><span 
class="k">dyn</span> <span class="nn">std</span><span class="p">::</span><span 
class="nn">error</span><span class="p">::</span><span 
class="n">Error</span><span class="o">&gt;&gt;</span> <span class="p">{</span>
-    <span class="c1">// Register writer schema under Confluent id=1.</span>
-    <span class="k">let</span> <span class="k">mut</span> <span 
class="n">store</span> <span class="o">=</span> <span 
class="nn">SchemaStore</span><span class="p">::</span><span 
class="nf">new_with_type</span><span class="p">(</span><span 
class="nn">FingerprintAlgorithm</span><span class="p">::</span><span 
class="n">Id</span><span class="p">);</span>
-    <span class="n">store</span><span class="nf">.set</span><span 
class="p">(</span>
-        <span class="nn">Fingerprint</span><span class="p">::</span><span 
class="nf">Id</span><span class="p">(</span><span class="mi">1</span><span 
class="p">),</span>
-        <span class="nn">AvroSchema</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span 
class="s">r#"{"type":"record","name":"T","fields":[{"name":"x","type":"long"}]}"#</span><span
 class="nf">.into</span><span class="p">()),</span>
-    <span class="p">)</span><span class="o">?</span><span class="p">;</span>
-
-    <span class="c1">// Define reader schema to enable projection/schema 
evolution.</span>
-    <span class="k">let</span> <span class="n">reader_schema</span> <span 
class="o">=</span> <span class="nn">AvroSchema</span><span 
class="p">::</span><span class="nf">new</span><span class="p">(</span><span 
class="s">r#"{"type":"record","name":"T","fields":[{"name":"x","type":"long"}]}"#</span><span
 class="nf">.into</span><span class="p">());</span>
-
-    <span class="c1">// Build Decoder using reader and writer schemas</span>
-    <span class="k">let</span> <span class="k">mut</span> <span 
class="n">decoder</span> <span class="o">=</span> <span 
class="nn">ReaderBuilder</span><span class="p">::</span><span 
class="nf">new</span><span class="p">()</span>
-        <span class="nf">.with_reader_schema</span><span 
class="p">(</span><span class="n">reader_schema</span><span class="p">)</span>
-        <span class="nf">.with_writer_schema_store</span><span 
class="p">(</span><span class="n">store</span><span class="p">)</span>
-        <span class="nf">.build_decoder</span><span class="p">()</span><span 
class="o">?</span><span class="p">;</span>
-
-    <span class="c1">// Simulate one frame: magic 0x00 + 4‑byte big‑endian 
schema ID + Avro body (x=1 encoded as zig‑zag/VLQ).</span>
-    <span class="k">let</span> <span class="k">mut</span> <span 
class="n">frame</span> <span class="o">=</span> <span 
class="nn">Vec</span><span class="p">::</span><span class="nf">from</span><span 
class="p">(</span><span class="n">CONFLUENT_MAGIC</span><span 
class="p">);</span> <span class="n">frame</span><span 
class="nf">.extend_from_slice</span><span class="p">(</span><span 
class="o">&amp;</span><span class="mi">1u32</span><span 
class="nf">.to_be_bytes</span><span class="p">());</span [...]
-
-    <span class="c1">// Consume from decoder</span>
-    <span class="k">let</span> <span class="n">_consumed</span> <span 
class="o">=</span> <span class="n">decoder</span><span 
class="nf">.decode</span><span class="p">(</span><span 
class="o">&amp;</span><span class="n">frame</span><span class="p">)</span><span 
class="o">?</span><span class="p">;</span>
-    <span class="k">while</span> <span class="k">let</span> <span 
class="nf">Some</span><span class="p">(</span><span class="n">batch</span><span 
class="p">)</span> <span class="o">=</span> <span class="n">decoder</span><span 
class="nf">.flush</span><span class="p">()</span><span class="o">?</span> <span 
class="p">{</span>
-        <span class="nd">println!</span><span class="p">(</span><span 
class="s">"rows={}, cols={}"</span><span class="p">,</span> <span 
class="n">batch</span><span class="nf">.num_rows</span><span 
class="p">(),</span> <span class="n">batch</span><span 
class="nf">.num_columns</span><span class="p">());</span>
-    <span class="p">}</span>
-    <span class="nf">Ok</span><span class="p">(())</span>
-<span class="p">}</span>
-</code></pre></div></div>
-<p>The <code>SchemaStore</code> maps the incoming schema ID to the correct 
Avro writer schema so the decoder can perform projection/evolution against the 
reader schema. Confluent's wire format prefixes each message with a magic byte 
<code>0x00</code> followed by a big‑endian 4‑byte schema ID. After decoding 
Avro messages, the <code>Decoder::flush()</code> method yields Arrow 
<code>RecordBatch</code>es suitable for vectorized processing.</p>
-<p>A more advanced example can be found <a 
href="https://github.com/apache/arrow-rs/blob/main/arrow-avro/examples/decode_kafka_stream.rs";>here</a>.</p>
-<h3>Writing a Snappy Compressed Avro OCF file</h3>
-<div class="language-rust highlighter-rouge"><div class="highlight"><pre 
class="highlight"><code data-lang="rust"><span class="k">use</span> <span 
class="nn">arrow_array</span><span class="p">::{</span><span 
class="n">Int64Array</span><span class="p">,</span> <span 
class="n">RecordBatch</span><span class="p">};</span>
-<span class="k">use</span> <span class="nn">arrow_schema</span><span 
class="p">::{</span><span class="n">Schema</span><span class="p">,</span> <span 
class="n">Field</span><span class="p">,</span> <span 
class="n">DataType</span><span class="p">};</span>
-<span class="k">use</span> <span class="nn">arrow_avro</span><span 
class="p">::</span><span class="nn">writer</span><span 
class="p">::{</span><span class="n">Writer</span><span class="p">,</span> <span 
class="n">WriterBuilder</span><span class="p">};</span>
-<span class="k">use</span> <span class="nn">arrow_avro</span><span 
class="p">::</span><span class="nn">writer</span><span class="p">::</span><span 
class="nn">format</span><span class="p">::</span><span 
class="n">AvroOcfFormat</span><span class="p">;</span>
-<span class="k">use</span> <span class="nn">arrow_avro</span><span 
class="p">::</span><span class="nn">compression</span><span 
class="p">::</span><span class="n">CompressionCodec</span><span 
class="p">;</span>
-<span class="k">use</span> <span class="nn">std</span><span 
class="p">::{</span><span class="nn">sync</span><span class="p">::</span><span 
class="nb">Arc</span><span class="p">,</span> <span class="nn">fs</span><span 
class="p">::</span><span class="n">File</span><span class="p">,</span> <span 
class="nn">io</span><span class="p">::</span><span 
class="n">BufWriter</span><span class="p">};</span>
-
-<span class="k">fn</span> <span class="nf">main</span><span 
class="p">()</span> <span class="k">-&gt;</span> <span 
class="nb">Result</span><span class="o">&lt;</span><span class="p">(),</span> 
<span class="nb">Box</span><span class="o">&lt;</span><span 
class="k">dyn</span> <span class="nn">std</span><span class="p">::</span><span 
class="nn">error</span><span class="p">::</span><span 
class="n">Error</span><span class="o">&gt;&gt;</span> <span class="p">{</span>
-  <span class="k">let</span> <span class="n">schema</span> <span 
class="o">=</span> <span class="nn">Schema</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span class="nd">vec!</span><span 
class="p">[</span><span class="nn">Field</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span class="s">"id"</span><span 
class="p">,</span> <span class="nn">DataType</span><span 
class="p">::</span><span class="n">Int64</span><span cl [...]
-  <span class="k">let</span> <span class="n">batch</span> <span 
class="o">=</span> <span class="nn">RecordBatch</span><span 
class="p">::</span><span class="nf">try_new</span><span class="p">(</span>
-    <span class="nn">Arc</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span class="n">schema</span><span 
class="nf">.clone</span><span class="p">()),</span>
-    <span class="nd">vec!</span><span class="p">[</span><span 
class="nn">Arc</span><span class="p">::</span><span class="nf">new</span><span 
class="p">(</span><span class="nn">Int64Array</span><span 
class="p">::</span><span class="nf">from</span><span class="p">(</span><span 
class="nd">vec!</span><span class="p">[</span><span class="mi">1</span><span 
class="p">,</span><span class="mi">2</span><span class="p">,</span><span 
class="mi">3</span><span class="p">]))],</span>
-  <span class="p">)</span><span class="o">?</span><span class="p">;</span>
-  <span class="k">let</span> <span class="n">file</span> <span 
class="o">=</span> <span class="nn">File</span><span class="p">::</span><span 
class="nf">create</span><span class="p">(</span><span 
class="s">"target/example.avro"</span><span class="p">)</span><span 
class="o">?</span><span class="p">;</span>
-
-  <span class="c1">// Choose OCF block compression (e.g., None, Deflate, 
Snappy, Zstd)</span>
-  <span class="k">let</span> <span class="k">mut</span> <span 
class="n">writer</span><span class="p">:</span> <span 
class="n">Writer</span><span class="o">&lt;</span><span class="n">_</span><span 
class="p">,</span> <span class="n">AvroOcfFormat</span><span 
class="o">&gt;</span> <span class="o">=</span> <span 
class="nn">WriterBuilder</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span class="n">schema</span><span 
class="p">)</span>
-      <span class="nf">.with_compression</span><span class="p">(</span><span 
class="nf">Some</span><span class="p">(</span><span 
class="nn">CompressionCodec</span><span class="p">::</span><span 
class="n">Snappy</span><span class="p">))</span>
-      <span class="nf">.build</span><span class="p">(</span><span 
class="nn">BufWriter</span><span class="p">::</span><span 
class="nf">new</span><span class="p">(</span><span class="n">file</span><span 
class="p">))</span><span class="o">?</span><span class="p">;</span>
-  <span class="n">writer</span><span class="nf">.write</span><span 
class="p">(</span><span class="o">&amp;</span><span class="n">batch</span><span 
class="p">)</span><span class="o">?</span><span class="p">;</span>
-  <span class="n">writer</span><span class="nf">.finish</span><span 
class="p">()</span><span class="o">?</span><span class="p">;</span>
-  <span class="nf">Ok</span><span class="p">(())</span>
-<span class="p">}</span>
-</code></pre></div></div>
-<p>The example above configures an Avro OCF <code>Writer</code>. It constructs 
a <code>Writer&lt;_, AvroOcfFormat&gt;</code> using 
<code>WriterBuilder::new(schema)</code> and wraps a <code>File</code> in a 
<code>BufWriter</code> for efficient I/O. The call to 
<code>.with_compression(Some(CompressionCodec::Snappy))</code> enables 
block‑level snappy compression. Finally, <code>writer.write(&amp;batch)?</code> 
serializes the batch as an Avro encoded block, and 
<code>writer.finish()?</code>  [...]
-<h2>Alternatives &amp; Benchmarks</h2>
-<p>There are fundamentally two different approaches for bringing Avro into 
Arrow:</p>
-<ol>
-<li>The row‑centric approach, typical of general Avro libraries such as 
<code>apache-avro</code>, deserializes one record at a time into native Rust 
values (i.e., <code>Value</code> or Serde types) and then builds Arrow arrays 
from those values.</li>
-<li>The vectorized approach, which <code>arrow-avro</code> provides, decodes 
directly into Arrow builders/arrays and emits <code>RecordBatch</code>es, 
avoiding most per‑row overhead.</li>
-</ol>
-<p>This section compares the performance of both approaches using these <a 
href="https://github.com/jecsand838/arrow-rs/tree/blog-benches/arrow-avro/benches";>Criterion
 benchmarks</a>.</p>
-<h3>Read performance (1M)</h3>
-<div style="display: flex; gap: 16px; justify-content: center; align-items: 
flex-start; padding: 5px 5px;">
-<img src="/img/introducing-arrow-avro/read_violin_1m.svg"
-        width="100%"
-        alt="1M Row Read Violin Plot"
-        style="background:#fff">
-</div>
-<h3>Read performance (10K)</h3>
-<div style="display: flex; gap: 16px; justify-content: center; align-items: 
flex-start; padding: 5px 5px;">
-<img src="/img/introducing-arrow-avro/read_violin_10k.svg"
-        width="100%"
-        alt="10K Row Read Violin Plot"
-        style="background:#fff">
-</div>
-<h3>Write performance (1M)</h3>
-<div style="display: flex; gap: 16px; justify-content: center; align-items: 
flex-start; padding: 5px 5px;">
-<img src="/img/introducing-arrow-avro/write_violin_1m.svg"
-        width="100%"
-        alt="1M Row Write Violin Plot"
-        style="background:#fff">
-</div>
-<h3>Write performance (10K)</h3>
-<div style="display: flex; gap: 16px; justify-content: center; align-items: 
flex-start; padding: 5px 5px;">
-<img src="/img/introducing-arrow-avro/write_violin_10k.svg"
-        width="100%"
-        alt="10K Row Write Violin Plot"
-        style="background:#fff">
-</div>
-<p>Across benchmarks, the violin plots show lower medians and tighter spreads 
for <code>arrow-avro</code> on both read and write paths. The gap widens when 
per‑row work dominates (i.e., 10K‑row scenarios). At 1M rows, the distributions 
remain favorable to <code>arrow-avro</code>, reflecting better cache locality 
and fewer copies once decoding goes straight to Arrow arrays. The general 
behavior is consistent with <code>apache-avro</code>'s record‑by‑record 
iteration and <code>arrow-avro</ [...]
-<p>The list below describes the cases reported in the figures and in the 
results table that follows:</p>
-<ul>
-<li>10K vs 1M rows for multiple data shapes.</li>
-<li><strong>Read cases:</strong>
-<ul>
-<li><code>f8</code>: <em>Full schema, 8K batch size.</em>
-Decode all four columns with batch_size = 8192.</li>
-<li><code>f1</code>: <em>Full schema, 1K batch size.</em>
-Decode all four columns with batch_size = 1024.</li>
-<li><code>p8</code>: <em>Projected <code>{id,name}</code>, 8K batch size 
(pushdown).</em>
Decode only <code>id</code> and <code>name</code> with batch_size = 8192.
-<em>How projection is applied:</em>
-<ul>
-<li><code>arrow-avro/p8</code>: projection via reader schema 
(<code>ReaderBuilder::with_reader_schema(...)</code>) so decoding is 
column‑pushed down in the Arrow‑first reader.</li>
-<li><code>apache-avro/p8</code>: projection via Avro reader schema 
(<code>AvroReader::with_schema(...)</code>) so the Avro library decodes only 
the projected fields.</li>
-</ul>
-</li>
-<li><code>np</code>: <em>Projected <code>{id,name}</code>, no pushdown, 8K 
batch size.</em>
-Both readers decode the full record (all four columns), materialize all 
arrays, then project down to <code>{id,name}</code> after decode. This models 
systems that can't push projection into the file/codec reader.</li>
-</ul>
-</li>
-<li><strong>Write cases:</strong>
-<ul>
-<li><code>c</code> (cold): <em>Schema conversion each iteration.</em></li>
-<li><code>h</code> (hot): <em>Avro JSON &quot;hot&quot; path.</em></li>
-</ul>
-</li>
-<li>The resulting Apache‑Avro vs Arrow‑Avro medians with the computed 
speedup.</li>
-</ul>
-<h3>Benchmark Median Time Results (Apple Silicon Mac)</h3>
-<table>
-<thead>
-<tr>
-<th>Case</th>
-<th align="right">apache-avro median</th>
-<th align="right">arrow-avro median</th>
-<th align="right">speedup</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>R/f8/10K</td>
-<td align="right">2.60 ms</td>
-<td align="right">0.24 ms</td>
-<td align="right">10.83x</td>
-</tr>
-<tr>
-<td>R/p8/10K</td>
-<td align="right">7.91 ms</td>
-<td align="right">0.24 ms</td>
-<td align="right">32.95x</td>
-</tr>
-<tr>
-<td>R/f1/10K</td>
-<td align="right">2.65 ms</td>
-<td align="right">0.25 ms</td>
-<td align="right">10.60x</td>
-</tr>
-<tr>
-<td>R/np/10K</td>
-<td align="right">2.62 ms</td>
-<td align="right">0.25 ms</td>
-<td align="right">10.48x</td>
-</tr>
-<tr>
-<td>R/f8/1M</td>
-<td align="right">267.21 ms</td>
-<td align="right">27.91 ms</td>
-<td align="right">9.57x</td>
-</tr>
-<tr>
-<td>R/p8/1M</td>
-<td align="right">791.79 ms</td>
-<td align="right">26.28 ms</td>
-<td align="right">30.13x</td>
-</tr>
-<tr>
-<td>R/f1/1M</td>
-<td align="right">262.93 ms</td>
-<td align="right">28.25 ms</td>
-<td align="right">9.31x</td>
-</tr>
-<tr>
-<td>R/np/1M</td>
-<td align="right">268.79 ms</td>
-<td align="right">27.69 ms</td>
-<td align="right">9.71x</td>
-</tr>
-<tr>
-<td>W/c/10K</td>
-<td align="right">4.78 ms</td>
-<td align="right">0.27 ms</td>
-<td align="right">17.70x</td>
-</tr>
-<tr>
-<td>W/h/10K</td>
-<td align="right">0.82 ms</td>
-<td align="right">0.28 ms</td>
-<td align="right">2.93x</td>
-</tr>
-<tr>
-<td>W/c/1M</td>
-<td align="right">485.58 ms</td>
-<td align="right">36.97 ms</td>
-<td align="right">13.13x</td>
-</tr>
-<tr>
-<td>W/h/1M</td>
-<td align="right">83.58 ms</td>
-<td align="right">36.75 ms</td>
-<td align="right">2.27x</td>
-</tr>
-</tbody>
-</table>
-<h2>Closing</h2>
-<p><code>arrow-avro</code> brings a purpose‑built, vectorized bridge 
connecting Arrow-rs and Avro that covers Object Container Files (OCF), 
Single‑Object Encoding (SOE), and the Confluent/Apicurio Schema Registry wire 
formats. This means you can now keep your ingestion paths columnar for both 
batch files and streaming systems. The reader and writer APIs shown above are 
now available for you to use with the v57.0.0 release of 
<code>arrow-rs</code>.</p>
-<p>This work is part of the ongoing Arrow‑rs effort to implement first-class 
Avro support in Rust. We'd love your feedback on real‑world use-cases, 
workloads, and integrations. We also welcome contributions, whether that's 
issues, benchmarks, or PRs. To follow along or help, open an <a 
href="https://github.com/apache/arrow-rs/issues";>issue on GitHub</a> and/or 
track <a href="https://github.com/apache/arrow-rs/issues/4886";>Add Avro 
Support</a> in <code>apache/arrow-rs</code>.</p>
-<h3>Acknowledgments</h3>
-<p>Special thanks to:</p>
-<ul>
-<li><a href="https://github.com/tustvold";>tustvold</a> for laying an 
incredible zero-copy foundation.</li>
-<li><a href="https://github.com/nathaniel-d-ef";>nathaniel-d-ef</a> and <a 
href="https://github.com/elastiflow";>ElastiFlow</a> for their numerous and 
invaluable project-wide contributions.</li>
-<li><a href="https://github.com/veronica-m-ef";>veronica-m-ef</a> for making 
Impala‑related contributions to the <code>Reader</code>.</li>
-<li><a href="https://github.com/Supermetal-Inc";>Supermetal</a> for 
contributions related to Apicurio Registry and Run-End Encoding type 
support.</li>
-<li><a href="https://github.com/kumarlokesh";>kumarlokesh</a> for contributing 
<code>Utf8View</code> support.</li>
-<li><a href="https://github.com/alamb";>alamb</a>, <a 
href="https://github.com/scovich";>scovich</a>, <a 
href="https://github.com/mbrobbel";>mbrobbel</a>, and <a 
href="https://github.com/klion26";>klion26</a> for their thoughtful reviews, 
detailed feedback, and support throughout the development of 
<code>arrow-avro</code>.</li>
-</ul>
-<p>If you have any questions about this blog post, please feel free to contact 
the author, <a href="mailto:[email protected]";>Connor 
Sanders</a>.</p>]]></content><author><name>jecsand838</name></author><category 
term="application" /><summary type="html"><![CDATA[A new native Rust vectorized 
reader/writer for Avro to Arrow, with OCF, Single‑Object, and Confluent wire 
format support.]]></summary><media:thumbnail 
xmlns:media="http://search.yahoo.com/mrss/"; url="https://arrow.apache.org/img/ 
[...]
\ No newline at end of file
+</ul>]]></content><author><name>pmc</name></author><category term="release" 
/><summary type="html"><![CDATA[The Apache Arrow team is pleased to announce 
the 22.0.0 release. This release covers over 3 months of development work and 
includes 213 resolved issues on 255 distinct commits from 60 distinct 
contributors. See the Install Page to learn how to get the libraries for your 
platform. The release notes below are not exhaustive and only expose selected 
highlights of the release. Many oth [...]
\ No newline at end of file
diff --git a/release/index.html b/release/index.html
index 7e8560c2a48..74c879b5b7b 100644
--- a/release/index.html
+++ b/release/index.html
@@ -20,12 +20,12 @@
 <meta property="og:site_name" content="Apache Arrow" />
 <meta property="og:image" 
content="https://arrow.apache.org/img/arrow-logo_horizontal_black-txt_white-bg.png";
 />
 <meta property="og:type" content="article" />
-<meta property="article:published_time" content="2026-02-07T07:17:37-05:00" />
+<meta property="article:published_time" content="2026-02-09T03:33:22-05:00" />
 <meta name="twitter:card" content="summary_large_image" />
 <meta property="twitter:image" 
content="https://arrow.apache.org/img/arrow-logo_horizontal_black-txt_white-bg.png";
 />
 <meta property="twitter:title" content="Releases" />
 <script type="application/ld+json">
-{"@context":"https://schema.org","@type":"BlogPosting","dateModified":"2026-02-07T07:17:37-05:00","datePublished":"2026-02-07T07:17:37-05:00","description":"Apache
 Arrow Releases Navigate to the release page for downloads and the changelog. 
23.0.0 (18 January 2026) 22.0.0 (24 October 2025) 21.0.0 (17 July 2025) 20.0.0 
(27 April 2025) 19.0.1 (16 February 2025) 19.0.0 (16 January 2025) 18.1.0 (24 
November 2024) 18.0.0 (28 October 2024) 17.0.0 (16 July 2024) 16.1.0 (14 May 
2024) 16.0.0 (20  [...]
+{"@context":"https://schema.org","@type":"BlogPosting","dateModified":"2026-02-09T03:33:22-05:00","datePublished":"2026-02-09T03:33:22-05:00","description":"Apache
 Arrow Releases Navigate to the release page for downloads and the changelog. 
23.0.0 (18 January 2026) 22.0.0 (24 October 2025) 21.0.0 (17 July 2025) 20.0.0 
(27 April 2025) 19.0.1 (16 February 2025) 19.0.0 (16 January 2025) 18.1.0 (24 
November 2024) 18.0.0 (28 October 2024) 17.0.0 (16 July 2024) 16.1.0 (14 May 
2024) 16.0.0 (20  [...]
 <!-- End Jekyll SEO tag -->
 
 
diff --git a/security/index.html b/security/index.html
index 4d2a2aba060..b2b1fe03cc2 100644
--- a/security/index.html
+++ b/security/index.html
@@ -240,8 +240,17 @@
   <div class="container p-4 pt-5">
     <main role="main" class="pb-5">
       <h1>Reporting Security Issues</h1>
-<p>Apache Arrow uses the standard process outlined by the <a 
href="https://www.apache.org/security/"; target="_blank" rel="noopener">Apache 
Security Team</a> for reporting vulnerabilities. Note that vulnerabilities 
should not be publicly disclosed until the project has responded.</p>
-<p>To report a possible security vulnerability, please email <a 
href="mailto:[email protected]";>[email protected]</a>.</p>
+<p>We take security seriously and would like our project to be as robust and
+dependable as possible. If you believe you have found a security bug, please do
+not file a public issue.</p>
+<p>First, please carefully read the Apache Arrow
+<a href="https://arrow.apache.org/docs/dev/format/Security.html";>Security 
Model</a>
+and understand its implications for untrusted data, as some apparent security
+issues can actually be usage issues.</p>
+<p>Second, please follow the standard <a 
href="https://apache.org/security/#reporting-a-vulnerability"; target="_blank" 
rel="noopener">vulnerability reporting process</a>
+outlined by the Apache Software Foundation. We will assess your report, follow
+up with our evaluation of the issue, and fix it as soon as possible if we deem
+it to be an actual security vulnerability.</p>
 <hr class="my-5">
 <h3>
 <a href="https://www.cve.org/CVERecord?id=CVE-2023-47248"; target="_blank" 
rel="noopener">CVE-2023-47248</a>: Arbitrary code execution when loading a 
malicious data file in PyArrow</h3>
