nealrichardson commented on a change in pull request #55:
URL: https://github.com/apache/arrow-site/pull/55#discussion_r412496764



##########
File path: _posts/2020-04-21-0.17.0-release.md
##########
@@ -0,0 +1,197 @@
+---
+layout: post
+title: "Apache Arrow 0.17.0 Release"
+date: "2020-04-21 00:00:00 -0600"
+author: pmc
+categories: [release]
+---
+<!--
+{% comment %}
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to you under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+{% endcomment %}
+-->
+
+
+The Apache Arrow team is pleased to announce the 0.17.0 release. This covers
+over 2 months of development work and includes [**569 resolved issues**][1]
+from [**79 distinct contributors**][2]. See the Install Page to learn how to
+get the libraries for your platform.
+
+The release notes below are not exhaustive and only expose selected highlights
+of the release. Many other bugfixes and improvements have been made: we refer
+you to the [complete changelog][3].
+
+## Community
+
+Since the 0.16.0 release, two committers have joined the Project Management
+Committee (PMC):
+
+* [Neal Richardson][4]
+* [François Saint-Jacques][5]
+
+Thank you for all your contributions!
+
+## Columnar Format Notes
+
+A [C-level Data Interface][6] was designed to ease data sharing inside a 
single process. It allows different runtimes or libraries to share Arrow data 
using a well-known binary layout and metadata representation, without any 
copies. The C++ library now includes an implementation of this specification, 
and Python and R have bindings to that implementation.
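As a rough illustration from the Python side, a zero-copy hand-off between two consumers in the same process might look like the sketch below. The `_export_to_c`/`_import_from_c` hooks and the `pyarrow.cffi` helper it relies on are private implementation details of the bindings, so treat the exact spelling as an assumption rather than a stable API.

```python
# Hedged sketch: share an Arrow array across library boundaries in-process
# using the C data interface, without copying any buffers.
import pyarrow as pa
from pyarrow.cffi import ffi  # cffi definitions of the two C structs

# Allocate the ArrowSchema/ArrowArray structs defined by the specification.
c_schema = ffi.new("struct ArrowSchema*")
c_array = ffi.new("struct ArrowArray*")
schema_ptr = int(ffi.cast("uintptr_t", c_schema))
array_ptr = int(ffi.cast("uintptr_t", c_array))

# Producer side: export the type and the array (buffers are not copied).
arr = pa.array([1, 2, None, 4])
arr.type._export_to_c(schema_ptr)
arr._export_to_c(array_ptr)

# Consumer side: import the same buffers back through the C structs.
arr2 = pa.Array._import_from_c(array_ptr, schema_ptr)
assert arr2.equals(arr)
```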
+
+## Arrow Flight RPC notes
+
+* Adopted the new DoExchange bi-directional data RPC.
+* ListFlights now accepts a Criteria argument in Java/C++/Python, which lets 
applications search for flights satisfying a given query (see the sketch after 
this list).
+* Custom metadata can be attached to errors that the server sends to the 
client, which can be used to encode richer application-specific information.
+* A number of minor bugs were fixed, including proper handling of empty null 
arrays in Java and round-tripping of certain Arrow status codes in C++/Python.
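A hedged sketch of the Criteria support from Python; the endpoint address and the meaning of the bytes are made up, and the server decides how to interpret them:

```python
import pyarrow.flight as flight

# Connect to a hypothetical Flight server.
client = flight.connect("grpc://localhost:8815")

# Pass an opaque, application-defined criteria so the server only returns
# matching flights.
for info in client.list_flights(criteria=b"recent_only"):
    print(info.descriptor, info.total_records)
```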
+
+## C++ notes
+
+### Feather V2
+
+The "Feather V2" format based on the Arrow IPC file format was developed.
+Feather v2 features full support for all Arrow data types,
+and resolves the 2GB per-column limitation for large amounts of string data 
that
+the [original Feather implementation][13] had.
+Feather V2 also introduces experimental IPC message compression using LZ4 
frame format or ZSTD. This will be formalized later in the Arrow format.
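A minimal sketch of the new options from Python, assuming the `version` and `compression` arguments exposed in this release:

```python
import pyarrow as pa
import pyarrow.feather as feather

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", None]})

# version=2 selects the new Arrow-IPC-based format; compression is optional
# and experimental ("lz4" or "zstd").
feather.write_feather(table, "data.feather", version=2, compression="zstd")

df = feather.read_feather("data.feather")  # returns a pandas DataFrame
```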
+
+### C++ Datasets
+
+TODO: SUMMARIZE C++ DATASETS WORK

Review comment:
       @fsaintjacques 

##########
File path: _posts/2020-04-21-0.17.0-release.md
##########
@@ -0,0 +1,197 @@
+---
+layout: post
+title: "Apache Arrow 0.17.0 Release"
+date: "2020-04-21 00:00:00 -0600"
+author: pmc
+categories: [release]
+---
+<!--
+{% comment %}
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to you under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+{% endcomment %}
+-->
+
+
+The Apache Arrow team is pleased to announce the 0.17.0 release. This covers
+over 2 months of development work and includes [**569 resolved issues**][1]
+from [**79 distinct contributors**][2]. See the Install Page to learn how to
+get the libraries for your platform.
+
+The release notes below are not exhaustive and only expose selected highlights
+of the release. Many other bugfixes and improvements have been made: we refer
+you to the [complete changelog][3].
+
+## Community
+
+Since the 0.16.0 release, two committers have joined the Project Management
+Committee (PMC):
+
+* [Neal Richardson][4]
+* [François Saint-Jacques][5]
+
+Thank you for all your contributions!
+
+## Columnar Format Notes
+
+A [C-level Data Interface][6] was designed to ease data sharing inside a 
single process. It allows different runtimes or libraries to share Arrow data 
using a well-known binary layout and metadata representation, without any 
copies. The C++ library now includes an implementation of this specification, 
and Python and R have bindings to that implementation.
+
+## Arrow Flight RPC notes
+
+* Adopted new DoExchange bi-directional data RPC
+* ListFlights supports being passed a Criteria argument in Java/C++/Python. 
This allows applications to search for flights satisfying a given query.
+* Custom metadata can be attached to errors that the server sends to the 
client, which can be used to encode richer application-specific information.
+* A number of minor bugs were fixed, including proper handling of empty null 
arrays in Java and round-tripping of certain Arrow status codes in C++/Python.
+
+## C++ notes
+
+### Feather V2
+
+The "Feather V2" format based on the Arrow IPC file format was developed.
+Feather v2 features full support for all Arrow data types,
+and resolves the 2GB per-column limitation for large amounts of string data 
that
+the [original Feather implementation][13] had.
+Feather V2 also introduces experimental IPC message compression using LZ4 
frame format or ZSTD. This will be formalized later in the Arrow format.
+
+### C++ Datasets
+
+TODO: SUMMARIZE C++ DATASETS WORK
+
+### C++ Parquet notes
+
+* Support for writing nested types to the Parquet format is now complete (a 
short sketch follows this list). The legacy write path can still be selected 
through a Parquet writer option in C++ and an environment variable in Python. 
Read support for nested types will come in a future release.
+* The BYTE_STREAM_SPLIT encoding was implemented for floating-point types. It 
can improve compression efficiency for high-entropy floating-point data.
+* Expose Parquet schema field_id as Arrow field metadata
+* Support for DataPageV2
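A hedged sketch of writing a nested column from Python with the new write path; the column names and data are made up, and reading such columns back will be possible in a later release:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A table with a nested list<struct<x, y>> column.
table = pa.table({
    "id": [1, 2],
    "points": [
        [{"x": 1, "y": 2}, {"x": 3, "y": 4}],
        [{"x": 5, "y": 6}],
    ],
})
pq.write_table(table, "nested.parquet")
```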
+
+### C++ build notes
+
+* We continued to make the core C++ library build simpler and faster. Among 
the improvements is the removal of the build-time dependency on the Thrift 
compiler; while Parquet still requires `thrift-cpp`, its dependencies are much 
lighter. We also further reduced the number of build configurations that 
require `boost`, and when `boost` does need to be built, we only download the 
components we need, reducing the size of the `boost` bundle by 90%.
+* TODO: ARM64 platform notes?
+* TODO: LLVM 8 upgrade?
+* Simplified SIMD build configuration with ARROW_SIMD_LEVEL option allowing no 
SIMD, SSE4.2, AVX2, or AVX512 to be selected.
+* Fixed a number of bugs affecting compilation on aarch64 platforms
+
+### Other C++ notes
+
+* Many crashes on invalid input detected by [OSS-Fuzz][7] in the IPC reader 
and in Parquet-Arrow reading were fixed. See our recent [blog post][8] for more 
details.
+* A “Device” abstraction was added to simplify buffer management and movement 
across heterogeneous hardware configurations, e.g. CPUs and GPUs.
+* A streaming CSV reader was implemented, yielding individual RecordBatches 
and helping limit overall memory occupation.
+* Array casting from Decimal128 to integer types and to Decimal128 with 
different scale/precision was added.
+* Sparse CSF tensors are now supported.
+* When creating an Array, the null bitmap is not kept if the null count is 
known to be zero (a small illustration follows this list).
+* Compressor support for the LZ4 frame format (LZ4_FRAME) was added
+* An event-driven interface for reading IPC streams was added.
+* Further core APIs that required passing an explicit out-parameter were 
migrated to Result<T>.
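A small illustration of the buffer layout the null-bitmap item above refers to, observed from Python (a hedged sketch; whether a bitmap is allocated can also depend on how the array was built):

```python
import pyarrow as pa

# With no nulls, the validity-bitmap slot is simply absent (None) ...
assert pa.array([1, 2, 3]).buffers()[0] is None
# ... while an array containing nulls carries a real bitmap buffer.
assert pa.array([1, None, 3]).buffers()[0] is not None
```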
+
+TODO: New analytics kernels (match, sort indices / argsort, top-k)
+TODO: New Gandiva features?

Review comment:
       @praveenbingo @bkietz 

##########
File path: _posts/2020-04-21-0.17.0-release.md
##########
@@ -0,0 +1,197 @@
+---
+layout: post
+title: "Apache Arrow 0.17.0 Release"
+date: "2020-04-21 00:00:00 -0600"
+author: pmc
+categories: [release]
+---
+<!--
+{% comment %}
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to you under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+{% endcomment %}
+-->
+
+
+The Apache Arrow team is pleased to announce the 0.17.0 release. This covers
+over 2 months of development work and includes [**569 resolved issues**][1]
+from [**79 distinct contributors**][2]. See the Install Page to learn how to
+get the libraries for your platform.
+
+The release notes below are not exhaustive and only expose selected highlights
+of the release. Many other bugfixes and improvements have been made: we refer
+you to the [complete changelog][3].
+
+## Community
+
+Since the 0.16.0 release, two committers have joined the Project Management
+Committee (PMC):
+
+* [Neal Richardson][4]
+* [François Saint-Jacques][5]
+
+Thank you for all your contributions!
+
+## Columnar Format Notes
+
+A [C-level Data Interface][6] was designed to ease data sharing inside a 
single process. It allows different runtimes or libraries to share Arrow data 
using a well-known binary layout and metadata representation, without any 
copies. The C++ library now includes an implementation of this specification, 
and Python and R have bindings to that implementation.
+
+## Arrow Flight RPC notes
+
+* Adopted new DoExchange bi-directional data RPC
+* ListFlights supports being passed a Criteria argument in Java/C++/Python. 
This allows applications to search for flights satisfying a given query.
+* Custom metadata can be attached to errors that the server sends to the 
client, which can be used to encode richer application-specific information.
+* A number of minor bugs were fixed, including proper handling of empty null 
arrays in Java and round-tripping of certain Arrow status codes in C++/Python.
+
+## C++ notes
+
+### Feather V2
+
+The "Feather V2" format based on the Arrow IPC file format was developed.
+Feather v2 features full support for all Arrow data types,
+and resolves the 2GB per-column limitation for large amounts of string data 
that
+the [original Feather implementation][13] had.
+Feather V2 also introduces experimental IPC message compression using LZ4 
frame format or ZSTD. This will be formalized later in the Arrow format.
+
+### C++ Datasets
+
+TODO: SUMMARIZE C++ DATASETS WORK
+
+### C++ Parquet notes
+
+* Support for writing nested types to the Parquet format is now complete. The 
legacy write path can still be selected through a Parquet writer option in C++ 
and an environment variable in Python. Read support for nested types will come 
in a future release.
+* The BYTE_STREAM_SPLIT encoding was implemented for floating-point types. It 
can improve compression efficiency for high-entropy floating-point data.
+* Expose Parquet schema field_id as Arrow field metadata
+* Support for DataPageV2
+
+### C++ build notes
+
+* We continued to make the core C++ library build simpler and faster. Among 
the improvements is the removal of the build-time dependency on the Thrift 
compiler; while Parquet still requires `thrift-cpp`, its dependencies are much 
lighter. We also further reduced the number of build configurations that 
require `boost`, and when `boost` does need to be built, we only download the 
components we need, reducing the size of the `boost` bundle by 90%.
+* TODO: ARM64 platform notes?
+* TODO: LLVM 8 upgrade?
+* Simplified SIMD build configuration with ARROW_SIMD_LEVEL option allowing no 
SIMD, SSE4.2, AVX2, or AVX512 to be selected.
+* Fixed a number of bugs affecting compilation on aarch64 platforms
+
+### Other C++ notes
+
+* Many crashes on invalid input detected by [OSS-Fuzz][7] in the IPC reader 
and in Parquet-Arrow reading were fixed. See our recent [blog post][8] for more 
details.
+* A “Device” abstraction was added to simplify buffer management and movement 
across heterogeneous hardware configurations, e.g. CPUs and GPUs.
+* A streaming CSV reader was implemented, yielding individual RecordBatches 
and helping limit overall memory occupation.
+* Array casting from Decimal128 to integer types and to Decimal128 with 
different scale/precision was added.
+* Sparse CSF tensors are now supported.
+* When creating an Array, the null bitmap is not kept if the null count is 
known to be zero
+* Compressor support for the LZ4 frame format (LZ4_FRAME) was added
+* An event-driven interface for reading IPC streams was added.
+* Further core APIs that required passing an explicit out-parameter were 
migrated to Result<T>.
+
+TODO: New analytics kernels (match, sort indices / argsort, top-k)
+TODO: New Gandiva features?
+
+## Java notes
+
+* Netty dependencies were removed from the BufferAllocator and ReferenceManager 
classes. In the future, we plan to move the Netty-related classes to a separate 
module.
+* New features were added to support efficiently appending vector and 
VectorSchemaRoot values in batches.
+* Comparing a range of values in dense union vectors is now supported.
+* The quicksort algorithm was improved to avoid degenerating to its worst 
case.
+
+## Python notes
+
+### Datasets
+
+* The `pyarrow.dataset` module was updated to follow the changes in the C++ 
Datasets project. This release also adds [richer documentation][10] on the 
datasets module.
+* The improved dataset functionality is available in 
`pyarrow.parquet.read_table/ParquetDataset`. To enable it, pass 
`use_legacy_dataset=False`. Among other things, this allows specifying filters 
on all columns, not only the partition keys (using row group statistics), and 
enables different partitioning schemes. See the "note" in the
+[`ParquetDataset` documentation][11] and the sketch after this list.
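A hedged sketch of both entry points from Python; the paths, partitioning scheme, and column names are made up:

```python
import pyarrow.dataset as ds
import pyarrow.parquet as pq

# Discover a (possibly Hive-partitioned) directory of Parquet files.
dataset = ds.dataset("data/events/", format="parquet", partitioning="hive")

# Filters may now reference any column, not only partition keys.
table = dataset.to_table(
    columns=["user_id", "value"],
    filter=(ds.field("value") > 100) & (ds.field("year") == 2020),
)

# The same functionality through the familiar pyarrow.parquet entry point.
table2 = pq.read_table(
    "data/events/",
    use_legacy_dataset=False,
    filters=[("value", ">", 100)],
)
```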
+
+### Packaging
+
+* Wheels for Python 3.8 are now available
+* Support for Python 2.7 has been dropped
+* TODO: Summarize where to find out about Nightly Dev Builds

Review comment:
       @kszucs 

##########
File path: _posts/2020-04-21-0.17.0-release.md
##########
@@ -0,0 +1,197 @@
+---
+layout: post
+title: "Apache Arrow 0.17.0 Release"
+date: "2020-04-21 00:00:00 -0600"
+author: pmc
+categories: [release]
+---
+<!--
+{% comment %}
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to you under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+{% endcomment %}
+-->
+
+
+The Apache Arrow team is pleased to announce the 0.17.0 release. This covers
+over 2 months of development work and includes [**569 resolved issues**][1]
+from [**79 distinct contributors**][2]. See the Install Page to learn how to
+get the libraries for your platform.
+
+The release notes below are not exhaustive and only expose selected highlights
+of the release. Many other bugfixes and improvements have been made: we refer
+you to the [complete changelog][3].
+
+## Community
+
+Since the 0.16.0 release, two committers have joined the Project Management
+Committee (PMC):
+
+* [Neal Richardson][4]
+* [François Saint-Jacques][5]
+
+Thank you for all your contributions!
+
+## Columnar Format Notes
+
+A [C-level Data Interface][6] was designed to ease data sharing inside a 
single process. It allows different runtimes or libraries to share Arrow data 
using a well-known binary layout and metadata representation, without any 
copies. The C++ library now includes an implementation of this specification, 
and Python and R have bindings to that implementation.
+
+## Arrow Flight RPC notes
+
+* Adopted new DoExchange bi-directional data RPC
+* ListFlights supports being passed a Criteria argument in Java/C++/Python. 
This allows applications to search for flights satisfying a given query.
+* Custom metadata can be attached to errors that the server sends to the 
client, which can be used to encode richer application-specific information.
+* A number of minor bugs were fixed, including proper handling of empty null 
arrays in Java and round-tripping of certain Arrow status codes in C++/Python.
+
+## C++ notes
+
+### Feather V2
+
+The "Feather V2" format based on the Arrow IPC file format was developed.
+Feather v2 features full support for all Arrow data types,
+and resolves the 2GB per-column limitation for large amounts of string data 
that
+the [original Feather implementation][13] had.
+Feather V2 also introduces experimental IPC message compression using LZ4 
frame format or ZSTD. This will be formalized later in the Arrow format.
+
+### C++ Datasets
+
+TODO: SUMMARIZE C++ DATASETS WORK
+
+### C++ Parquet notes
+
+* Support for writing nested types to the Parquet format is now complete. The 
legacy write path can still be selected through a Parquet writer option in C++ 
and an environment variable in Python. Read support for nested types will come 
in a future release.
+* The BYTE_STREAM_SPLIT encoding was implemented for floating-point types. It 
can improve compression efficiency for high-entropy floating-point data.
+* Expose Parquet schema field_id as Arrow field metadata
+* Support for DataPageV2
+
+### C++ build notes
+
+* We continued to make the core C++ library build simpler and faster. Among 
the improvements is the removal of the build-time dependency on the Thrift 
compiler; while Parquet still requires `thrift-cpp`, its dependencies are much 
lighter. We also further reduced the number of build configurations that 
require `boost`, and when `boost` does need to be built, we only download the 
components we need, reducing the size of the `boost` bundle by 90%.
+* TODO: ARM64 platform notes?
+* TODO: LLVM 8 upgrade?

Review comment:
       @kou?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

