jonkeane commented on a change in pull request #11425:
URL: https://github.com/apache/arrow/pull/11425#discussion_r731826232



##########
File path: r/NEWS.md
##########
@@ -21,32 +21,62 @@
 
 There are now two ways to query Arrow data:
 
-## 1. Grouped aggregation in Arrow
+## 1. Expanded Arrow-native queries: aggregation and joins in Arrow
 
 `dplyr::summarize()`, both grouped and ungrouped, is now implemented for Arrow 
Datasets, Tables, and RecordBatches. Because data is scanned in chunks, you can 
aggregate over larger-than-memory datasets backed by many files. Supported 
aggregation functions include `n()`, `n_distinct()`, `min()`, `max()`, `sum()`, 
`mean()`, `var()`, `sd()`, `any()`, and `all()`. `median()` and `quantile()` 
with one probability are also supported and currently return approximate 
results using the t-digest algorithm.
 
+Along with `summarize()`, you can also call `count()`, `tally()`, and 
`distinct()`, which effectively wrap `summarize()`.
+
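
As an illustration, a grouped aggregation might look like this (a minimal 
sketch; `mtcars` stands in for a larger-than-memory Dataset, which is queried 
the same way):

```r
library(arrow)
library(dplyr)

# An in-memory Table; a multi-file Dataset from open_dataset() is
# queried the same way, scanned in chunks
tab <- Table$create(mtcars)

tab %>%
  group_by(cyl) %>%
  summarize(n = n(), mean_mpg = mean(mpg), sd_mpg = sd(mpg)) %>%
  collect()  # queries are now lazy; collect() returns a tibble
```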
 This enhancement does change the behavior of `summarize()` and `collect()` in 
some cases: see "Breaking changes" below for details.
 
-New compute functions include `str_to_title()` and `strftime()`.
+In addition to `summarize()`, equality joins (`left_join()`, `inner_join()`, 
`semi_join()`, et al.) are also supported natively in Arrow.
+
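A minimal join sketch (the tables and column names here are illustrative):

```r
library(arrow)
library(dplyr)

orders    <- Table$create(data.frame(id = c(1, 2, 3), total = c(10, 20, 30)))
customers <- Table$create(data.frame(id = c(1, 2), name = c("a", "b")))

orders %>%
  left_join(customers, by = "id") %>%  # equality join, evaluated in Arrow
  collect()
```
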
+Grouped aggregation and (especially) joins should be considered somewhat 
experimental in this release. We expect them to work, but they may not be well 
optimized for all workloads. To help us focus our efforts on improving them in 
the next release, please let us know if you encounter unexpected behavior or 
poor performance.
+
+New non-aggregating compute functions include `str_to_title()` and 
`strftime()`. We've also worked to fill in support for all data types, such as 
`Decimal`, for functions added in previous releases. The type limitations 
mentioned in previous release notes should no longer apply, and if you find 
a function that is not implemented for a certain data type, please [report an 
issue](https://issues.apache.org/jira/projects/ARROW/issues).
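
For instance, the new kernels can be used inside `mutate()` (a sketch with 
illustrative column names):

```r
library(arrow)
library(dplyr)

tab <- Table$create(data.frame(
  name = c("ada lovelace", "alan turing"),
  when = as.POSIXct("2021-10-26 12:00:00", tz = "UTC")
))

tab %>%
  mutate(
    title = str_to_title(name),                  # new string kernel
    day   = strftime(when, format = "%Y-%m-%d")  # new temporal kernel
  ) %>%
  collect()
```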
 
 ## 2. duckdb integration
 
-If you have the [duckdb](https://duckdb.org/) package installed, you can hand 
off an Arrow Dataset or query object to duckdb for further querying using the 
`to_duckdb()` function. This allows you to use duckdb's `dbplyr` methods, as 
well as its SQL interface, to aggregate data. Filtering and column projection 
done before `to_duckdb()` is evaluated in Arrow.
+If you have the [duckdb](https://duckdb.org/) package installed, you can hand 
off an Arrow Dataset or query object to duckdb for further querying using the 
`to_duckdb()` function. This allows you to use duckdb's `dbplyr` methods, as 
well as its SQL interface, to aggregate data. Filtering and column projection 
done before `to_duckdb()` are evaluated in Arrow, and duckdb can push down some 
predicates to Arrow as well. This handoff *does not* copy the data; instead, it 
uses Arrow's C interface (just like passing Arrow data between R and Python), 
so no serialization or data-copying costs are incurred.
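
A sketch of the handoff (the dataset path and column names are hypothetical):

```r
library(arrow)
library(dplyr)

ds <- open_dataset("path/to/dataset")  # hypothetical path

ds %>%
  filter(year == 2021) %>%   # filtering and projection happen in Arrow
  select(id, value) %>%
  to_duckdb() %>%            # zero-copy handoff over the C interface
  mutate(rank = dense_rank(desc(value))) %>%  # a window function, in duckdb
  collect()
```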
+
+You can also take a duckdb `tbl` and call `to_arrow()` to stream data to 
Arrow's query engine. This means that in a single dplyr pipeline, you could 
start with an Arrow Dataset, evaluate some steps in duckdb, then evaluate the 
rest in arrow.
+
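A sketch of such a pipeline, again with hypothetical names:

```r
library(arrow)
library(dplyr)

ds <- open_dataset("path/to/dataset")  # hypothetical path

ds %>%
  to_duckdb() %>%                 # evaluate the next step in duckdb
  mutate(value2 = value * 2) %>%
  to_arrow() %>%                  # stream the result back to Arrow
  group_by(id) %>%
  summarize(total = sum(value2)) %>%
  collect()
```
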
 ## Breaking changes
 
 * Row order of data from a Dataset query is no longer deterministic. If you 
need a stable sort order, you should explicitly `arrange()` the query result. 
For calls to `summarize()`, you can set `options(arrow.summarise.sort = TRUE)` 
to match the current `dplyr` behavior of sorting on the grouping columns.
* `dplyr::summarize()` on an in-memory Arrow Table or RecordBatch no longer 
evaluates eagerly. Call `compute()` or `collect()` to evaluate the query (see 
the first sketch after this list).
 * `head()` and `tail()` also no longer eagerly evaluate, both for in-memory 
data and for Datasets. Also, because row order is no longer deterministic, they 
will effectively give you a random slice of data from somewhere in the dataset 
unless you `arrange()` to specify sorting.
* Row-level metadata is no longer saved when converting to Arrow tables (and 
thus when saving to Parquet or Feather). Our previous approach to saving this 
metadata was both computationally inefficient and unreliable with Arrow 
queries and datasets. This change most affects the saving of simple features 
(sf) columns. To save these columns, we recommend either converting them to 
well-known binary representations (using `sf::st_as_binary(col)`) or using the 
[sfarrow](https://CRAN.R-project.org/package=sfarrow) package, which handles 
some of the intricacies of this conversion process (see the second sketch 
after this list). We plan to re-enable custom metadata like this in a future 
release, once we can implement the saving in a safe and efficient way. If you 
need to preserve the pre-6.0.0 behavior, you can set 
`options(arrow.preserve_row_level_metadata = TRUE)`; however, this option will 
be removed in a coming release, and we strongly recommend against relying on 
it since the results will not be supported in the future and can lead to 
surprising and inaccurate results. If a custom class other than sf columns is 
affected by this change, please [report an 
issue](https://issues.apache.org/jira/projects/ARROW/issues).
 * Datasets are officially no longer supported on 32-bit Windows on R < 4.0 
(Rtools 3.5). 32-bit Windows users should upgrade to a newer version of R in 
order to use datasets.
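
To illustrate the new lazy evaluation and ordering behavior, a minimal sketch:

```r
library(arrow)
library(dplyr)

tab <- Table$create(mtcars)

q <- tab %>%
  group_by(cyl) %>%
  summarize(mean_mpg = mean(mpg))  # builds a query; nothing runs yet

q %>%
  arrange(cyl) %>%  # make the row order deterministic
  collect()         # evaluation happens here
```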
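
And a sketch of the two recommended routes for simple features data, assuming 
the sf package and a hypothetical sf data frame `nc` with a `geometry` column 
(details may vary by sf/arrow version):

```r
library(sf)

# Option 1: store the geometry as well-known binary before writing;
# st_as_binary() yields a list of raw vectors, stored by Arrow as binary data
nc_df <- as.data.frame(nc)
nc_df$geometry <- sf::st_as_binary(nc_df$geometry)
arrow::write_parquet(nc_df, "nc.parquet")

# Option 2: let the sfarrow package handle the conversion details
sfarrow::st_write_parquet(nc, "nc.parquet")
```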
 
+
 ## Installation on Linux
 
 * Package installation now fails if the Arrow C++ library does not compile. In 
previous versions, if the C++ library failed to compile, you would get a 
successful R package installation that could do very little.
* You can disable all optional C++ components when building from source by 
setting the environment variable `LIBARROW_MINIMAL=true`. This keeps the core 
Arrow/Feather components but excludes Parquet, Datasets, compression 
libraries, and other optional features (see the configuration sketch after 
this list).
-* Source packages now bundle the Arrow C++ source code, so it does not have to 
be downloaded in order to build the package. Because the source is included, it 
is now possible to build the package on an offline/airgapped system. By 
default, the offline build will be minimal because it cannot download 
third-party C++ dependencies required to support all features. To allow a fully 
featured offline build, the included `create_package_with_all_dependencies()` 
function (also available on GitHub without installing the arrow package) will 
download all third-party C++ dependencies and bundle them inside the R source 
package. Run this function on a system connected to the network to produce the 
"fat" source package, then copy that .tar.gz package to your offline machine 
and install.
+* Source packages now bundle the Arrow C++ source code, so it does not have to 
be downloaded in order to build the package. Because the source is included, it 
is now possible to build the package on an offline/airgapped system. By 
default, the offline build will be minimal because it cannot download 
third-party C++ dependencies required to support all features. To allow a fully 
featured offline build, the included `create_package_with_all_dependencies()` 
function (also available on GitHub without installing the arrow package) will 
download all third-party C++ dependencies and bundle them inside the R source 
package. Run this function on a system connected to the network to produce the 
"fat" source package, then copy that .tar.gz package to your offline machine 
and install (see the offline-build sketch after this list). Special thanks to 
@karldw for the huge amount of work in submitting this PR.
 * Source builds can make use of system dependencies (such as `libz`) by 
setting `ARROW_DEPENDENCY_SOURCE=AUTO`. This is not the default in this release 
(`BUNDLED`, i.e. download and build all dependencies) but may become the 
default in the future.
 * The JSON library components (`read_json_arrow()`) are now optional and still 
on by default; set `ARROW_JSON=OFF` before building to disable them.
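
The build-time environment variables mentioned above can be set in the shell 
before installation, or from R with `Sys.setenv()` in the session that runs 
the installation; a configuration sketch:

```r
# Minimal build: core Arrow/Feather components only
Sys.setenv(LIBARROW_MINIMAL = "true")

# Or allow system libraries (such as libz) to satisfy dependencies
# Sys.setenv(ARROW_DEPENDENCY_SOURCE = "AUTO")

# Or build everything except the optional JSON components
# Sys.setenv(ARROW_JSON = "OFF")

install.packages("arrow")
```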
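
And the offline-build sketch (the file paths are hypothetical):

```r
# On a machine with network access: bundle the C++ sources and all
# third-party dependencies into a "fat" source package
arrow::create_package_with_all_dependencies("arrow_with_deps.tar.gz")

# Copy the file to the offline machine, then install from it there:
install.packages("arrow_with_deps.tar.gz", repos = NULL)
```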
 
+## Other enhancements and fixes
+
+* More Arrow data types use ALTREP when converting to and from R. This speeds 
up some workflows significantly, while for others it merely delays conversion 
from Arrow to R. ALTREP is used by default, but to disable it, set 
`options(arrow.use_altrep = FALSE)`.
* `Field` objects can now be created as non-nullable, and `schema()` now 
optionally accepts a list of `Field`s (see the sketch after this list).
* Numeric division by zero now matches R's behavior and no longer raises an 
error.
* `write_parquet()` no longer errors when used with a grouped data.frame.
* Improved error messages when using `case_when()`.
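
A sketch of the `Field`/`schema()` additions and the ALTREP toggle, assuming 
the behavior described in the items above:

```r
library(arrow)

# A schema built from a list of Fields, one marked non-nullable
sch <- schema(list(
  field("id", int32(), nullable = FALSE),
  field("value", float64())
))

# Opt out of ALTREP-backed conversion if it is unhelpful for a workload
options(arrow.use_altrep = FALSE)
```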

Review comment:
       I was trying to be a bit more oblique / terse, but you're right 



