thisisnic commented on code in PR #38143:
URL: https://github.com/apache/arrow/pull/38143#discussion_r1355227414


##########
r/NEWS.md:
##########
@@ -19,6 +19,50 @@
 
 # arrow 13.0.0.9000
 
+## New features
+
+* When reading partitioned CSV datasets and supplying a schema to
+  `open_dataset()`, the partition variables are now included in the resulting
+  dataset (#37658).
+* New function `write_csv_dataset()` now wraps `write_dataset()` and mirrors
+  the syntax of `write_csv_arrow()` (@dgreiss, #36436).
+* `open_delim_dataset()` now accepts a `quoted_na` argument to allow empty
+  strings to be parsed as NA values (#37828).
+* Implemented `infer_schema()` method for `data.frame` (#37843).
+* Added functionality to read CSVs with comma or other character as decimal
+  mark both in dataset reading functions and new function `read_csv2_arrow()`
+  (#38002).
+
+## Minor improvements and fixes
+
+* Added default descriptions in `CsvParseOptions$create()` docs (@angela-li,
+  #37909).
+* Fixed a code path which may have resulted in R code being called from a
+  non-R thread after a failed allocation (#37565).
+* Fixed a bug where large Parquet files could not be read from R connections
+  (#37274).
+* Implemented more robust evaluation of stringr helpers (e.g., `fixed()`,
+  `regex()`) when using variables to parameterize arguments (#36784).
+* Exposed Parquet ReaderProperties to improve testing of Parquet reading
+  functionality (#36992).

Review Comment:
   ```suggestion
  * Thrift string and container size limits can now be configured via newly
    exposed ParquetReaderProperties, allowing users to work with Parquet files
    with unusually large metadata (#36992).
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
