stephhazlitt commented on code in PR #14514:
URL: https://github.com/apache/arrow/pull/14514#discussion_r1028603118


##########
r/README.md:
##########
@@ -1,331 +1,104 @@
-# arrow
+# arrow <img src="https://arrow.apache.org/img/arrow-logo_hex_black-txt_white-bg.png" align="right" alt="" width="120" />
 
 
[![cran](https://www.r-pkg.org/badges/version-last-release/arrow)](https://cran.r-project.org/package=arrow)
 
[![CI](https://github.com/apache/arrow/workflows/R/badge.svg?event=push)](https://github.com/apache/arrow/actions?query=workflow%3AR+branch%3Amaster+event%3Apush)
 
[![conda-forge](https://img.shields.io/conda/vn/conda-forge/r-arrow.svg)](https://anaconda.org/conda-forge/r-arrow)
 
-**[Apache Arrow](https://arrow.apache.org/) is a cross-language
-development platform for in-memory data.** It specifies a standardized
+[Apache Arrow](https://arrow.apache.org/) is a cross-language
+development platform for in-memory data. It specifies a standardized
 language-independent columnar memory format for flat and hierarchical
 data, organized for efficient analytic operations on modern hardware. It
 also provides computational libraries and zero-copy streaming messaging
 and interprocess communication.
 
-**The `arrow` package exposes an interface to the Arrow C++ library,
-enabling access to many of its features in R.** It provides low-level
+The `arrow` R package exposes an interface to the Arrow C++ library,
+enabling access to many of its features in R. It provides low-level
 access to the Arrow C++ library API and higher-level access through a
 `{dplyr}` backend and familiar R functions.
 
 ## What can the `arrow` package do?
 
--   Read and write **Parquet files** (`read_parquet()`,
-    `write_parquet()`), an efficient and widely used columnar format
--   Read and write **Feather files** (`read_feather()`,
-    `write_feather()`), a format optimized for speed and
-    interoperability
--   Analyze, process, and write **multi-file, larger-than-memory
-    datasets** (`open_dataset()`, `write_dataset()`)
--   Read **large CSV and JSON files** with excellent **speed and
-    efficiency** (`read_csv_arrow()`, `read_json_arrow()`)
--   Write CSV files (`write_csv_arrow()`)
--   Manipulate and analyze Arrow data with **`dplyr` verbs**
--   Read and write files in **Amazon S3** and **Google Cloud Storage**
-    buckets with no additional function calls
--   Exercise **fine control over column types** for seamless
-    interoperability with databases and data warehouse systems
--   Use **compression codecs** including Snappy, gzip, Brotli,
-    Zstandard, LZ4, LZO, and bzip2 for reading and writing data
--   Enable **zero-copy data sharing** between **R and Python**
--   Connect to **Arrow Flight** RPC servers to send and receive large
-    datasets over networks
--   Access and manipulate Arrow objects through **low-level bindings**
-    to the C++ library
--   Provide a **toolkit for building connectors** to other applications
-    and services that use Arrow
-
-## Installation
+The `arrow` package provides functionality for a wide range of data analysis
+tasks. It allows users to read and write data in a variety of formats:
 
-### Installing the latest release version
-
-Install the latest release of `arrow` from CRAN with
-
-``` r
-install.packages("arrow")
-```
+-   Read and write Parquet files, an efficient and widely used columnar format
+-   Read and write Feather files, a format optimized for speed and
+    interoperability
+-   Read and write CSV files with excellent speed and efficiency
+-   Read and write multi-file larger-than-memory datasets

Review Comment:
   ```suggestion
   -   Read and write multi-file and larger-than-memory datasets
   ```
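
   For readers skimming this thread, the reader/writer functions named in the README excerpt can be exercised with a minimal sketch like the one below, assuming the `arrow` package is installed. The file paths and sample data frame are illustrative, not from the PR:

   ``` r
   library(arrow)

   # A small sample data frame (illustrative only)
   df <- data.frame(x = 1:3, y = c("a", "b", "c"))

   # Parquet round-trip with the functions named in the README
   write_parquet(df, "example.parquet")
   df2 <- read_parquet("example.parquet")

   # Feather and CSV writers work analogously
   write_feather(df, "example.feather")
   write_csv_arrow(df, "example.csv")
   ```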



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
