stephhazlitt commented on code in PR #159:
URL: https://github.com/apache/arrow-cookbook/pull/159#discussion_r1006295671


##########
r/content/datasets.Rmd:
##########
@@ -0,0 +1,291 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Manipulating Data - Datasets
+
+## Introduction
+
+While Arrow allows you to work with single files imported as Arrow Tables,
+you can also work with multi-file data imported as an Arrow Dataset.
+Partitioning splits your data across multiple files and folders, avoiding
+the problems associated with storing all your data in a single file.  This
+can provide further advantages when using Arrow, as Arrow will read in only
+the partitioned files needed for any given analysis.
+
+It's possible to read in partitioned data in Parquet, Feather (also known as
+Arrow), and CSV (or other text-delimited) formats.  If you are choosing a
+partitioned or multi-file format, we recommend Parquet or Feather, both of
+which can offer better performance than CSV due to their support for
+metadata and compression.
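+
+A minimal sketch of why this matters (assuming a partitioned directory such
+as the one written later in this chapter): `open_dataset()` scans the folder
+structure without loading the data, and filtering on a partition column
+means only the matching files are read:
+
+```{r, open_partitioned_sketch, eval=FALSE}
+library(arrow)
+library(dplyr)
+
+# Open the partitioned directory lazily; no data is read yet
+ds <- open_dataset("starwars_data_partitioned")
+
+# Only files under homeworld=Tatooine are read when collect() runs
+ds %>%
+  filter(homeworld == "Tatooine") %>%
+  collect()
+```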
+
+## Write data to disk (Parquet)
+
+You want to write data to disk in a single Parquet file.
+
+### Solution
+
+```{r, write_dataset_basic}
+library(arrow)
+library(dplyr)  # provides the starwars example data
+
+write_dataset(dataset = starwars, path = "starwars_data")
+```
+
+```{r, test_write_dataset_basic, opts.label = "test"}
+
+test_that("write_dataset_basic works as expected", {
+  
+  expect_true(file.exists("starwars_data"))
+  expect_length(list.files("starwars_data"), 1)
+  
+})
+
+```
+
+### Discussion
+
+The default format for `write_dataset()` is Parquet, so no `format` argument
+needs to be supplied here.
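+
+As a quick check (hypothetical, not part of the original recipe), the data
+written above can be read back with `open_dataset()`, which also defaults to
+the Parquet format:
+
+```{r, read_back_parquet, eval=FALSE}
+# Read the dataset back in; format = "parquet" is implied
+open_dataset("starwars_data") %>%
+  head() %>%
+  collect()
+```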
+
+## Write a multi-file dataset to disk, partitioned by a variable
+
+You want to write data to disk partitioned across multiple files.
+
+### Solution
+
+```{r, write_dataset_partitioned}
+write_dataset(dataset = starwars,
+  path = "starwars_data_partitioned",
+  partitioning = "homeworld")
+```
+
+
+```{r, test_write_dataset_partitioned, opts.label = "test"}
+
+test_that("write_dataset_partitioned works as expected", {
+  
+  expect_true(file.exists("starwars_data_partitioned"))
+  expect_length(list.files("starwars_data_partitioned", recursive = TRUE), 49)
+  
+})
+
+```
+
+### Discussion
+
+The data is written to separate folders based on the values in the
+`homeworld` column.  By default, Hive-style partitioning is used (i.e.
+"col_name=value" folder names).
+
+```{r}
+# Take a look at the files in this directory
+list.files("starwars_data_partitioned", recursive = TRUE)
+```
+
+Note that in the example above, rows with an `NA` value in the `homeworld`
+column are written to the `homeworld=__HIVE_DEFAULT_PARTITION__` directory.
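+
+A minimal sketch (assuming the directory written above) of how these rows
+surface as `NA` values again once the dataset is read back:
+
+```{r, na_partition_sketch, eval=FALSE}
+# Rows stored under homeworld=__HIVE_DEFAULT_PARTITION__ come back as NA
+open_dataset("starwars_data_partitioned") %>%
+  filter(is.na(homeworld)) %>%
+  collect()
+```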
+
+You can specify multiple partitioning variables to add extra levels of 
partitioning.
+
+```{r, write_dataset_partitioned_deeper}
+write_dataset(
+  dataset = starwars,
+  path = "starwars_partitioned_twice",
+  partitioning = c("homeworld", "species")
+)
+```
+
+```{r, test_write_dataset_partitioned_deeper, opts.label = "test"}
+
+test_that("write_dataset_partitioned_deeper works as expected", {
+  
+  expect_true(file.exists("starwars_partitioned_twice"))
+  expect_length(list.files("starwars_partitioned_twice", recursive = TRUE), 58)
+  
+})
+
+```
+
+```{r}
+# Take a look at the files in this directory
+list.files("starwars_partitioned_twice", recursive = TRUE)
+```
+
+There are two ways to specify the variables used for partitioning: either
+via the `partitioning` argument as above, or by calling `dplyr::group_by()`
+on your data, in which case the grouping variables form the partitions.
+
+```{r, write_dataset_partitioned_groupby}
+library(dplyr)
+write_dataset(dataset = group_by(starwars, homeworld, species),
+  path = "starwars_groupby")
+```
+
+```{r, test_write_dataset_partitioned_groupby, opts.label = "test"}
+
+test_that("write_dataset_partitioned_groupby works as expected", {
+  
+  expect_true(file.exists("starwars_groupby"))
+  expect_length(list.files("starwars_groupby", recursive = TRUE), 58)
+  
+})
+
+```
+
+```{r}
+# Take a look at the files in this directory
+list.files("starwars_groupby", recursive = TRUE)
+```
+
+## Write data to disk - Feather format
+
+You want to write data to disk in a single Feather file.
+
+### Solution
+
+```{r, write_dataset_feather}
+write_dataset(dataset = starwars,
+  path = "starwars_data_feather",
+  format = "feather")
+```
+
+```{r, test_write_dataset_feather, opts.label = "test"}
+
+test_that("write_dataset_feather works as expected", {
+  
+  expect_true(file.exists("starwars_data_feather"))
+  expect_length(list.files("starwars_data_feather"), 1)
+  
+})
+
+```
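+
+### Discussion
+
+Unlike with Parquet, the format must be given explicitly when reading the
+data back in, as the default format for `open_dataset()` is Parquet.  A
+minimal sketch (not part of the original recipe):
+
+```{r, read_back_feather, eval=FALSE}
+# Feather/Arrow data needs format = "feather" (or "arrow") on read
+open_dataset("starwars_data_feather", format = "feather")
+```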
+
+## Write data to disk - CSV format
+
+You want to write data to disk in a single CSV file.
+
+### Solution
+
+```{r, write_dataset_csv}
+# starwars contains list columns, which can't be written to CSV, so this
+# example uses the airquality dataset instead (path name is illustrative)
+write_dataset(dataset = airquality,
+  path = "airquality_data_csv",
+  format = "csv")
+```

Review Comment:
   I refactored to stick with the `airquality` dataset, to be more consistent
   with the rest of the read/write material in the R cookbook.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
