PhilippeMoussalli commented on code in PR #22587:
URL: https://github.com/apache/beam/pull/22587#discussion_r1005905353
##########
examples/notebooks/beam-ml/dataframe_api_preprocessing.ipynb:
##########
@@ -0,0 +1,2163 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "source": [
+ "#@title ###### Licensed to the Apache Software Foundation (ASF),
Version 2.0 (the \"License\")\n",
+ "\n",
+ "# Licensed to the Apache Software Foundation (ASF) under one\n",
+ "# or more contributor license agreements. See the NOTICE file\n",
+ "# distributed with this work for additional information\n",
+ "# regarding copyright ownership. The ASF licenses this file\n",
+ "# to you under the Apache License, Version 2.0 (the\n",
+ "# \"License\"); you may not use this file except in compliance\n",
+ "# with the License. You may obtain a copy of the License at\n",
+ "#\n",
+ "# http://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing,\n",
+ "# software distributed under the License is distributed on an\n",
+ "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n",
+ "# KIND, either express or implied. See the License for the\n",
+ "# specific language governing permissions and limitations\n",
+ "# under the License."
+ ],
+ "metadata": {
+ "id": "sARMhsXz8yR1",
+ "cellView": "form"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Overview\n",
+ "\n",
+ "One of the most common tools used for data exploration and
pre-processing is [pandas
DataFrames](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).
Pandas has become very popular for its ease of use. It has very intuitive
methods to perform common analytical tasks and data pre-processing. \n",
+ "\n",
+ "Pandas loads all of the data into memory on a single machine (one
node) for rapid execution. This works well when dealing with small-scale
datasets. However, many projects involve datasets that can grow too big to fit
in memory. These use cases generally require the usage of parallel data
processing frameworks such as Apache Beam.\n",
+ "\n",
+ "\n",
+ "## Beam DataFrames\n",
+ "\n",
+ "\n",
+ "Beam DataFrames provide a pandas-like\n",
+ "API to declare and define Beam processing pipelines. It provides a
familiar interface for machine learning practioners to build complex
data-processing pipelines by only invoking standard pandas commands.\n",
+ "\n",
+ "> ℹ️ To learn more about Beam DataFrames, take a look at the\n",
+ "[Beam DataFrames
overview](https://beam.apache.org/documentation/dsls/dataframes/overview)
page.\n",
+ "\n",
+ "## Tutorial outline\n",
+ "\n",
+ "In this notebook, we walk through the use of the Beam DataFrames API
to perform common data exploration as well as pre-processing steps that are
necessary to prepare your dataset for machine learning model training and
inference, such as: \n",
+ "\n",
+ "* Removing unwanted columns.\n",
+ "* One-hot encoding categorical columns.\n",
+ "* Normalizing numerical columns.\n",
+ "\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "iFZC1inKuUCy"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Installation\n",
+ "\n",
+ "As we want to explore the elements within a `PCollection`, we can
make use of the the Interactive runner by installing Apache Beam with the
`interactive` component. The latest implemented DataFrames API methods invoked
in this notebook are available in Beam <b>2.41</b> or later.\n"
+ ],
+ "metadata": {
+ "id": "A0f2HJ22D4lt"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pCjwrwNWnuqI"
+ },
+ "source": [
+ "**Option 1:** Install latest version with implemented df.mean()\n",
+ "\n",
+ "TODO: Remove "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "-OJC0Xn5Um-C"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/apache/beam.git\n",
+ "\n",
+ "!cd beam/sdks/python && pip3 install -r build-requirements.txt \n",
+ "\n",
+ "%pip install -e beam/sdks/python/.[interactive,gcp]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xfXzNzA1n3ZP"
+ },
+ "source": [
+ "**Option 2:** Install latest release version \n",
+ "\n",
+ "**[12/07/2022]:** df.mean() is currently not supported for this
version (beam 2.40)\n",
+ "\n",
+ "TODO: Remove"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "4xY7ECJZOuJj"
+ },
+ "outputs": [],
+ "source": [
+ "! pip install apache-beam[interactive,gcp]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Part I : Local exploration with the Interactive Beam runner\n",
+ "We first use the [Interactive
Beam](https://beam.apache.org/releases/pydoc/2.20.0/apache_beam.runners.interactive.interactive_beam.html)
to explore and develop our pipeline.\n",
+ "This allows us to test our code interactively, building out the
pipeline as we go before deploying it on a distributed runner. \n",
+ "\n",
+ "\n",
+ "> ℹ️ In this section, we will only be working with a subset of the
original dataset since we're only using the the compute resources of the
notebook instance.\n"
+ ],
+ "metadata": {
+ "id": "3NO6RgB7GkkE"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5I3G094hoB1P"
+ },
+ "source": [
+ "# Loading the data\n",
+ "\n",
+ "Pandas has the\n",
+
"[`pandas.read_csv`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html)\n",
+ "function to easily read CSV files into DataFrames.\n",
+ "We're using the beam\n",
+
"[`beam.dataframe.io.read_csv`](https://beam.apache.org/releases/pydoc/current/apache_beam.dataframe.io.html#apache_beam.dataframe.io.read_csv)\n",
+ "function that emulates `pandas.read_csv`. The main difference between
them is that the beam method returns a deferred Beam DataFrame while pandas
return a standard DataFrame.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "id": "X3_OB9cAULav"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "import numpy as np\n",
+ "import pandas as pd \n",
+ "import apache_beam as beam\n",
+ "import apache_beam.runners.interactive.interactive_beam as ib\n",
+ "from apache_beam import dataframe\n",
+ "from apache_beam.runners.interactive.interactive_runner import
InteractiveRunner\n",
+ "from apache_beam.runners.dataflow import DataflowRunner\n",
+ "\n",
+ "# Available options: [sample_1000, sample_10000, sample_100000, full]
where\n",
+ "# sample contains all of the dataset (around 1000000 samples)\n",
+ "\n",
+ "source_csv_file =
'gs://apache-beam-samples/nasa_jpl_asteroid/sample_10000.csv'\n",
+ "\n",
+ "# Initialize pipline\n",
+ "p = beam.Pipeline(InteractiveRunner())\n",
+ "\n",
+ "# Create a deferred Beam DataFrame with the contents of our csv
file.\n",
+ "beam_df = p | beam.dataframe.io.read_csv(source_csv_file,
splittable=True)\n"
Review Comment:
I just read more about the `splittable` param, and it turns out it's not needed for this example. I'll leave it out.
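
For reference, here's the same read without the param — a minimal sketch only, assuming that omitting `splittable` simply falls back to its default (`False`, per my reading of `apache_beam/dataframe/io.py`):

```python
# Sketch only: the notebook's read, minus the `splittable` param.
# Assumes omitting it falls back to the default (False, per my reading).
import apache_beam as beam
from apache_beam.dataframe.io import read_csv
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

source_csv_file = 'gs://apache-beam-samples/nasa_jpl_asteroid/sample_10000.csv'

# Initialize the pipeline with the Interactive runner.
p = beam.Pipeline(InteractiveRunner())

# read_csv returns a deferred Beam DataFrame, not an in-memory pandas one.
beam_df = p | read_csv(source_csv_file)
```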