tvalentyn commented on code in PR #22088:
URL: https://github.com/apache/beam/pull/22088#discussion_r922422932


##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -255,4 +255,42 @@ This writes the output to the `predictions.txt` with contents like:
 0,0
 ...
 ```
-Each line has data separated by a comma ",". The first item is the actual label of the digit. The second item is the predicted label of the digit.
\ No newline at end of file
+Each line has data separated by a comma ",". The first item is the actual label of the digit. The second item is the predicted label of the digit.
+
+### Running `sklearn_japanese_housing_regression.py`
+
+#### Getting the data:
+Data for this example can be found at:
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+#### Models:
+Prebuilt sklearn pipelines are hosted at:
+https://storage.cloud.google.com/apache-beam-ml/models/japanese_housing/
+
+Note: This example uses more than one model. Since not all features in an example are populated, a different model will be chosen based on available data.
+
+For example an example without distance to the nearest station will use a model that doesn't rely on that data.
+
+#### Running the Pipeline
+To run locally, use the following command:
+```sh
+python -m apache_beam.examples.inference.sklearn_japanese_housing_regression.py \
+  --input_file INPUT \
+  --output OUTPUT \
+  --model_path MODEL_PATH
+```
+For example:
+```sh
+python -m apache_beam.examples.inference.sklearn_japanese_housing_regression.py \
+  --input_file mnist_data.csv \

Review Comment:
   is `mnist_data` a correct filename for the japanese housing dataset?



##########
sdks/python/apache_beam/examples/inference/sklearn_japanese_housing_regression.py:
##########
@@ -0,0 +1,165 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""A pipeline that uses RunInference API on a regression about housing prices.
+
+This example uses the japanese housing data from kaggle.
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+Since the data has missing fields, this example illustrates how to split
+data and assign it to the models that are trained on different subsets of
+features. The predictions are then recombined.
+
+In order to set this example up, you will need two things.
+1. Build models (or use ours) and reference those via the model directory.
+2. Download the data from kaggle and host it.
+"""
+
+import argparse
+from typing import Iterable
+
+import apache_beam as beam
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.sklearn_inference import ModelFileType
+from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerPandas
+from apache_beam.options.pipeline_options import PipelineOptions
+from apache_beam.options.pipeline_options import SetupOptions
+import pandas
+
+MODELS = [{
+    'name': 'all_features',
+    'required_features': [
+        'Area',
+        'Year',
+        'MinTimeToNearestStation',
+        'MaxTimeToNearestStation',
+        'TotalFloorArea',
+        'Frontage',
+        'Breadth',
+        'BuildingYear'
+    ]
+},
+          {
+              'name': 'floor_area',
+              'required_features': ['Area', 'Year', 'TotalFloorArea']
+          },
+          {
+              'name': 'stations',
+              'required_features': [
+                  'Area',
+                  'Year',
+                  'MinTimeToNearestStation',
+                  'MaxTimeToNearestStation'
+              ]
+          }, {
+              'name': 'no_features', 'required_features': ['Area', 'Year']
+          }]
+
+
+def sort_by_features(dataframe, max_size):
+  """ Partitions the dataframe by what data it has available."""
+  for i, model in enumerate(MODELS):
+    required_features = dataframe[model['required_features']]
+    if not required_features.isnull().any().any():
+      return i
+  return -1
+
+
+class LoadDataframe(beam.DoFn):
+  def process(self, file_name: str) -> Iterable[pandas.DataFrame]:
+    """ Loads data files as a pandas dataframe."""
+    file = FileSystems.open(file_name, 'rb')
+    dataframe = pandas.read_csv(file)
+    for i in range(dataframe.shape[0]):
+      yield dataframe.iloc[[i]]
+
+
+def report_predictions(prediction_result):
+  true_result = prediction_result.example['TradePrice'].values[0]
+  inference = prediction_result.inference
+  return 'True Price %.0f, Predicted Price %.0f' % (true_result, inference)
+
+
+def parse_known_args(argv):
+  """Parses args for the workflow."""
+  parser = argparse.ArgumentParser()
+  parser.add_argument(
+      '--input',
+      dest='input',
+      required=True,
+      help='A single or comma separated list of files or uris.')

Review Comment:
   ```suggestion
         help='A single or comma-separated list of files or uris.')
   ```



##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -255,4 +255,42 @@ This writes the output to the `predictions.txt` with contents like:
 0,0
 ...
 ```
-Each line has data separated by a comma ",". The first item is the actual label of the digit. The second item is the predicted label of the digit.
\ No newline at end of file
+Each line has data separated by a comma ",". The first item is the actual label of the digit. The second item is the predicted label of the digit.
+
+### Running `sklearn_japanese_housing_regression.py`
+
+#### Getting the data:
+Data for this example can be found at:
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+#### Models:
+Prebuilt sklearn pipelines are hosted at:
+https://storage.cloud.google.com/apache-beam-ml/models/japanese_housing/
+
+Note: This example uses more than one model. Since not all features in an example are populated, a different model will be chosen based on available data.
+
+For example an example without distance to the nearest station will use a model that doesn't rely on that data.

Review Comment:
   ```suggestion
   For example, an example without distance to the nearest station will use a model that doesn't rely on that data.
   ```



##########
sdks/python/apache_beam/examples/inference/sklearn_japanese_housing_regression.py:
##########
@@ -0,0 +1,165 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""A pipeline that uses RunInference API on a regression about housing prices.
+
+This example uses the japanese housing data from kaggle.
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+Since the data has missing fields, this example illustrates how to split
+data and assign it to the models that are trained on different subsets of
+features. The predictions are then recombined.
+
+In order to set this example up, you will need two things.
+1. Build models (or use ours) and reference those via the model directory.
+2. Download the data from kaggle and host it.
+"""
+
+import argparse
+from typing import Iterable
+
+import apache_beam as beam
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.sklearn_inference import ModelFileType
+from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerPandas
+from apache_beam.options.pipeline_options import PipelineOptions
+from apache_beam.options.pipeline_options import SetupOptions
+import pandas
+
+MODELS = [{
+    'name': 'all_features',
+    'required_features': [
+        'Area',
+        'Year',
+        'MinTimeToNearestStation',
+        'MaxTimeToNearestStation',
+        'TotalFloorArea',
+        'Frontage',
+        'Breadth',
+        'BuildingYear'
+    ]
+},
+          {
+              'name': 'floor_area',
+              'required_features': ['Area', 'Year', 'TotalFloorArea']
+          },
+          {
+              'name': 'stations',
+              'required_features': [
+                  'Area',
+                  'Year',
+                  'MinTimeToNearestStation',
+                  'MaxTimeToNearestStation'
+              ]
+          }, {
+              'name': 'no_features', 'required_features': ['Area', 'Year']
+          }]
+
+
+def sort_by_features(dataframe, max_size):
+  """ Partitions the dataframe by what data it has available."""
+  for i, model in enumerate(MODELS):
+    required_features = dataframe[model['required_features']]
+    if not required_features.isnull().any().any():

Review Comment:
   is it possible to rewrite this without double negation?
   something like:
   
   ```
   if required_features.notnull().all().all():
   ```
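
   (The two forms are equivalent by De Morgan's law; a toy check on a frame with a missing value, just for illustration:)
   
   ```python
   import pandas as pd
   
   df = pd.DataFrame({'Area': [100.0], 'Year': [None]})
   
   # "No required feature is null" and "all required features are non-null"
   # are the same condition, so both branches behave identically.
   assert (not df.isnull().any().any()) == df.notnull().all().all()
   ```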



##########
sdks/python/apache_beam/examples/inference/sklearn_japanese_housing_regression.py:
##########
@@ -0,0 +1,165 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""A pipeline that uses RunInference API on a regression about housing prices.
+
+This example uses the japanese housing data from kaggle.
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+Since the data has missing fields, this example illustrates how to split
+data and assign it to the models that are trained on different subsets of
+features. The predictions are then recombined.
+
+In order to set this example up, you will need two things.
+1. Build models (or use ours) and reference those via the model directory.
+2. Download the data from kaggle and host it.
+"""
+
+import argparse
+from typing import Iterable
+
+import apache_beam as beam
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.sklearn_inference import ModelFileType
+from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerPandas
+from apache_beam.options.pipeline_options import PipelineOptions
+from apache_beam.options.pipeline_options import SetupOptions
+import pandas
+
+MODELS = [{
+    'name': 'all_features',
+    'required_features': [

Review Comment:
   formatting looks strange here. is this how yapf rewrites it? there may be a way to disable it for a particular section.
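
   For what it's worth, yapf honors `# yapf: disable` / `# yapf: enable` marker comments around a block it should leave alone; a sketch of how the literal could then be laid out by hand (formatting choice is just a suggestion):
   
   ```python
   # yapf: disable
   MODELS = [
       {
           'name': 'all_features',
           'required_features': [
               'Area', 'Year', 'MinTimeToNearestStation',
               'MaxTimeToNearestStation', 'TotalFloorArea', 'Frontage',
               'Breadth', 'BuildingYear'
           ]
       },
       {'name': 'floor_area', 'required_features': ['Area', 'Year', 'TotalFloorArea']},
       {
           'name': 'stations',
           'required_features': [
               'Area', 'Year', 'MinTimeToNearestStation', 'MaxTimeToNearestStation'
           ]
       },
       {'name': 'no_features', 'required_features': ['Area', 'Year']},
   ]
   # yapf: enable
   ```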



##########
sdks/python/apache_beam/examples/inference/sklearn_japanese_housing_regression.py:
##########
@@ -0,0 +1,165 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""A pipeline that uses RunInference API on a regression about housing prices.
+
+This example uses the japanese housing data from kaggle.
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+Since the data has missing fields, this example illustrates how to split
+data and assign it to the models that are trained on different subsets of
+features. The predictions are then recombined.
+
+In order to set this example up, you will need two things.
+1. Build models (or use ours) and reference those via the model directory.
+2. Download the data from kaggle and host it.
+"""
+
+import argparse
+from typing import Iterable
+
+import apache_beam as beam
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.sklearn_inference import ModelFileType
+from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerPandas
+from apache_beam.options.pipeline_options import PipelineOptions
+from apache_beam.options.pipeline_options import SetupOptions
+import pandas
+
+MODELS = [{
+    'name': 'all_features',
+    'required_features': [
+        'Area',
+        'Year',
+        'MinTimeToNearestStation',
+        'MaxTimeToNearestStation',
+        'TotalFloorArea',
+        'Frontage',
+        'Breadth',
+        'BuildingYear'
+    ]
+},
+          {
+              'name': 'floor_area',
+              'required_features': ['Area', 'Year', 'TotalFloorArea']
+          },
+          {
+              'name': 'stations',
+              'required_features': [
+                  'Area',
+                  'Year',
+                  'MinTimeToNearestStation',
+                  'MaxTimeToNearestStation'
+              ]
+          }, {
+              'name': 'no_features', 'required_features': ['Area', 'Year']
+          }]
+
+
+def sort_by_features(dataframe, max_size):
+  """ Partitions the dataframe by what data it has available."""
+  for i, model in enumerate(MODELS):
+    required_features = dataframe[model['required_features']]
+    if not required_features.isnull().any().any():
+      return i
+  return -1
+
+
+class LoadDataframe(beam.DoFn):
+  def process(self, file_name: str) -> Iterable[pandas.DataFrame]:
+    """ Loads data files as a pandas dataframe."""
+    file = FileSystems.open(file_name, 'rb')
+    dataframe = pandas.read_csv(file)
+    for i in range(dataframe.shape[0]):
+      yield dataframe.iloc[[i]]
+
+
+def report_predictions(prediction_result):
+  true_result = prediction_result.example['TradePrice'].values[0]
+  inference = prediction_result.inference
+  return 'True Price %.0f, Predicted Price %.0f' % (true_result, inference)
+
+
+def parse_known_args(argv):
+  """Parses args for the workflow."""
+  parser = argparse.ArgumentParser()
+  parser.add_argument(
+      '--input',
+      dest='input',
+      required=True,
+      help='A single or comma separated list of files or uris.')
+  parser.add_argument(
+      '--model_path',
+      dest='model_path',
+      required=True,
+      help='A path from where all models can be read.')
+  parser.add_argument(
+      '--output',
+      dest='output',
+      required=True,
+      help='Path to save output predictions.')
+  return parser.parse_known_args(argv)
+
+
+def inference_transform(model_name, model_path):
+  # These sklearn models are a pipeline that use pandas.
+  model_filename = model_path + model_name + '.pickle'
+  model_loader = SklearnModelHandlerPandas(
+      model_file_type=ModelFileType.PICKLE, model_uri=model_filename)
+  transform_name = 'RunInference ' + model_name
+  return transform_name >> RunInference(model_loader)
+
+
+def run(argv=None, save_main_session=True):
+  """Entry point. Defines and runs the pipeline."""
+  known_args, pipeline_args = parse_known_args(argv)
+  pipeline_options = PipelineOptions(pipeline_args)
+  pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
+
+  with beam.Pipeline(options=pipeline_options) as p:
+    # This example uses a single file, but it is possible to use many files.

Review Comment:
   will the code `just work` if someone supplies comma-separated inputs (as suggested in the help string for --input)?
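
   If not, one possible sketch (assuming `known_args.input` holds the flag value and reusing the `LoadDataframe` DoFn from this file):
   
   ```python
   # Hypothetical: split the --input flag on commas so each file name becomes
   # its own element, then load every file into per-row dataframes.
   file_names = [name.strip() for name in known_args.input.split(',')]
   dataframes = (
       p
       | 'FileNames' >> beam.Create(file_names)
       | 'LoadData' >> beam.ParDo(LoadDataframe()))
   ```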



##########
sdks/python/apache_beam/ml/inference/sklearn_inference_it_test.py:
##########
@@ -70,6 +78,27 @@ def test_sklearn_mnist_classification(self):
       true_label, expected_prediction = expected_outputs[i].split(',')
       self.assertEqual(predictions_dict[true_label], expected_prediction)
 
+  def test_sklearn_regression(self):
+    test_pipeline = TestPipeline(is_integration_test=True)
+    input_file = 'gs://apache-beam-ml/testing/inputs/japanese_housing_test_data.csv'  # pylint: disable=line-too-long
+    output_file_dir = 'gs://temp-storage-for-end-to-end-tests'
+    output_file = '/'.join([output_file_dir, str(uuid.uuid4()), 'result.txt'])
+    model_path = 'gs://apache-beam-ml/models/japanese_housing/'
+    extra_opts = {
+        'input': input_file,
+        'output': output_file,
+        'model_path': model_path,
+    }
+    sklearn_japanese_housing_regression.run(
+        test_pipeline.get_full_options_as_args(**extra_opts),
+        save_main_session=False)
+    self.assertEqual(FileSystems().exists(output_file), True)
+
+    expected_output_filepath = 'gs://apache-beam-ml/testing/expected_outputs/japanese_housing_subset.txt'  # pylint: disable=line-too-long
+    expected = file_lines_sorted(expected_output_filepath)
+    actual = file_lines_sorted(output_file)
+    self.assertListEqual(expected, actual)

Review Comment:
   do we need to do any rounding in `report_predictions` to avoid flakes due to possible lack of precision?
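
   e.g., if exact string comparison proves flaky, the lines could be compared after rounding the prices; a rough sketch (the regex and rounding granularity are made up for illustration):
   
   ```python
   import re
   
   def rounded_prices(line):
     # Pull both integers out of a line like
     # 'True Price 25000000, Predicted Price 24800000' and round to the
     # nearest thousand, so float noise can't flip the last digits.
     return [round(float(x), -3) for x in re.findall(r'\d+', line)]
   
   self.assertListEqual(
       [rounded_prices(line) for line in expected],
       [rounded_prices(line) for line in actual])
   ```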



##########
sdks/python/apache_beam/examples/inference/sklearn_japanese_housing_regression.py:
##########
@@ -0,0 +1,165 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""A pipeline that uses RunInference API on a regression about housing prices.
+
+This example uses the japanese housing data from kaggle.
+https://www.kaggle.com/datasets/nishiodens/japan-real-estate-transaction-prices
+
+Since the data has missing fields, this example illustrates how to split
+data and assign it to the models that are trained on different subsets of
+features. The predictions are then recombined.
+
+In order to set this example up, you will need two things.
+1. Build models (or use ours) and reference those via the model directory.
+2. Download the data from kaggle and host it.
+"""
+
+import argparse
+from typing import Iterable
+
+import apache_beam as beam
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.sklearn_inference import ModelFileType
+from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerPandas
+from apache_beam.options.pipeline_options import PipelineOptions
+from apache_beam.options.pipeline_options import SetupOptions
+import pandas
+
+MODELS = [{
+    'name': 'all_features',
+    'required_features': [
+        'Area',
+        'Year',
+        'MinTimeToNearestStation',
+        'MaxTimeToNearestStation',
+        'TotalFloorArea',
+        'Frontage',
+        'Breadth',
+        'BuildingYear'
+    ]
+},
+          {
+              'name': 'floor_area',
+              'required_features': ['Area', 'Year', 'TotalFloorArea']
+          },
+          {
+              'name': 'stations',
+              'required_features': [
+                  'Area',
+                  'Year',
+                  'MinTimeToNearestStation',
+                  'MaxTimeToNearestStation'
+              ]
+          }, {
+              'name': 'no_features', 'required_features': ['Area', 'Year']
+          }]
+
+
+def sort_by_features(dataframe, max_size):
+  """ Partitions the dataframe by what data it has available."""
+  for i, model in enumerate(MODELS):
+    required_features = dataframe[model['required_features']]
+    if not required_features.isnull().any().any():
+      return i
+  return -1
+
+
+class LoadDataframe(beam.DoFn):
+  def process(self, file_name: str) -> Iterable[pandas.DataFrame]:
+    """ Loads data files as a pandas dataframe."""
+    file = FileSystems.open(file_name, 'rb')
+    dataframe = pandas.read_csv(file)
+    for i in range(dataframe.shape[0]):
+      yield dataframe.iloc[[i]]
+
+
+def report_predictions(prediction_result):
+  true_result = prediction_result.example['TradePrice'].values[0]
+  inference = prediction_result.inference
+  return 'True Price %.0f, Predicted Price %.0f' % (true_result, inference)
+
+
+def parse_known_args(argv):
+  """Parses args for the workflow."""
+  parser = argparse.ArgumentParser()
+  parser.add_argument(
+      '--input',
+      dest='input',
+      required=True,
+      help='A single or comma separated list of files or uris.')
+  parser.add_argument(
+      '--model_path',
+      dest='model_path',
+      required=True,
+      help='A path from where all models can be read.')
+  parser.add_argument(
+      '--output',
+      dest='output',
+      required=True,
+      help='Path to save output predictions.')
+  return parser.parse_known_args(argv)
+
+
+def inference_transform(model_name, model_path):
+  # These sklearn models are a pipeline that use pandas.

Review Comment:
   the comment sounds somewhat confusing to me, but perhaps I am missing some sklearn context. is it necessary?
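
   (for context, my reading: `SklearnModelHandlerPandas` suggests the pickled models are sklearn `Pipeline` objects whose first stage selects columns from a pandas DataFrame. A hypothetical stand-in for one of the hosted models, purely illustrative:)
   
   ```python
   from sklearn.compose import ColumnTransformer
   from sklearn.impute import SimpleImputer
   from sklearn.linear_model import LinearRegression
   from sklearn.pipeline import Pipeline
   
   # The ColumnTransformer pulls named columns out of the incoming DataFrame,
   # so predict() can be called on raw CSV rows without manual feature
   # extraction.
   model = Pipeline([
       ('features',
        ColumnTransformer([
            ('selected', SimpleImputer(), ['Area', 'Year', 'TotalFloorArea']),
        ])),
       ('regressor', LinearRegression()),
   ])
   ```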



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
