[
https://issues.apache.org/jira/browse/BEAM-13983?focusedWorklogId=757423&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757423
]
ASF GitHub Bot logged work on BEAM-13983:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 15/Apr/22 15:19
Start Date: 15/Apr/22 15:19
Worklog Time Spent: 10m
Work Description: ryanthompson591 commented on code in PR #17368:
URL: https://github.com/apache/beam/pull/17368#discussion_r851328207
##########
sdks/python/apache_beam/ml/inference/sklearn_loader.py:
##########
@@ -0,0 +1,73 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import abc
+import enum
+import pickle
+import sys
+from dataclasses import dataclass
+from typing import Any
+from typing import Iterable
+from typing import List
+
+import joblib
+import numpy
+
+import apache_beam.ml.inference.api as api
+import apache_beam.ml.inference.base as base
+from apache_beam.io.filesystems import FileSystems
+
+
+class SerializationType(enum.Enum):
+ PICKLE = 1
+ JOBLIB = 2
+
+
+class SKLearnInferenceRunner(base.InferenceRunner):
+ def run_inference(self, batch: List[numpy.ndarray],
+ model: Any) -> Iterable[numpy.ndarray]:
+ # vectorize data for better performance
+ vectorized_batch = numpy.stack(batch, axis=0)
+ predictions = model.predict(vectorized_batch)
+ return [api.PredictionResult(x, y) for x, y in zip(batch, predictions)]
+
+ def get_num_bytes(self, batch: List[numpy.ndarray]) -> int:
+ """Returns the number of bytes of data for a batch."""
+ return sum(sys.getsizeof(element) for element in batch)
+
+
+class SKLearnModelLoader(base.ModelLoader):
+ def __init__(
+ self,
+ serialization: SerializationType = SerializationType.PICKLE,
+ model_uri: str = ''):
+ self._serialization = serialization
+ self._model_uri = model_uri
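For context, the vectorize-then-predict pattern in run_inference above can be
sketched standalone. This is a minimal illustration, not the PR's code:
api.PredictionResult is replaced with a plain tuple, and a toy
LinearRegression stands in for whatever model the loader would return.

```python
from typing import Any, Iterable, List, Tuple

import numpy
from sklearn.linear_model import LinearRegression


def run_inference(batch: List[numpy.ndarray],
                  model: Any) -> Iterable[Tuple[numpy.ndarray, float]]:
    # Stack the per-element arrays into one 2-D array so sklearn can
    # score the whole batch in a single vectorized predict() call.
    vectorized_batch = numpy.stack(batch, axis=0)
    predictions = model.predict(vectorized_batch)
    # Pair each input with its prediction (stand-in for PredictionResult).
    return [(x, y) for x, y in zip(batch, predictions)]


# Toy model: y = 2x, which linear regression fits exactly.
model = LinearRegression().fit([[0.0], [1.0], [2.0]], [0.0, 2.0, 4.0])
batch = [numpy.array([3.0]), numpy.array([4.0])]
results = list(run_inference(batch, model))
print(results)  # predictions close to 6.0 and 8.0
```

Stacking before predict() matters because sklearn estimators are much faster
on one (n, features) array than on n separate single-row calls.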
Review Comment:
I think it's fine to use different parameters. In my view, a path suggests a
directory, while a URI suggests a single file.
TFX-BSL uses saved_model_spec (which is a proto) containing model_path. As far
as I can tell, TF saves models to a path rather than to a URI or a single
file.
https://cloud.google.com/blog/topics/developers-practitioners/using-tfx-inference-dataflow-large-scale-ml-inference-patterns
What does pytorch do? Is it a single file or a path with a bunch of data?
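As a hypothetical sketch of how the SerializationType enum from the diff might
drive deserialization in the loader's load_model (that method body is not
shown in this hunk, so the open/pickle.load/joblib.load flow below is an
assumption for illustration, not the PR's implementation):

```python
import enum
import pickle
import tempfile


class SerializationType(enum.Enum):
    PICKLE = 1
    JOBLIB = 2


def load_model(path: str, serialization: SerializationType):
    # Hypothetical helper: deserialize a model file using the format
    # selected by the enum. The real loader would read via FileSystems.
    with open(path, 'rb') as f:
        if serialization == SerializationType.PICKLE:
            return pickle.load(f)
        # JOBLIB branch; joblib is typically present alongside sklearn.
        import joblib
        return joblib.load(f)


# Round-trip a stand-in "model" (a plain dict) through the PICKLE branch.
with tempfile.NamedTemporaryFile(suffix='.pkl', delete=False) as f:
    pickle.dump({'coef': [1.0, 2.0]}, f)
    model_path = f.name

model = load_model(model_path, SerializationType.PICKLE)
print(model)
```

Either way, both branches consume a single file handle, which is consistent
with model_uri naming a single file rather than a directory.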
Issue Time Tracking
-------------------
Worklog Id: (was: 757423)
Time Spent: 1.5h (was: 1h 20m)
> Implement RunInference for Scikit-learn
> ---------------------------------------
>
> Key: BEAM-13983
> URL: https://issues.apache.org/jira/browse/BEAM-13983
> Project: Beam
> Issue Type: Sub-task
> Components: sdk-py-core
> Reporter: Andy Ye
> Priority: P2
> Labels: run-inference
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> Implement RunInference for Scikit-learn as described in the design doc
> [https://s.apache.org/inference-sklearn-pytorch]
> There will be a sklearn_impl.py file that contains SklearnModelLoader and
> SklearnInferenceRunner classes.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)