[ https://issues.apache.org/jira/browse/BEAM-13983?focusedWorklogId=757433&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757433 ]

ASF GitHub Bot logged work on BEAM-13983:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 15/Apr/22 16:13
            Start Date: 15/Apr/22 16:13
    Worklog Time Spent: 10m 
      Work Description: ryanthompson591 commented on code in PR #17368:
URL: https://github.com/apache/beam/pull/17368#discussion_r851355795


##########
sdks/python/apache_beam/ml/inference/sklearn_loader.py:
##########
@@ -0,0 +1,73 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import abc
+import enum
+import pickle
+import sys
+from dataclasses import dataclass
+from typing import Any
+from typing import Iterable
+from typing import List
+
+import joblib
+import numpy
+
+import apache_beam.ml.inference.api as api
+import apache_beam.ml.inference.base as base
+import sklearn_loader
+from apache_beam.io.filesystems import FileSystems
+
+
+class SerializationType(enum.Enum):
+  PICKLE = 1
+  JOBLIB = 2
+
+
+class SKLearnInferenceRunner(base.InferenceRunner):
+  def run_inference(self, batch: List[numpy.array],
+                    model: Any) -> Iterable[numpy.array]:
+    # vectorize data for better performance
+    vectorized_batch = numpy.stack(batch, axis=0)
+    predictions = model.predict(vectorized_batch)
+    return [api.PredictionResult(x, y) for x, y in zip(batch, predictions)]
+
+  def get_num_bytes(self, batch: List[numpy.array]) -> int:
+    """Returns the number of bytes of data for a batch."""
+    return sum(sys.getsizeof(element) for element in batch)

Review Comment:
   Yeah, you pretty much got it. There is a little per-object overhead that getsizeof 
measures, but the result would be similar to what serialization reports.
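
To make that comparison concrete, here is a small sketch (not part of the PR; the
exact numbers are illustrative and vary by platform and numpy version) contrasting
sys.getsizeof, the raw buffer size, and the pickled size of a numpy array:

    import pickle
    import sys

    import numpy

    batch = [numpy.zeros(1000, dtype=numpy.float64) for _ in range(4)]

    element = batch[0]
    print(sys.getsizeof(element))      # data buffer plus ndarray object overhead
    print(element.nbytes)              # data buffer only: 1000 * 8 = 8000 bytes
    print(len(pickle.dumps(element)))  # serialized size, adds a pickle/array header

    # get_num_bytes in the snippet above sums getsizeof over the whole batch:
    print(sum(sys.getsizeof(e) for e in batch))

The point is that getsizeof, like pickling, adds a small fixed per-array header on
top of the element data, so either number is a reasonable proxy for batch size.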





Issue Time Tracking
-------------------

    Worklog Id:     (was: 757433)
    Time Spent: 2h 20m  (was: 2h 10m)

> Implement RunInference for Scikit-learn
> ---------------------------------------
>
>                 Key: BEAM-13983
>                 URL: https://issues.apache.org/jira/browse/BEAM-13983
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Andy Ye
>            Priority: P2
>              Labels: run-inference
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Implement RunInference for Scikit-learn as described in the design doc 
> [https://s.apache.org/inference-sklearn-pytorch]
> There will be a sklearn_impl.py file that contains SklearnModelLoader and 
> SklearnInferenceRunner classes.
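
For context, a hypothetical sketch of how those classes might be wired into a
pipeline; the import path, the model_uri parameter, and the RunInference entry
point are assumptions based on the issue description and the snippet above, not
the API as merged:

    import numpy
    import apache_beam as beam
    from apache_beam.ml.inference import base

    # Assumed import path and constructor: the issue only says sklearn_impl.py
    # will provide SklearnModelLoader, so 'model_uri' is a placeholder name.
    from apache_beam.ml.inference.sklearn_impl import SklearnModelLoader

    with beam.Pipeline() as pipeline:
        _ = (
            pipeline
            | beam.Create([numpy.array([1.0, 2.0]), numpy.array([3.0, 4.0])])
            | base.RunInference(SklearnModelLoader(model_uri='/tmp/model.pkl'))
            | beam.Map(print))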



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
