mathewjacob1002 commented on code in PR #41770:
URL: https://github.com/apache/spark/pull/41770#discussion_r1256417120


##########
python/pyspark/ml/torch/distributor.py:
##########
@@ -1003,3 +1006,112 @@ def _get_spark_partition_data_loader(
         # if num_workers is zero, we cannot set `prefetch_factor` otherwise
         # torch will raise error.
         return DataLoader(dataset, batch_size, num_workers=num_workers)
+
+
+class DeepspeedTorchDistributor(TorchDistributor):
+
+    def __init__(self, num_gpus: int = 1, nnodes: int = 1, local_mode: bool = True, use_gpu: bool = True, deepspeed_config=None):
+        """
+            @param: num_gpus: the number of GPUs per node (the same as the num_gpus argument of the deepspeed command)
+            @param: nnodes: the number of nodes to run on (analogous to the deepspeed command)
+            @param: local_mode: whether to run the training locally or as distributed training across the cluster
+            @param: use_gpu: whether or not to use GPUs
+            @param: deepspeed_config: either a dictionary of deepspeed config arguments or a string path to a config file.
+                    If nothing is specified, deepspeed will use its default optimizers and settings
+        """

Review Comment:
   Done!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

