As far as I understand, you need a GPU on each worker node, or you would
have to partition the GPU work across the nodes somehow, which I think
would defeat the purpose. In Databricks, for example, when you select GPU
workers, a GPU is allocated to each worker. I assume this is the
“correct” approach to the problem.
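
For what it's worth, Spark 3.x has GPU-aware resource scheduling, so you
can declare how many GPUs each executor and each task needs and let the
scheduler place GPU tasks only on workers that actually expose one. A
minimal sketch, assuming a Spark 3.x standalone cluster (the app name and
the discovery-script path are placeholders, not a definitive setup):

    import org.apache.spark.sql.SparkSession

    // Ask for one GPU per executor and one GPU per task, so tasks are
    // only scheduled on workers that advertise a GPU.
    val spark = SparkSession.builder()
      .appName("gpu-sketch")
      .config("spark.executor.resource.gpu.amount", "1")
      .config("spark.task.resource.gpu.amount", "1")
      // Standalone workers also need a discovery script that reports
      // their GPU addresses; Spark ships an example one under
      // examples/src/main/scripts/getGpusResources.sh.
      .config("spark.executor.resource.gpu.discoveryScript",
              "/path/to/getGpusResources.sh")
      .getOrCreate()

Inside a task you can then look up the assigned device with
TaskContext.get().resources()("gpu").addresses. Note that with a setup
like this, if only one worker advertises a GPU, every GPU task queues on
that one node, which is the lowest-common-denominator effect Mich
describes below.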

On Mon, 6 Feb 2023 at 8:17 AM, Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> If you have several nodes and only one node has GPUs, you still have to
> wait for the whole result set to complete. In other words, the job will
> only be as fast as the slowest node, the lowest common denominator.
>
> my postulation
>
> HTH
>
>
> On Sun, 5 Feb 2023 at 13:38, Irene Markelic <ir...@markelic.de> wrote:
>
>> Hello,
>>
>> Has anyone used Spark with GPUs? I wonder if every worker node in a
>> cluster needs its own GPU, or whether you can have several worker nodes
>> of which only one has a GPU.
>>
>> Thank you!
>>
