Hey Nick,

See my replies below

On 2019-08-06 8:09 p.m., xerofoify via talk wrote:
On Tue, Aug 6, 2019 at 4:23 PM Alex Volkov via talk <talk@gtalug.org> wrote:
Yes.

Unfortunately I went through the debugging process before I got to that 
paragraph. On the upside, I think I got a lightning talk out of it, which 
I'll try to present at the next meeting.

Alex.

Alex,
I don't know how much you're intending to do with that GPU or otherwise.
If you're just using Nvidia I can't help you, as mentioned, but if you're
interested in GPU workloads, I was looking at the AMDGPU backend for LLVM.
I'm not sure if there is one that targets Nvidia cards, but it may be of
interest to you, as you would be able to compile directly for the GPU
rather than using an API to access it. Not sure about Nvidia, so
double-check that.

Here is the official documentation for AMD, though:
https://llvm.org/docs/AMDGPUUsage.html
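For example, compiling a kernel straight to AMD GPU code looks roughly like this (a sketch only: the gfx name and file names are placeholders, and your clang must be built with the AMDGPU backend enabled):

```shell
# Check whether your LLVM build was compiled with the AMDGPU target
llc --version | grep -i amdgcn

# Compile an OpenCL kernel directly to AMD GPU object code
# (amdgcn-amd-amdhsa is the ROCm/HSA triple; gfx900 is a Vega example)
clang -target amdgcn-amd-amdhsa -mcpu=gfx900 -c kernel.cl -o kernel.o
```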

I haven't gotten that far into it, just ran a few ML tutorials. I haven't created any of my own models yet; my knowledge is limited to trying out the tensorflow package and getting tensorflow-gpu (the Nvidia bindings) verifiably working with the hardware.

Turns out NVIDIA drivers have this nice feature of falling back to CPU processing when there's an error -- this is useful when needing to get things done at any cost, not so much when attempting to debug the issue.
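A couple of checks that helped me catch the silent fallback (a hedged sketch; this assumes the tensorflow-gpu 1.x API and the standard NVIDIA driver tools):

```shell
# Does TensorFlow see the card at all? "False" here means ops will
# quietly run on the CPU instead of erroring out.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"

# Watch GPU utilization while a job runs; if it sits at 0% the driver
# has probably fallen back to the CPU.
nvidia-smi --loop=1
```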

So far I've mostly used the card for hardware-accelerated h264 encoding through ffmpeg.
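Roughly like this (the file names and bitrate are placeholders; it requires an ffmpeg build with NVENC support):

```shell
# Confirm the build actually includes the NVENC encoder
ffmpeg -hide_banner -encoders | grep nvenc

# Decode and encode on the GPU via h264_nvenc, keeping the CPU mostly idle
ffmpeg -hwaccel cuda -i input.mkv -c:v h264_nvenc -b:v 6M output.mp4
```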

If you're using it for machine learning it may be helpful to be aware of
this, as you could compile the libraries, where possible, for the GPU
target rather than access it indirectly through the CPU. Again, I'm not
sure which libraries support this, but you should be able to for most of
the popular ones, and that may increase throughput a lot, as it's direct
assembly for the card rather than an abstraction.

Thanks for the advice, I'm not far enough along in the process to use this information yet.

You seem to know a lot on the topic of optimizing workloads on GPU, would you like to come to our meeting next Tuesday and give a 5-10 minute talk on this? -- https://gtalug.org/meeting/2019-08/


As for GPU memory, that may be an issue, as Hugh mentioned, depending on
the size of the workload. I don't think it would matter for your tutorials,
but going across the PCI bus is about as bad as a cache miss is for a CPU,
so it's best to avoid it if possible. If you were able to find a 6GB
version, that would be more than enough for most workloads excluding
professional ones. 1060s shipped with either 3 or 6GB, so that may be
something to check on the card you ordered. At retail I recall it being
about a 30-50 Canadian dollar difference, and for double the RAM it was a
good deal at the time if you bought one.

There seem to be a lot of gamers who upgraded to a 1080 selling their used 1060 6GB cards for a reasonable price. I got the MSI GTX 1060 6GB version.

So far with h264 encoding I've noticed that there's a significant processing drop when the card finishes encoding a chunk of data and then saturates the PCI bus.


Alex.


Hopefully that helps a little,

Nick

P.S. I'm not aware of one, but I'm assuming there is a similar backend for
gcc as well, if you would prefer that for your development or learning.

On 2019-08-05 11:12 a.m., D. Hugh Redelmeier via talk wrote:

| From: Alex Volkov via talk <talk@gtalug.org>

| I have another system with Ryzen 5 2400G and was hoping to run ROCm on it, but
| as it turns out -- ROCm doesn't fully support AMD cards with built-in
| graphics. I still can install a discrete card into that system but the solution
| is not as cheap as getting a used GTX off craigslist.

In January I saw cheap Radeon RX 580s on Kijiji too.  I haven't looked
recently.

One advantage of AMD over nvidia is that larger memories are more common.

It's a shame about ROCm's lack of APU support.  Parts of it are there.

<https://rocm.github.io/hardware.html>

     The iGPU in AMD APUs

     The following APUs are not fully supported by the ROCm stack.

“Carrizo” and “Bristol Ridge” APUs
“Raven Ridge” APUs

     These APUs are enabled in the upstream Linux kernel drivers and the
     ROCm Thunk. Support for these APUs is enabled in the ROCm OpenCL
     runtime. However, support for them is not enabled in our HCC compiler,
     HIP, or the ROCm libraries. In addition, because ROCm is currently
     focused on discrete GPUs, AMD does not make any claims of continued
     support in the ROCm stack for these integrated GPUs.

     In addition, these APUs may not work due to OEM and ODM choices
     when it comes to key configuration parameters such as inclusion of
     the required CRAT tables and IOMMU configuration parameters in the
     system BIOS. As such, APU-based laptops, all-in-one systems, and
     desktop motherboards may not be properly detected by the ROCm drivers.
     You should check with your system vendor to see if these options are
     available before attempting to use an APU-based system with ROCm.


---
Post to this mailing list talk@gtalug.org
Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk

