Do we merge the cuda branch into 1.4? I didn't do the work directly
into the trunk because I wasn't sure of what I'd need to add to the
interface. In the end, the additions are
- the "tight" field, which just means whether children are tightly packed,
such as cores in an nvidia MultiProcessor, i.e
On 13/12/2011 12:14, Samuel Thibault wrote:
> Do we merge the cuda branch into 1.4? I didn't do the work directly
> into the trunk because I wasn't sure of what I'd need to add to the
> interface. In the end, the additions are
>
> - the "tight" field, which just means whether children are tightl
(sorry for the delay in replying...)
On Dec 9, 2011, at 3:22 PM, Brice Goglin wrote:
>> 1. Will there ever be any differentiation between cache levels in
>> hwloc_obj.type? I ask because in OMPI, we found that the various counting
>> routines were not helpful because they only search by *type*
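For context, here is a minimal sketch of filtering caches by level by hand in hwloc 1.x, where every level shares the HWLOC_OBJ_CACHE type and the level is only exposed through obj->attr->cache.depth (the helper name below is made up):

    #include <hwloc.h>

    /* Count the caches of a given level (e.g. 2 for L2) by walking all
     * HWLOC_OBJ_CACHE objects and checking their depth attribute. */
    static unsigned count_caches_of_level(hwloc_topology_t topology, unsigned level)
    {
        unsigned count = 0;
        hwloc_obj_t obj = NULL;
        while ((obj = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_CACHE, obj)) != NULL)
            if (obj->attr->cache.depth == level)
                count++;
        return count;
    }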
On Dec 13, 2011, at 8:09 AM, Brice Goglin wrote:
> Last feeling: The more I think about PCI support, the more I wonder
> whether it will be used for anything but getting nice lstopo outputs.
> Inline helpers are already great for many cases, people just need
> locality info in most cases, so I won
On 13/12/2011 16:22, Jeff Squyres wrote:
>
> I can't speak for GPUs, but I think the PCI information will be useful to
> know what devices are close to what PUs / NUMA nodes. That information can
> be used to make decisions about binding, for example (i.e., you want to be
> "close" to the sp
On Dec 13, 2011, at 10:40 AM, Brice Goglin wrote:
> In most cases, you don't need PCI support for this, you just manipulate
> a cuda device, an ibv_device, an MX endpoint, ... and use one of the
> inline helpers to get the corresponding locality (a cpuset).
I care mostly about Ethernet devices.
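For reference, a minimal sketch of the inline-helper usage Brice describes, assuming the hwloc/cuda.h helper from the cuda branch (hwloc/openfabrics-verbs.h works the same way for an ibv_device); the wrapper name here is made up:

    #include <hwloc.h>
    #include <hwloc/cuda.h>   /* inline helpers from the cuda branch */
    #include <cuda.h>

    /* Return the cpuset close to a given CUDA device; the caller frees it. */
    static hwloc_cpuset_t cuda_device_locality(hwloc_topology_t topology, CUdevice dev)
    {
        hwloc_cpuset_t set = hwloc_bitmap_alloc();
        if (hwloc_cuda_get_device_cpuset(topology, dev, set) < 0)
            /* if the OS cannot tell, assume the whole machine */
            hwloc_bitmap_copy(set, hwloc_topology_get_complete_cpuset(topology));
        return set;
    }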
On 13/12/2011 16:45, Jeff Squyres wrote:
> On Dec 13, 2011, at 10:40 AM, Brice Goglin wrote:
>
>> In most cases, you don't need PCI support for this, you just manipulate
>> a cuda device, an ibv_device, an MX endpoint, ... and use one of the
>> inline helpers to get the corresponding locality (a
On Dec 13, 2011, at 10:48 AM, Brice Goglin wrote:
> We can get the locality for those by reading
> /sys/class/net/<name>/device/local_cpus with name=eth0 and so on. That's
> very similar to what we do for OFED. I can add a helper for these
> devices as well if needed.
That would be great. It would cov
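A rough sketch of what such an Ethernet helper could look like, assuming the hwloc/linux.h hwloc_linux_parse_cpumap_file routine for parsing the kernel cpumask format; the function name below is made up:

    #include <stdio.h>
    #include <hwloc.h>
    #include <hwloc/linux.h>

    /* Fill 'set' with the CPUs close to network interface 'name' (e.g. "eth0"). */
    static int netdev_get_cpuset(const char *name, hwloc_cpuset_t set)
    {
        char path[128];
        FILE *file;
        int err;

        snprintf(path, sizeof(path), "/sys/class/net/%s/device/local_cpus", name);
        file = fopen(path, "r");
        if (!file)
            return -1;
        /* local_cpus uses the kernel cpumask format */
        err = hwloc_linux_parse_cpumap_file(file, set);
        fclose(file);
        return err;
    }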
On Dec 13, 2011, at 10:55 AM, Jeff Squyres wrote:
> i.e., outside of those with specialty/nice network types...
Hahaha! s/nice/niche/ is what I meant to type! :-)
Jeff Squyres, on Tue 13 Dec 2011 16:20:20 +0100, wrote:
> >> 2. It would be helpful to have a member in the obj that represents the
> >> logical AND of online_cpuset and allowed_cpuset.
> >
> > I am never sure about all this. I don't like all these cpusets. Samuel
> > will answer better :)
>
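For what it's worth, a minimal sketch of computing that AND by hand with the current bitmap API (the helper name is made up):

    #include <hwloc.h>

    /* CPUs of 'obj' that are both online and allowed; the caller frees the bitmap. */
    static hwloc_cpuset_t usable_cpuset(hwloc_obj_t obj)
    {
        hwloc_cpuset_t set = hwloc_bitmap_alloc();
        hwloc_bitmap_and(set, obj->online_cpuset, obj->allowed_cpuset);
        return set;
    }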
Brice Goglin, on Tue 13 Dec 2011 16:41:08 +0100, wrote:
> On 13/12/2011 16:22, Jeff Squyres wrote:
> >
> > I can't speak for GPUs, but I think the PCI information will be useful to
> > know what devices are close to what PUs / NUMA nodes. That information can
> > be used to make decisions
On Dec 13, 2011, at 12:02 PM, Samuel Thibault wrote:
>> In most cases, you don't need PCI support for this, you just manipulate
>> a cuda device, an ibv_device, an MX endpoint, ... and use one of the
>> inline helpers to get the corresponding locality (a cpuset).
>
> A problem I see there is that
On Dec 13, 2011, at 11:59 AM, Samuel Thibault wrote:
> I keep not having time to do what I'd like to do, like answering
> mails...
What -- you have a real job or something like that? ;-)
> I believe we already discussed something similar in the
> past and didn't really decide against or f
On 13/12/2011 18:02, Samuel Thibault wrote:
> Brice Goglin, on Tue 13 Dec 2011 16:41:08 +0100, wrote:
>> On 13/12/2011 16:22, Jeff Squyres wrote:
>>> I can't speak for GPUs, but I think the PCI information will be useful to
>>> know what devices are close to what PUs / NUMA nodes. That i
Brice Goglin, on Tue 13 Dec 2011 14:10:17 +0100, wrote:
> My main problem is that it's hard to know whether this will look good in
> two years when we'll have support for AMD APU, Intel MIC and other
> "strange" architectures. Which types should be common to CPUs and these
> accelerators? Might
You should also error out if --enable-cuda is given and cuda isn't
found. We got some complaints about this for XML and PCI. Just duplicate
the xml_happy stuff for cuda.
Brice
On 13/12/2011 18:15, sthib...@osl.iu.edu wrote:
> Author: sthibaul
> Date: 2011-12-13 12:15:09 EST (Tue, 13 Dec 2011)
+1
On Dec 13, 2011, at 12:19 PM, Brice Goglin wrote:
> You should also error out if --enable-cuda is given and cuda isn't
> found. We got some complaints about this for XML and PCI. Just duplicate
> the xml_happy stuff for cuda.
>
> Brice
>
>
>
> On 13/12/2011 18:15, sthib...@osl.iu.edu wrote:
On 13/12/2011 18:13, Jeff Squyres wrote:
> I see this in the category of: users really need to do this anyway, so
> either we're forcing every user to do it, or they're ignoring it and
> just using "available" or "online", which may not always give correct
> results. So we might as well provide
Brice Goglin, on Tue 13 Dec 2011 18:13:52 +0100, wrote:
> On 13/12/2011 18:02, Samuel Thibault wrote:
> > Brice Goglin, on Tue 13 Dec 2011 16:41:08 +0100, wrote:
> >> On 13/12/2011 16:22, Jeff Squyres wrote:
> >>> I can't speak for GPUs, but I think the PCI information will be useful to
Jeff Squyres, on Tue 13 Dec 2011 18:13:38 +0100, wrote:
> On Dec 13, 2011, at 11:59 AM, Samuel Thibault wrote:
> > I keep not having time to do what I'd like to do, like answering
> > mails...
>
> What -- you have a real job or something like that? ;-)
It's mostly the crazy end of the year. M
Brice Goglin, on Tue 13 Dec 2011 18:21:12 +0100, wrote:
> On 13/12/2011 18:13, Jeff Squyres wrote:
> > I see this in the category of: users really need to do this anyway, so
> > either we're forcing every user to do it, or they're ignoring it and
> > just using "available" or "online", which
On 13/12/2011 18:17, Samuel Thibault wrote:
> Brice Goglin, on Tue 13 Dec 2011 14:10:17 +0100, wrote:
>> My main problem is that it's hard to know whether this will look good in
>> two years when we'll have support for AMD APU, Intel MIC and other
>> "strange" architectures. Which types shoul
On 13/12/2011 18:47, Samuel Thibault wrote:
> I'd say that some people might want WHOLE_SYSTEM while still needing
> the bindable cpuset.
Let's wait for those people to complain before adding an 8th
cpuset/nodeset to the object structure. If they do complain and they
really don't want to AND the
On 13/12/2011 18:45, Samuel Thibault wrote:
>> As long as we don't add something obviously not portable, I am fine.
> The current openfabrics-verbs.h and cuda*.h are obviously not portable.
I meant "obviously cannot be ported to other OS".
Brice
Brice Goglin, on Tue 13 Dec 2011 18:58:14 +0100, wrote:
> >> Also I don't think the GPU caches should be L2 because they are not very
> >> similar to the CPU ones.
> > How so?
>
> In the same way the GPU memory is different from the NUMA node memory?
> Why would caches and cores be similar for
Brice Goglin, on Tue 13 Dec 2011 19:04:33 +0100, wrote:
> On 13/12/2011 18:45, Samuel Thibault wrote:
> >> As long as we don't add something obviously not portable, I am fine.
> > The current openfabrics-verbs.h and cuda*.h are obviously not portable.
>
> I meant "obviously cannot be ported
On 13/12/2011 19:36, Samuel Thibault wrote:
> Brice Goglin, on Tue 13 Dec 2011 19:04:33 +0100, wrote:
>> On 13/12/2011 18:45, Samuel Thibault wrote:
>>>> As long as we don't add something obviously not portable, I am fine.
>>> The current openfabrics-verbs.h and cuda*.h are obviously not portable.
On 13/12/2011 19:30, Samuel Thibault wrote:
> Ok, I thought you were referring to an architectural detail. Well,
> actually NUMA nodes and embedded memory should just both use a MEM
> object, instead of duplicating all kinds of objects. We won't duplicate
> such things for the MIC, will we?
My
On 14/12/11 00:09, Brice Goglin wrote:
> My main problem is that it's hard to know whether this will look good in
> two years when we'll have support for AMD APU, Intel MIC and other
> "strange" architectures.
We don't even really need to wait for th
Creating nightly hwloc snapshot SVN tarball was a success.
Snapshot: hwloc 1.4a1r4044
Start time: Tue Dec 13 21:01:01 EST 2011
End time: Tue Dec 13 21:07:34 EST 2011
Your friendly daemon,
Cyrador
Creating nightly hwloc snapshot SVN tarball was a success.
Snapshot: hwloc 1.3.1rc2r4045
Start time: Tue Dec 13 21:07:35 EST 2011
End time: Tue Dec 13 21:13:27 EST 2011
Your friendly daemon,
Cyrador