I need to call `cpuinfo_processors_count`. The files needed are listed below, followed by a short usage sketch:

https://github.com/pytorch/cpuinfo/blob/master/src/api.c#L16
https://github.com/pytorch/cpuinfo/blob/master/src/api.h

https://github.com/pytorch/cpuinfo/blob/master/src/x86/windows/init.c#L569
https://github.com/pytorch/cpuinfo/blob/master/src/x86/linux/init.c#L570
https://github.com/pytorch/cpuinfo/blob/master/src/x86/mach/init.c#L325

https://github.com/pytorch/cpuinfo/blob/master/src/arm/mach/init.c#L480
https://github.com/pytorch/cpuinfo/blob/master/src/arm/linux/init.c#L654
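
A short usage sketch, for reference. It assumes cpuinfo's public API
(cpuinfo_initialize(), cpuinfo_get_processors_count(), cpuinfo_get_cores_count())
rather than the internal cpuinfo_processors_count variable the links above point at:

    #include <cpuinfo.h>

    #include <cstdio>

    int main() {
      // Initialize cpuinfo; it detects the processor topology for the current platform.
      if (!cpuinfo_initialize()) {
        std::fprintf(stderr, "cpuinfo initialization failed\n");
        return 1;
      }
      std::printf("logical processors: %u\n", cpuinfo_get_processors_count());
      std::printf("physical cores:     %u\n", cpuinfo_get_cores_count());
      cpuinfo_deinitialize();
      return 0;
    }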

Best,
Alex

On 5/19/18, 9:51 AM, "Hen" <[email protected]> wrote:

    It's 2-clause BSD: https://github.com/pytorch/cpuinfo/blob/master/LICENSE
    
    Which files are you looking to incorporate? I want to see what the source
    headers look like.
    
    Hen
    
    On Fri, May 18, 2018 at 5:47 PM, Marco de Abreu <[email protected]> wrote:
    
    > By the way: in general, I'd prefer to have a solution in C++ as part of
    > our backend, as this would allow us to share this information with all
    > frontend APIs. Tobias is currently working on a PR that does exactly the
    > same, but for GPUs.
    >
    > Marco de Abreu <[email protected]> wrote on Sat., 19 May 2018, 02:45:
    >
    > > I would have to check the license, but I assume it's under Apache. In
    > > that case, it should be enough to copy the file and keep the original
    > > license header. Additionally, you have to document any changes you made
    > > and where you got it from. You can do this in the header, below the
    > > license.
    > >
    > > In any case, check the NOTICE file of the project and see if it contains
    > > any special instructions.
    > >
    > > -Marco
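
    For illustration, a header of that shape might look like the sketch below
    (hypothetical placeholders only; the actual copyright line and the full
    BSD-2-Clause text would have to be copied verbatim from the upstream file):

        /*
         * Copyright (c) <upstream cpuinfo copyright holders>
         *
         * <original BSD 2-Clause license text, copied verbatim>
         *
         * Modifications by the Apache MXNet project:
         *   - imported from https://github.com/pytorch/cpuinfo (src/api.c)
         *   - trimmed to what is needed to count processors/cores
         */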
    > >
    > > Anirudh <[email protected]> wrote on Sat., 19 May 2018, 00:27:
    > >
    > >> Hi Alex,
    > >>
    > >> I am no expert, but adding the license and the copyright to the header
    > >> of the file should be enough. Can someone who has experience with this
    > >> confirm?
    > >>
    > >> Anirudh
    > >>
    > >> On Fri, May 18, 2018 at 10:55 AM, Alex Zai <[email protected]> wrote:
    > >>
    > >> > For this issue (https://github.com/apache/incubator-mxnet/issues/10836),
    > >> > we need to determine the number of physical cores for each platform.
    > >> > Currently we assume each platform supports HyperThreading and just fetch
    > >> > the number of logical cores and divide by 2. However, in cases where the
    > >> > machine does not support HTT, we underutilize the CPUs. Per the issue’s
    > >> > thread, the PyTorch organization has a library that does just this
    > >> > (https://github.com/pytorch/cpuinfo). The library is a bit heavy and we
    > >> > only need a small portion of the code. Does anyone know if there is an
    > >> > issue with just using a subset of the code?
    > >> >
    > >> >
    > >> >
    > >> > Alex
    > >> >
    > >>
    > >
    >
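
    For reference, the divide-by-two heuristic described in the quoted question
    amounts to something like the following (a sketch, not MXNet's actual code):

        #include <algorithm>
        #include <thread>

        // Assumes every platform exposes 2 hardware threads per core and halves
        // the logical count. On a machine without HTT/SMT this reports only half
        // of the real physical cores, which is the underutilization described above.
        static unsigned GuessPhysicalCores() {
          unsigned logical = std::thread::hardware_concurrency();
          return std::max(1u, logical / 2);
        }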
    
