The hugepage mapping is done in DPDK as part of the library
initialization, so this looks like a DPDK issue.

Would you mind reporting that to the dpdk-dev list?
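For reference, the numbers in the report below are self-consistent: with a 2 MB
hugepage size, "--socket-mem 128,128" asks for 128 MB per NUMA socket, i.e.
256 MB total, which is exactly the 128 pages per vswitchd observed. A minimal
sketch of that arithmetic (values taken from the report, nothing queries a live
system):

```shell
# Relate --socket-mem to 2 MB hugepage consumption.
socket_mem="128,128"   # MB per NUMA socket, as passed to the DPDK EAL
page_mb=2              # Hugepagesize: 2048 kB

# Sum the per-socket amounts, then convert MB to 2 MB pages.
total_mb=$(echo "$socket_mem" | tr ',' '\n' | awk '{s+=$1} END {print s}')
echo $(( total_mb / page_mb ))   # pages retained per vswitchd -> 128
```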

On 10/03/2016 11:44, "discuss on behalf of John Wei"
<discuss-boun...@openvswitch.org on behalf of johnt...@gmail.com> wrote:

>I was trying to run multiple copies of ovs-vswitchd in different
>containers on the same host, but I was not able to start them in parallel
>because they all seem to grab all the available memory at startup, even
>though I have specified the --socket-mem parameter. Is there a workaround
>for this?
>
>
>If I start the 2nd ovs-vswitchd after the first one is up and has
>released the unneeded memory, then I am able to bring up both.
>
>Information on my environment:
>
>AnonHugePages:    161792 kB
>HugePages_Total:    8192        <-- that is 16 GB
>HugePages_Free:     7936        <-- each vswitchd used 128 pages
>HugePages_Rsvd:        0
>HugePages_Surp:        0
>Hugepagesize:       2048 kB
>
>
>On my 2-core machine, I started vswitchd with a limit of 256 MB:
>
>--socket-mem 128,128
>
>
>It appears that vswitchd looks at the HugePages_Free value, grabs all of
>that free memory, and then reduces its usage to the limit specified in
>the --socket-mem option.
>
>
>John

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
