On Tue, May 26, 2020 at 9:41 AM Wonsup Yoon <[email protected]> wrote:

> Actually, I used preempt_lock to prevent data races.
> If two concurrent threads on a core access the same per-cpu variable, I think
> we still need the preempt lock.
>

This is true - if you have two threads on the same core that access the
same per-cpu variable, you need some sort of locking.
The preempt lock isn't the only way, of course - you can also use a mutex,
std::atomic, or other solutions.
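
For example, one alternative is to make the per-CPU element itself atomic, so
an increment can't be lost to preemption on the same core. A minimal sketch
(not from your code - the wrapper struct just guarantees the atomic starts at
zero on every CPU):

    #include <atomic>
    #include <osv/percpu.hh>

    struct atomic_counter {
        std::atomic<int> x{0};
    };

    dynamic_percpu<atomic_counter> c;

    void inc()
    {
        // A single atomic read-modify-write: a thread preempted around this
        // point cannot overwrite another thread's increment. If the thread
        // migrates mid-call, the increment simply lands on the previous CPU's
        // copy, which is still counted when you sum over all CPUs.
        c->x.fetch_add(1, std::memory_order_relaxed);
    }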

The preempt lock is indeed usually the fastest method, but as you saw it comes
with strings attached - the code inside the locked section really cannot cause
any preemption, which means it can't wait for any mutex or do anything else
that might sleep (including delayed symbol resolution, which might wait for a
mutex). In addition, you need to make sure the entire object is already in
memory and doesn't need to be demand-paged, or you may get a preemption in the
middle of the code just to read in another page of the executable.

We have a macro OSV_ELF_MLOCK_OBJECT() (from <osv/elf.hh>) which marks the
object with a flag (a .note.osv-mlock section) that ensures *both* things:
the object is entirely read into memory on start, and all of its symbols are
resolved on start. You can see examples of OSV_ELF_MLOCK_OBJECT() being used
in a bunch of tests in tests/. If you use this macro, you don't need to change
how your code is compiled.
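
For example, a minimal sketch of your test program with the macro added - the
OSV_ELF_MLOCK_OBJECT() line is the only change compared to your example below:

    #include <osv/elf.hh>
    #include <osv/preempt-lock.hh>
    #include <osv/percpu.hh>

    // Adds the .note.osv-mlock section: the whole object is read into memory
    // and all of its symbols are resolved when it is loaded, so the
    // preempt_lock section below can't page-fault or trigger lazy PLT
    // resolution.
    OSV_ELF_MLOCK_OBJECT();

    struct counter { int x = 0; void inc() { x += 1; } };
    dynamic_percpu<counter> c;

    int main(int argc, char *argv[])
    {
        SCOPE_LOCK(preempt_lock);
        c->inc();
        return 0;
    }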

> Example:
>
> counter's initial value: 0
>
>                                   CPU 0
> Thread A            A_local = counter + 1   (A_local = 1)
> Thread A                 *(preemption)*
> Thread B            B_local = counter + 1   (B_local = 1)
> Thread B            counter = B_local         (counter = 1)
> Thread B                   *(exit)*
> Thread A             counter = A_local        (counter = 1)
>
> I expect counter to be 2, but it ends up as 1.
>
>
> On Tuesday, May 26, 2020 at 3:04:24 PM UTC+9, Nadav Har'El wrote:
>>
>>
>> On Tue, May 26, 2020 at 4:22 AM Wonsup Yoon <[email protected]> wrote:
>>
>>> Thank you for the response.
>>>
>>> Yes, dynamic_percpu<T> is perfect for my purpose.
>>>
>>> However, I encountered another issue.
>>>
>>> If I use dynamic_percpu with preempt_lock (I think it is a very common
>>> pattern), it aborts due to a failed assertion.
>>> It seems lazy binding conflicts with the preemption lock.
>>> So, I had to add the -fno-plt option, and then it works.
>>>
>>
>> You are right about the preempt lock and your workaround for lazy binding.
>> However, to use a per-cpu variable, you don't need full preemption
>> locking - all you need is *migration* locking. In other words, the thread
>> running this code should not be migrated to a different CPU (that would
>> change the meaning of the per-cpu variable while you're using it), but it
>> is perfectly fine for the thread to be preempted to run a different thread
>> - as long as the original thread eventually returns to run on the same CPU
>> it previously ran on.
>>
>> So just replace your use of "preempt_lock" by "migration_lock" (include
>> <osv/migration-lock.hh>) and everything should work, without disabling lazy
>> binding.
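>>
>> For example, a minimal sketch of that change, reusing the counter struct and
>> the dynamic_percpu<counter> c from your example below:
>>
>>     #include <osv/migration-lock.hh>
>>     #include <osv/percpu.hh>
>>
>>     int main(int argc, char *argv[])
>>     {
>>         // migration_lock only forbids moving this thread to another CPU;
>>         // preemption (and therefore lazy PLT resolution) stays allowed.
>>         SCOPE_LOCK(migration_lock);
>>         c->inc();
>>         return 0;
>>     }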
>>
>> Please note that if you use the per-cpu variable on a thread which is
>> already bound to a specific CPU (which was the case in the original code
>> you shared), you don't even need the migration lock! A pinned thread
>> already can't migrate to any other CPU, so it doesn't need this
>> migration-avoidance mechanism at all. You can use per-cpu variables on such
>> threads without any special protection.
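>>
>> A sketch of that pinned-thread case (take the scheduler calls with a grain
>> of salt - sched::thread::make, sched::thread::attr().pin and sched::cpus
>> are written here from memory, so please check the exact signatures in
>> <osv/sched.hh>):
>>
>>     #include <osv/sched.hh>
>>     #include <osv/percpu.hh>
>>
>>     struct counter { int x = 0; void inc() { x += 1; } };
>>     dynamic_percpu<counter> c;
>>
>>     void run_pinned_worker()
>>     {
>>         // The thread can only ever run on cpus[0], so its view of "c"
>>         // never changes under it - no preempt_lock or migration_lock is
>>         // needed around the access.
>>         auto* t = sched::thread::make([] { c->inc(); },
>>                 sched::thread::attr().pin(sched::cpus[0]));
>>         t->start();
>>         t->join();
>>         delete t;
>>     }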
>>
>>
>>>
>>>
>>>
>>> Example code:
>>>
>>> #include <stdio.h>
>>> #include <assert.h>
>>>
>>> #include <osv/preempt-lock.hh>
>>> #include <osv/percpu.hh>
>>>
>>> struct counter {
>>>     int x = 0;
>>>
>>>     void inc() {
>>>         x += 1;
>>>     }
>>>
>>>     int get() {
>>>         return x;
>>>     }
>>> };
>>>
>>> dynamic_percpu<counter> c;
>>>
>>> int main(int argc, char *argv[])
>>> {
>>>     SCOPE_LOCK(preempt_lock);
>>>     c->inc();
>>>
>>>     return 0;
>>> }
>>>
>>>
>>> Backtrace:
>>>
>>> [backtrace]
>>> 0x000000004023875a <__assert_fail+26>
>>> 0x000000004035860c <elf::object::resolve_pltgot(unsigned int)+492>
>>> 0x0000000040358669 <elf_resolve_pltgot+57>
>>> 0x000000004039e2ef <???+1077535471>
>>> 0x000010000000f333 <???+62259>
>>> 0x000000004042a47c <osv::application::run_main()+60>
>>> 0x0000000040224bd0 <osv::application::main()+144>
>>> 0x000000004042a628 <???+1078109736>
>>> 0x0000000040462715 <???+1078339349>
>>> 0x00000000403fac86 <thread_main_c+38>
>>> 0x000000004039f632 <???+1077540402>
>>>
>>>
>>>
>>>
>>> On Sunday, May 24, 2020 at 5:26:17 PM UTC+9, Nadav Har'El wrote:
>>>>
>>>>
>>>> On Sat, May 23, 2020 at 6:35 PM Wonsup Yoon <[email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I'm trying to use PERCPU macro in application or module.
>>>>>
>>>>
>>>> Hi,
>>>>
>>>> The PERCPU macro does not support this. What it does is add information
>>>> about this variable to a special section of the executable (".percpu");
>>>> then arch/x64/loader.ld makes sure all these entries end up together
>>>> between "_percpu_start" and "_percpu_end", and finally sched.cc creates a
>>>> copy of this data for every CPU (in the cpu::cpu(id) constructor). So if
>>>> a loadable module (shared library) contains another per-cpu variable, it
>>>> never gets added to the percpu area.
>>>>
>>>> However, I believe we do have a mechanism that will suit you:
>>>> *dynamic_percpu<T>*.
>>>> You can create (and destroy) such an object of type dynamic_percpu<T>
>>>> at any time, and it does the right thing: the variable will be allocated
>>>> on all CPUs when the object is created, will be allocated on any CPUs
>>>> added later, and will be freed when the object is destroyed.
>>>> In your case you can have a global dynamic_percpu<T> variable in your
>>>> loadable module. This object will be created when the module is loaded,
>>>> and destroyed when the module is unloaded - which is what you want.
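>>>>
>>>> For example, a minimal sketch of such a module-global (the name
>>>> "requests" is just illustrative):
>>>>
>>>>     #include <osv/percpu.hh>
>>>>
>>>>     struct counter { long n = 0; };
>>>>
>>>>     // Constructed when the shared object is loaded (allocating a copy on
>>>>     // every CPU, and on any CPU added later) and destroyed when it is
>>>>     // unloaded (freeing all the copies).
>>>>     static dynamic_percpu<counter> requests;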
>>>>
>>>> Nadav.
>>>>
