On Wed, Sep 24, 2014 at 1:12 PM, Eitan Eliahu <[email protected]> wrote:
> I just wanted to be sure that this change does not apply to 64 bit
> compilation, where a 64 bit aligned move is atomic.
Okay. That is taken care of now.
> Yes, this is another reason to move to 64 bit builds.
I think we can make an attempt to move to 64 bit builds once we have
end-to-end (full integration) OVS working on Hyper-V with decent
performance and stability. IMO, it is a little early to pick up this
work now.

> Thanks,
> Eitan
>
>
> -----Original Message-----
> From: Gurucharan Shetty [mailto:[email protected]]
> Sent: Wednesday, September 24, 2014 1:09 PM
> To: Eitan Eliahu
> Cc: [email protected]; Gurucharan Shetty
> Subject: Re: [ovs-dev] [PATCH] ovs-atomic-msvc: Fix 64 bit atomic read/writes.
>
> On Wed, Sep 24, 2014 at 12:54 PM, Eitan Eliahu <[email protected]> wrote:
>>
>> Guru, can we differentiate between MSVC 32bit and 64bit builds?
> The ovs-atomic-msvc.h is only included for 32 bit builds (see
> ovs-atomic.h), so everything inside it is 32 bit compatible only. If the
> suggestion is that we should change the name from ovs-atomic-msvc to
> something like ovs-atomic-msvc586, I think we can do it once/if we
> actually support 64 bit builds on Windows.
>
>
>> Thanks,
>> Eitan
>>
>> -----Original Message-----
>> From: dev [mailto:[email protected]] On Behalf Of Gurucharan
>> Shetty
>> Sent: Wednesday, September 24, 2014 11:53 AM
>> To: [email protected]
>> Cc: Gurucharan Shetty
>> Subject: [ovs-dev] [PATCH] ovs-atomic-msvc: Fix 64 bit atomic read/writes.
>>
>> MSVC converts 64 bit read/writes into two instructions (uses 'mov', as
>> seen through 'cl //FAs'). So there is a possibility that an interrupt
>> can make a 64 bit read/write non-atomic even when 8 byte aligned. So we
>> cannot use a simple assignment; use a full memory barrier function
>> instead.
>>
>> Reported-by: Jarno Rajahalme <[email protected]>
>> Signed-off-by: Gurucharan Shetty <[email protected]>
>> ---
>>  lib/ovs-atomic-msvc.h |   32 +++++++++++++++++++-------------
>>  1 file changed, 19 insertions(+), 13 deletions(-)
>>
>> diff --git a/lib/ovs-atomic-msvc.h b/lib/ovs-atomic-msvc.h
>> index e8e4891..c6a7db3 100644
>> --- a/lib/ovs-atomic-msvc.h
>> +++ b/lib/ovs-atomic-msvc.h
>> @@ -97,14 +97,22 @@ atomic_signal_fence(memory_order order)
>>          *(DST) = (SRC);                                                 \
>>      }
>>
>> -/* 64 bit writes are atomic on i586 if 64 bit aligned. */
>> +/* MSVC converts 64 bit writes into two instructions. So there is
>> + * a possibility that an interrupt can make a 64 bit write non-atomic even
>> + * when 8 byte aligned. So use InterlockedExchange64().
>> + *
>> + * For atomic stores, 'consume' and 'acquire' semantics are not valid. But we
>> + * are using 'Exchange' to get atomic stores here and we only have
>> + * InterlockedExchange64(), InterlockedExchangeNoFence64() and
>> + * InterlockedExchangeAcquire64() available. So we are forced to use
>> + * InterlockedExchange64(), which uses a full memory barrier for everything
>> + * greater than 'memory_order_relaxed'. */
>>  #define atomic_store64(DST, SRC, ORDER)                                 \
>> -    if (((size_t) (DST) & (sizeof *(DST) - 1))                          \
>> -        || ORDER == memory_order_seq_cst) {                             \
>> -        InterlockedExchange64((int64_t volatile *) (DST),               \
>> -                              (int64_t) (SRC));                         \
>> +    if (ORDER == memory_order_relaxed) {                                \
>> +        InterlockedExchangeNoFence64((int64_t volatile *) (DST),        \
>> +                                     (int64_t) (SRC));                  \
>>      } else {                                                            \
>> -        *(DST) = (SRC);                                                 \
>> +        InterlockedExchange64((int64_t volatile *) (DST), (int64_t) (SRC));\
>>      }
>>
>>  /* Used for 8 and 16 bit variations. */
>> @@ -149,16 +157,14 @@ atomic_signal_fence(memory_order order)
>>  #define atomic_readX(SRC, DST, ORDER)                                   \
>>      *(DST) = *(SRC);
>>
>> -/* 64 bit reads are atomic on i586 if 64 bit aligned. */
>> +/* MSVC converts 64 bit reads into two instructions. So there is
>> + * a possibility that an interrupt can make a 64 bit read non-atomic even
>> + * when 8 byte aligned. So use the full memory barrier function
>> + * InterlockedOr64(). */
>>  #define atomic_read64(SRC, DST, ORDER)                                  \
>> -    if (((size_t) (SRC) & (sizeof *(SRC) - 1)) == 0) {                  \
>> -        *(DST) = *(SRC);                                                \
>> -    } else {                                                            \
>>      __pragma (warning(push))                                            \
>>      __pragma (warning(disable:4047))                                    \
>> -        *(DST) = InterlockedOr64((int64_t volatile *) (SRC), 0);        \
>> -        __pragma (warning(pop))                                         \
>> -    }
>> +    *(DST) = InterlockedOr64((int64_t volatile *) (SRC), 0);            \
>> +    __pragma (warning(pop))
>>
>>  #define atomic_read(SRC, DST)                          \
>>      atomic_read_explicit(SRC, DST, memory_order_seq_cst)
>> --
>> 1.7.9.5
>>
>> _______________________________________________
>> dev mailing list
>> [email protected]
>> http://openvswitch.org/mailman/listinfo/dev

_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev
