On 06/29/2011 05:47 PM, Jonas Maebe wrote:
...
Thanks for the multiple pointers.
I was just trying to construct an example that
(a) is similar to stuff a normal user might think would be sure to work, and
(b) if the cache-sync problems really exist in the way discussed, is
likely to fail on
Vinzent Höfler schrieb:
Question is, what makes one variable use read/write-through, while
other variables can be read from the cache, with lazy-write?
Synchronisation. Memory barriers. That's what they are for.
And this doesn't happen out of thin air. How else?
Ok, maybe I misunderstood
On 06/29/2011 09:00 PM, Vinzent Höfler wrote:
POSIX: pthread_mutex_(un)lock() & Co.
Or, maybe I didn't understand the question...
I suppose, you did understand what I intended to say.
Regarding FPC, TCriticalSection is a decent encapsulation for
pthread_mutex_... when used in this way.
On 06/29/2011 09:44 PM, Vinzent Höfler wrote:
On Wed, 29 Jun 2011 13:28:20 +0200, Hans-Peter Diettrich
Ada2005 RM:
|C.6(16): For a volatile object all reads and updates of the object as
| a whole are performed directly to memory.
|C.6(20): The external effect [...] is defined to
30.06.2011 13:31, Hans-Peter Diettrich:
If so, would it help to enclose above instructions in e.g.
Synchronized begin
update the links...
end;
If by such hypothetical synchronized operator you mean just memory
barriers and nothing else, then AFAICS this would not be of much use in
practice,
On 30 Jun 2011, at 10:42, Michael Schnell wrote:
On 06/29/2011 09:44 PM, Vinzent Höfler wrote:
That's Ada's definition of volatile. C's definition is weaker, but
should basically have the same effect.
Nice and what is the FPC definition ?
There is none. FPC has a volatile modifier
On Thu, Jun 30, 2011 at 4:31 AM, Hans-Peter Diettrich
drdiettri...@aol.com wrote:
After these considerations I'd understand that using Interlocked
instructions in the code would ensure such read/write-through, but merely as
a side effect - they also lock the bus for every instruction, which is
On 06/30/2011 11:52 AM, Jonas Maebe wrote:
On 30 Jun 2011, at 10:38, Michael Schnell wrote:
Regarding FPC, TCriticalSection is a decent encapsulation for
pthread_mutex_... when used in this way.
But e.g. if you use a TThreadList instance myList with multiple
threads it can't be the way to
On 06/29/2011 09:44 PM, Vinzent Höfler wrote:
If they are accessed by only one thread, I'd assert that each core's view
on its own cache is not susceptible to memory ordering issues
I don't suppose this is that simple.
AFAIK, the cache does not work on byte addresses, but on entities of
On 06/30/2011 11:45 AM, Jonas Maebe wrote:
There is none. FPC has a volatile modifier in svn trunk, but it
currently only affects the node tree optimizer.
...
Its only use is for memory mapped I/O.
I don't suppose the node tree optimizer is memory mapped I/O ???
-Michael
On 06/30/2011 11:31 AM, Hans-Peter Diettrich wrote:
Consider the shareable bi-linked list, where insertion requires code
like this:
list.Lock; //prevent concurrent access
... //determine affected list elements
new.prev := prev; //prev must be guaranteed to be valid
new.next := next;
On Thu, Jun 30, 2011 at 8:04 AM, Michael Schnell mschn...@lumino.de wrote:
- if the potential cache incoherency were not handled by hardware / OS
/ libraries on behalf of user-land programs, I feel that this would be so
disastrous and ubiquitous that it would result in so many programs not working
On 06/30/2011 03:29 PM, Andrew Brunner wrote:
In a case I observed, it did cause a significant problem to the
server. Yes, it was disastrous, and ONLY evident during stress tests.
... which encourages me to suggest that it is a nasty bug _somewhere_.
Maybe even in the hardware used.
I can't
Am 30.06.2011 14:53, schrieb Michael Schnell:
On 06/30/2011 11:45 AM, Jonas Maebe wrote:
There is none. FPC has a volatile modifier in svn trunk, but it
currently only affects the node tree optimizer.
...
Its only use is for memory mapped I/O.
I don't suppose the node tree optimizer is memory
On 30 Jun 2011, at 14:26, Michael Schnell wrote:
On 06/30/2011 11:52 AM, Jonas Maebe wrote:
On 30 Jun 2011, at 10:38, Michael Schnell wrote:
But e.g. if you use a TThreadList instance myList with multiple
threads it can't be the way to go to include any occurrence of
myList.xxx by a
On 06/28/2011 06:07 PM, Andrew Brunner wrote:
You can stick your head in the sand all you want, just don't run your
code on multi-core cpus and expect valid stability - and come back
here complaining on how unstable your multi-threaded application is
due to FPC design!
If a correctly done Posix
On 06/28/2011 06:33 PM, Andrew Brunner wrote:
Remember ***cores!=threads*** people.
Wrong regarding the issue in question (see the message by Jonas).
I'm at a loss for words. So you equate threads to cores?
A (POSIX-compliant) piece of user software needs to consider each thread as
running on its
On 06/28/2011 10:25 PM, Hans-Peter Diettrich wrote:
Can you run your test again, assuring that only one thread can access
the list at the same time, but *without* the Interlocked updates?
This would be a very nice move of Andrew's !!!
If he uses a single TCriticalSection instance to
On 28/06/11 15:15, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 9:00 AM, Henry Vermaak henry.verm...@gmail.com wrote:
On 28/06/11 14:23, Andrew Brunner wrote:
There is no problem no need for volatile variables. Compare and Swap
or Interlocked mechanisms will solve any problems.
Nope. You
Vinzent Höfler schrieb:
Question is, what makes one variable use read/write-through, while
other variables can be read from the cache, with lazy-write?
Synchronisation. Memory barriers. That's what they are for.
And this doesn't happen out of thin air. How else?
Is this a compiler
On 06/28/2011 06:42 PM, Hans-Peter Diettrich wrote:
I could not find a definition of the mutex struct, to determine
whether it contains any user-alterable values. When the value is
declared outside the mutex struct, it will be accessible also
*without* locking the mutex first.
What do you
29.06.2011 15:28, Hans-Peter Diettrich:
But if so, which variables (class fields...) can ever be treated as
non-volatile, when they can be used from threads other than the main
thread?
Without explicit synchronisation? Actually, none.
Do you understand the implication of your answer?
When
On 06/28/2011 08:09 PM, Hans-Peter Diettrich wrote:
When you have a look at TThreadList.LockList/UnlockList, then you'll
see that LockList enters the critical section, and UnlockList leaves it.
Yep. This is how a CS works.
All code executed in between such two calls is absolutely ignorant of
On 06/28/2011 07:05 PM, Vinzent Höfler wrote:
On Tue, 28 Jun 2011 15:54:35 +0200, Michael Schnell
mschn...@lumino.de wrote:
No, it can't. volatile just ensures that accessing the variable
results in actual memory accesses. That does not mean cache coherence,
so another core may still see
On 06/29/2011 01:06 AM, Vinzent Höfler wrote:
Without explicit synchronisation? Actually, none.
How to do such synchronization with normal portable user-program
programming (aka Posix means).
-Michael
___
fpc-devel maillist -
On 06/29/2011 01:28 PM, Hans-Peter Diettrich wrote:
Do you understand the implication of your answer?
Regarding objects it would mean that it's forbidden to create an object
in one thread and use it in another one. This is done very often. I can't
believe that the hardware is that bad.
On 28 Jun 2011, at 22:25, Hans-Peter Diettrich wrote:
Andrew Brunner schrieb:
Wrong. Sigh... Order of execution is paramount just about
everywhere.
It can be disastrous if not understood.
Remember ***cores!=threads*** people.
If your experience is really that chaotic, I think it's worth
On 06/29/2011 03:17 PM, Nikolai Zhubr wrote:
All places where any non-readonly data could be accessed by 2 or more
threads should be protected. That's it.
So this is not supposed to work:
Main thread:
myThread := TmyThread.Create(True);
while not myThread.Suspended do sleep(0); //give
Here is a nice example of one that actually works...
http://wiki.lazarus.freepascal.org/Manager_Worker_Threads_System
Granted I wrote this sample and Wiki a long time ago, but you may want
to read this ;-)
On 06/29/2011 04:33 PM, Andrew Brunner wrote:
Here is a nice example of one that actually works...
http://wiki.lazarus.freepascal.org/Manager_Worker_Threads_System
Nice.
But actually what we need is an example that does not work and shows a
case that the cache incoherency in fact is not
Michael Schnell schrieb:
All code executed in between such two calls is absolutely ignorant of
the state of the CS, there is no in/outside.
The state (relevant to this thread) of the CS does not change while some
code of the thread is between enter and leave. So it is not ignorant,
but it does
Nikolai Zhubr schrieb:
29.06.2011 15:28, Hans-Peter Diettrich:
But if so, which variables (class fields...) can ever be treated as
non-volatile, when they can be used from threads other than the main
thread?
Without explicit synchronisation? Actually, none.
Do you understand the implication
29.06.2011 18:31, Michael Schnell:
[...]
So this is not supposed to work:
Main thread:
myThread := TmyThread.Create(True);
while not myThread.Suspended do sleep(0); //give up time slice to allow the
worker thread to start
myList := TThreadlist.Create; // set the variable in cache 1
On 06/29/2011 05:28 PM, Hans-Peter Diettrich wrote:
The code in a called subroutine doesn't know about the CS. [It usually
also doesn't care about from which exact thread it was called]
This means that possible recursion must be prevented in all related
code in a CS, or (safer) that the CS
On 06/29/2011 05:57 PM, Hans-Peter Diettrich wrote:
This means that a simplified version of TThreadList would be nice,
that allows to wrap a single shareable object and make it usable in an
thread-safe way.
IMHO just TList is OK for this purpose. It (supposedly) is per-instance
thread
29.06.2011 19:57, Hans-Peter Diettrich:
[...]
imply that in detail all application-specific
objects must be either local to a thread, or must be protected against
concurrent access (shareable)?
IMHO yes.
[...]
Possibly the language could be extended to help in the determination of
On 06/29/2011 05:29 PM, Nikolai Zhubr wrote:
I somehow doubt that passing these system calls could let something
remain unflushed in the cache, as the OS will most probably have to do
some synchronization inside these calls for internal bookkeeping, but
this is IMHO kind of a side-effect.
On 29 Jun 2011, at 17:29, Nikolai Zhubr wrote:
29.06.2011 18:31, Michael Schnell:
[...]
So this is not supposed to work:
Main thread:
myThread := TmyThread.Create(True);
while not myThread.Suspended do sleep(0); //give up time slice to allow the
worker thread to start
sleep(0) does not
On Wed, 29 Jun 2011 16:31:32 +0200, Michael Schnell mschn...@lumino.de
wrote:
On 06/29/2011 03:17 PM, Nikolai Zhubr wrote:
All places where any non-readonly data could be accessed by 2 or more
threads should be protected. That's it.
So this is not supposed to work:
Precisely, it is not
On Wed, 29 Jun 2011 15:57:15 +0200, Michael Schnell mschn...@lumino.de
wrote:
On 06/29/2011 01:06 AM, Vinzent Höfler wrote:
Without explicit synchronisation? Actually, none.
How to do such synchronization with normal portable user-program
programming (aka Posix means).
POSIX:
On Wed, 29 Jun 2011 13:28:20 +0200, Hans-Peter Diettrich
drdiettri...@aol.com wrote:
Vinzent Höfler schrieb:
Question is, what makes one variable use read/write-through, while
other variables can be read from the cache, with lazy-write?
Synchronisation. Memory barriers. That's what they
A similar discussion is going on in Lazarus-develop, but this obviously
is a compiler question.
In C, there is the volatile keyword that ensures that after the code
sequence enters the next C instruction after that which modified this
variable, another thread sees the correct state of the
On 06/28/2011 01:20 PM, Henry Vermaak wrote:
Operations on volatile variables are not atomic,
That is of course known.
nor do they establish a proper happens-before relationship for threading.
I see. So maybe part of my question is invalid.
But as pthread_mutex (and the appropriate Windows
On Tue, Jun 28, 2011 at 6:14 AM, Michael Schnell mschn...@lumino.de wrote:
For variables not defined as volatile, (e.g.) pthread_mutex (and similar
stuff on Windows) can be used to protect them.
A mutex may be able to atomically block access because of its own
memory barrier, but I would
On 28 Jun 2011, at 14:58, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 6:14 AM, Michael Schnell
mschn...@lumino.de wrote:
For variables not defined as volatile, (e.g.) pthread_mutex (and
similar
stuff on Windows) can be used to protect them.
A mutex may be able to atomically block
On Tue, Jun 28, 2011 at 8:00 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
On 28 Jun 2011, at 14:58, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 6:14 AM, Michael Schnell mschn...@lumino.de
wrote:
For variables not defined as volatile, (e.g.) pthread_mutex (and similar
stuff on
On 28 Jun 2011, at 15:05, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 8:00 AM, Jonas Maebe jonas.ma...@elis.ugent.be
wrote:
On 28 Jun 2011, at 14:58, Andrew Brunner wrote:
A mutex may be able to atomically block access because of its own
memory barrier, but I would suggest that
On 28/06/11 14:00, Jonas Maebe wrote:
On 28 Jun 2011, at 14:58, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 6:14 AM, Michael Schnell mschn...@lumino.de
wrote:
For variables not defined as volatile, (e.g.) pthread_mutex (and similar
stuff on Windows) can be used to protect them.
A mutex
On 28 Jun 2011, at 14:32, Michael Schnell wrote:
So, regarding C, I understand that (even in a single CPU environment):
If all accesses to a variable are protected by a MUTEX, multiple
threads will use the variable as expected, only if it is defined as
volatile. Otherwise it might be
On 06/28/2011 02:58 PM, Andrew Brunner wrote:
A mutex may be able to atomically block access because of its own
memory barrier, but I would suggest that employing such a technique on
multi-core systems will not ensure an accurate value.
If this is true, how can any multithreading be done?
On Tue, Jun 28, 2011 at 8:11 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html
(point 4.11)
Nope. Nothing about order - just access - and that is entirely on the
application level - not system.
1.) Code execution on die
On Tue, Jun 28, 2011 at 8:16 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
The C (or Pascal) compiler has no idea whether or not the global variable
will be accessed by the pthread_mutex_lock()/unlock() function. As a result,
it will never cache it in a register across function calls, and
On 06/28/2011 03:00 PM, Jonas Maebe wrote:
. I don't know about the Windows equivalents.
see
http://msdn.microsoft.com/en-us/library/ms686355%28v=VS.85%29.aspx
On 28 Jun 2011, at 15:20, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 8:11 AM, Jonas Maebe jonas.ma...@elis.ugent.be
wrote:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html
(point 4.11)
Nope. Nothing about order - just access - and that is entirely on the
On Tue, Jun 28, 2011 at 8:28 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
1.) Code execution on die is not controlled by the pthreads implementation -
as it is unaware at that level.
I have no idea what you mean by this. What would code execution off die be
as opposed to code execution on
On 28 Jun 2011, at 15:39, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 8:28 AM, Jonas Maebe jonas.ma...@elis.ugent.be
wrote:
1.) Code execution on die is not controlled by the pthreads
implementation -
as it is unaware at that level.
I have no idea what you mean by this. What would code
On 06/28/2011 03:16 PM, Jonas Maebe wrote:
The C (or Pascal) compiler has no idea whether or not the global
variable will be accessed by the pthread_mutex_lock()/unlock()
function. As a result, it will never cache it in a register across
function calls, and the call to the mutex function by
On 06/28/2011 03:23 PM, Andrew Brunner wrote:
Getting developers to
chose the right tool for the job is the key here.
Regarding normal user applications there is only one option: POSIX. And the
same happily is encapsulated in the RTL/LCL for FPC/Lazarus programmers.
Advanced (non-portable)
On 06/28/2011 03:23 PM, Andrew Brunner wrote:
There is no problem no need for volatile variables. Compare and Swap
or Interlocked mechanisms will solve any problems.
volatile is a directive to the compiler on how to handle a variable.
Variables that are not handled by the compiler but handled
On 28/06/11 14:20, Andrew Brunner wrote:
On Tue, Jun 28, 2011 at 8:11 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html
(point 4.11)
Nope. Nothing about order - just access - and that is entirely on the
application level
On 28/06/11 14:23, Andrew Brunner wrote:
There is no problem no need for volatile variables. Compare and Swap
or Interlocked mechanisms will solve any problems.
Nope. You still need to prevent the cpu from reordering instructions
with memory barriers. I'm starting to sound like a broken
No, that is impossible. That's the whole point of using libraries such as
libpthread. They abstract such issues away. Using atomic operations inside
mutex sections only slows down your program unnecessarily (unless you also
access the target memory location from code not guarded by that mutex,
Of course it is. They issue a hardware memory barrier. This stops the cpu
from reordering operations. How do you think anything using pthreads will
work if they didn't?
Documentation please? If what you are saying is accurate just point
me to the documentation?
Hello FPC,
Tuesday, June 28, 2011, 3:39:29 PM, you wrote:
AB> Sort of right. 6 core system. Core 1 locks code block. Code block
AB> should still use interlocked statements to make memory assignments so
AB> that when Core 1 releases lock - Core 2 can have a real-time image of
AB> variable.
On Tue, Jun 28, 2011 at 9:00 AM, Henry Vermaak henry.verm...@gmail.com wrote:
On 28/06/11 14:23, Andrew Brunner wrote:
There is no problem no need for volatile variables. Compare and Swap
or Interlocked mechanisms will solve any problems.
Nope. You still need to prevent the cpu from
On 28/06/11 15:09, Andrew Brunner wrote:
Of course it is. They issue a hardware memory barrier. This stops the cpu
from reordering operations. How do you think anything using pthreads will
work if they didn't?
Documentation please? If what you are saying is accurate just point
me to the
On 28 Jun 2011, at 15:54, Michael Schnell wrote:
static int x;
void play_with_x(void) {
  int i;
  for (i = 1; i > 0; i--) {
    x += 1;
  }
  x = 0;
}
the compiler will see that x is just defined to be 0 in the end and
optimize out the complete loop.
But if you do the same with
volatile static
Jonas already pointed you to it:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_11
Applications shall ensure that access to any memory location by more than
one thread of control (threads or processes) is restricted such that no
thread of control can read or
On 06/28/2011 04:23 PM, Jonas Maebe wrote:
On 28 Jun 2011, at 15:54, Michael Schnell wrote:
I believe that inserting some pthread_mutex... calls will not force
the compiler to bother about some intermediate values of a
non-volatile variable.
You believe wrongly.
As the compiler does not
On Tue, Jun 28, 2011 at 9:23 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
On topic, Jonas can you take a few moments to describe how developers
can force code execution in order w/o using a third party library? Is
there a compiler directive we can use?
On 06/28/2011 04:31 PM, Andrew Brunner wrote:
how developers
can force code execution in order w/o using a third party library?
Execution in order only makes sense when there is another thread that
relies on this order.
So if both threads use the same critical section for accessing all
On Tue, Jun 28, 2011 at 9:33 AM, Michael Schnell mschn...@lumino.de wrote:
And this has been discussed in the other message: if the variable in fact is
global, the compiler needs to avoid caching it; if it is static and the
function is in another module, it might still decide to cache it, but
On 28 Jun 2011, at 16:28, Andrew Brunner wrote:
Jonas already pointed you to it:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_11
Applications shall ensure that access to any memory location by
more than
one thread of control (threads or processes) is
On Tue, Jun 28, 2011 at 9:43 AM, Michael Schnell mschn...@lumino.de wrote:
On 06/28/2011 04:31 PM, Andrew Brunner wrote:
how developers
can force code execution in order w/o using a third party library?
Execution in order only makes sense when there is another thread that relies
on this
On 06/28/2011 04:38 PM, Andrew Brunner wrote:
1.) How can we keep the core from handing the code off
to another core until we're finished?
2.) How can we get the core to have a synchronised copy of a
particular variable (aside from CAS)?
I suppose you need to ask these questions
On 06/28/2011 05:02 PM, Andrew Brunner wrote:
Wrong. Sigh... Order of execution is paramount just about everywhere.
It can be disastrous if not understood.
You still did not give an example
Remember ***cores!=threads*** people.
Wrong regarding the issue in question (see the message by
You can stick your head in the sand all you want, just don't run your
code on multi-core cpus and expect valid stability - and come back
here complaining on how unstable your multi-threaded application is
due to FPC design!
User programs are not supposed to bother about anything beyond threads
On Tue, Jun 28, 2011 at 10:17 AM, Michael Schnell mschn...@lumino.de wrote:
You still did not give an example
Don't take my word. Just look at the wikipedia link I already posted
which indicates otherwise.
Remember ***cores!=threads*** people.
Wrong regarding the issue in question (see
On Tue, 28 Jun 2011 15:54:35 +0200, Michael Schnell mschn...@lumino.de
wrote:
But if you do the same with
volatile static int x;
the code will stay and another thread can watch x growing in a time
sharing system.
No, it can't. volatile just ensures that accessing the variable results
On Tue, 28 Jun 2011 15:20:22 +0200, Andrew Brunner
andrew.t.brun...@gmail.com wrote:
On Tue, Jun 28, 2011 at 8:11 AM, Jonas Maebe jonas.ma...@elis.ugent.be
wrote:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html
(point 4.11)
Nope. Nothing about order - just
Jonas Maebe schrieb:
2.) Blocking access as described in 4.11 does not address execution
order.
It does guarantee that if T1 locks the mutex, changes the value, unlocks
the mutex [...]
Can you explain please, to what "changes the value" applies?
I could not find a definition of the mutex
Andrew Brunner schrieb:
On Tue, Jun 28, 2011 at 9:23 AM, Jonas Maebe jonas.ma...@elis.ugent.be wrote:
On topic, Jonas can you take a few moments to describe how developers
can force code execution in order w/o using a third party library? Is
there a compiler directive we can use?
I think
Michael Schnell schrieb:
Only the ordering decision inside vs. outside of the critical section
is necessary for a threaded user application. If both Enter and Leave do a
full fence barrier, I suppose we are safe.
Since the condition is only stored *inside* the CS or mutex, no other
code will
28.06.2011 19:42, Hans-Peter Diettrich wrote:
Jonas Maebe schrieb:
2.) Blocking access as described in 4.11 does not address execution
order.
It does guarantee that if T1 locks the mutex, changes the value,
unlocks the mutex [...]
Can you explain please, to what "changes the value" applies?
On Tue, 28 Jun 2011 20:09:18 +0200, Hans-Peter Diettrich
drdiettri...@aol.com wrote:
When you have a look at TThreadList.LockList/UnlockList, then you'll see
that LockList enters the critical section, and UnlockList leaves it. All
code executed in between such two calls is absolutely
On Tue, 28 Jun 2011 18:11:29 +0200, Hans-Peter Diettrich
drdiettri...@aol.com wrote:
I think that you should give at least an example where instruction
reordering makes a difference. Neither a compiler nor a processor is
allowed to reorder instructions in a way that breaks the def/use
On Tue, 28 Jun 2011 20:34:19 +0200, Nikolai Zhubr n-a-zh...@yandex.ru
wrote:
involving some mutex. Such proper constructs are not enforced by the Pascal
language automatically (like, say, in Java), so mistakes are quite
possible (and sometimes do happen).
JFTR, but they aren't /enforced/ in
At the beginning of June I found the following link on the ReactOS
mailing list, where they were discussing memory ordering and
volatile as well:
http://kernel.org/doc/Documentation/volatile-considered-harmful.txt
For those interested the following is the link to the starting
discussion:
28.06.2011 22:38, Vinzent Höfler wrote:
involving some mutex. Such proper constructs are not enforced by the
Pascal language automatically (like, say, in Java), so mistakes are
quite possible (and sometimes do happen).
JFTR, they aren't /enforced/ in Java either.
Well, ok, I didn't mean that
Andrew Brunner schrieb:
On Tue, Jun 28, 2011 at 9:43 AM, Michael Schnell mschn...@lumino.de wrote:
On 06/28/2011 04:31 PM, Andrew Brunner wrote:
how developers
can force code execution in order w/o using a third party library?
Execution in order only makes sense when there is another thread
Vinzent Höfler schrieb:
On Tue, 28 Jun 2011 15:54:35 +0200, Michael Schnell mschn...@lumino.de
wrote:
But if you do the same with
volatile static int x;
the code will stay and another thread can watch x growing in a time
sharing system.
No, it can't. volatile just ensures that accessing
Andrew Brunner schrieb:
On Tue, Jun 28, 2011 at 9:33 AM, Michael Schnell mschn...@lumino.de wrote:
And this has been discussed in the other message: if the variable in fact is
global, the compiler needs to avoid caching it; if it is static and the
function is in another module, it might still
On Tue, 28 Jun 2011 23:29:52 +0200, Hans-Peter Diettrich
drdiettri...@aol.com wrote:
Vinzent Höfler schrieb:
No, it can't. volatile just ensures that accessing the variable
results in actual memory accesses. That does not mean cache coherence,
so another core may still see other (as in