Paulo Marques wrote:
David Brown wrote:
Paulo Marques wrote:
David Brown wrote:
[...]
it could perhaps reason that since there is no way for anything
outside the program to find out where the local volatile variable
resides, there is no way for anything else to influence or use the
variable, and therefore the "volatile" qualifier can be ignored.
This sentence makes no sense at all. The "volatile" is there precisely to
warn the compiler that it should not "reason" about this variable at all.
I think the standards are pretty vague regarding exactly what
"volatile" means. There is nothing (that I know of) in the standards
saying where a volatile variable must be allocated.
Yes, I wasn't disputing that either. I was only saying that the compiler
cannot "reason" that the variable isn't used.
Note that I'm not arguing the memory/register allocation here. You could
also have a CPU that had a "register access counter", or something,
where accessing a CPU register would increase the counter value and you
wanted to use volatile to make the compiler access the register in the
loop to increase the counter value.
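Just to make sure we are arguing about the same kind of code, here is a
made-up sketch of the sort of loop I have in mind (my own example, not
the code from the original report):

/* Made-up example: a crude busy-wait delay.  With "volatile", every
 * access to "i" must actually be performed, so the loop survives;
 * without it, an optimizing compiler may simply delete the loop. */
void short_delay(void)
{
    volatile unsigned char i;

    for (i = 0; i < 100; i++)
        ;   /* empty body: the accesses to "i" are the whole point */
}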
If you give a volatile qualifier to a local variable, it's obvious
that you want it to behave differently from regular local variables,
so I think gcc is doing the best it can, from the weakly defined
volatile semantics.
It may be obvious to *you*, as the author, that you mean "volatile" to
work like this. It certainly works like that on avr-gcc at the moment.
The "obvious" part here is that you can declare a local variable without
any modifiers or with the volatile modifier. If the volatile modifier
makes the compiler generate the same code, then it would be useless. If
it forces the compiler to not optimize away accesses to that variable,
then it can have some use.
Just because the keyword "volatile" exists, does not automatically mean
that it will cause different code to be generated! The "register"
qualifier is pretty much redundant on modern compilers.
But what appears "obvious" to programmers (even expert, experienced
programmers), what the standards say, what the compiler does, and
how it all works on the target in question can be very different
things. When working with bigger cpus with caches and instruction
re-ordering, for example, "volatile" is not nearly strong enough to
give you the kind of guarantees we take for granted on avr-gcc.
Yes, I follow LKML too, and all the endless threads on memory ordering /
volatile / SMP races ;)
I don't follow the LKML (I check out the highlights occasionally), but
I've seen it in practice with a couple of embedded processors.
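To make the "not strong enough" point concrete, here is a rough sketch
(my own, not from any real project) of the kind of handshake that works
on a single-core AVR but falls apart on a cached, re-ordering or SMP
target:

extern void use(int);

int data;               /* plain variable */
volatile int ready;     /* flag polled by the consumer */

void producer(void)
{
    data = 42;          /* not volatile, so the compiler may move this
                           past the flag write; and "volatile" inserts
                           no hardware barriers in any case */
    ready = 1;
}

void consumer(void)
{
    while (!ready)
        ;               /* busy-wait on the flag */
    use(data);          /* on a re-ordering or SMP machine this can see
                           stale data; a real fix needs memory barriers
                           (or, these days, C11 atomics) */
}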
This makes as much sense as saying that any volatile is futile, since
you can compile a program with "-combine -whole-program" and so the
compiler can always "reason" that any variable will not be accessed
outside of its control.
No, it's not quite the same. In particular, if the variable's address
is known outside the code (for example, if it is given a fixed
address, such as by the definition of the port I/O registers), then
there is no way the compiler could make guarantees about the safety of
removing the "volatile".
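The port I/O register definitions boil down to volatile accesses through
fixed addresses, roughly like this (the address below is made up for
illustration, not a real AVR register):

#include <stdint.h>

/* Sketch of a typical memory-mapped register definition; the address
 * is invented for illustration. */
#define MY_PORT (*(volatile uint8_t *)0x0025u)

void toggle_bit0(void)
{
    MY_PORT ^= 0x01;    /* the fixed address is visible outside the
                           program, so this access cannot be removed */
}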
I think you're actually agreeing with me that the compiler can not
optimize global accesses away, but you're saying that it can for local
variables because their locations aren't known?
That's pretty much the case, yes. As I said, I don't know if that
argument is valid (depending on the standards and the compiler's
interpretation of the standards). I'd just want to be sure before
relying on particular behaviour.
Certainly if you have anything that depends on the behaviour of external
influences on data whose address is unknown (alternatively, whose
function is unknown for a given address), then your code is going to be
unpredictable at best. Is it valid for the compiler to assume that
either you've written correct working code, and thus it can optimize it
on that assumption, or otherwise you have written broken code, in which
case it will do no harm by breaking it further?
Actually, I've thought of a couple of scenarios where there could be
outside influence on data without the local symbols being exported or
the addresses of the local variables being taken. One is if you are
running under a debugger or other program that has access to a symbol
table, and the other is in conjunction with automatic garbage collection
libraries or similar code that sweeps through memory.
What if the CPU had dedicated storage for the stack frame, and accessing
locations there had side effects unknown to the compiler?
Yes, I'm grasping at straws here, but the bottom line is: if the side
effects are unknown to the compiler, _they_ _are_ _unknown_ to the
compiler. It is best not to assume anything.
I agree that it is best for the compiler not to assume anything. I'm just
wondering if that might apply to the programmer too!
Similarly, if different globally accessible functions (such as
interrupt functions) accessed the variable, it could not remove the
"volatile". But for local variables within a function, it is much
more straightforward to see how such variables could be accessed or
addressed.
Well, I could argue that the compiler also knows these functions are
"interrupt" functions and could assume that any variable modified by
these functions had to be "treated as volatile", even without the
keyword ;)
This would in fact be the best scenario: inside the interrupt functions
the variables would be accessed as regular variables, but outside they
would be accessed as volatiles.
That would be very useful. I find using normal variables, but including
a "volatile" cast when accessing them elsewhere, to be useful. It would
be nice if C had a way of expressing finer-grained control of access
(such as volatile for writes but not reads, or atomic access for data
bigger than the cpu's width).
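To show what I mean by a "volatile" cast, here is a sketch (the macro
name is my own, similar in spirit to the Linux kernel's ACCESS_ONCE();
__typeof__ is a gcc extension, and the vector name depends on the
device):

#include <avr/interrupt.h>
#include <stdint.h>

/* Read an otherwise ordinary variable through a volatile lvalue, so
 * this particular access cannot be cached or optimized away. */
#define VOLATILE_READ(x) (*(volatile __typeof__(x) *)&(x))

static uint8_t tick;            /* deliberately not declared volatile */

ISR(TIMER0_OVF_vect)            /* inside the ISR: plain accesses,    */
{                               /* fully optimizable                  */
    tick++;
}

void wait_one_tick(void)
{
    uint8_t start = VOLATILE_READ(tick);

    while (VOLATILE_READ(tick) == start)
        ;                       /* volatile only where it is needed   */
}

For anything wider than a byte you would still have to bracket the read
with cli()/sei() (or save and restore SREG), since the cast does nothing
for atomicity.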
Note that I'm not disputing that the compiler could in theory use a
register instead. I personally don't think that would be a good idea,
but it might be allowed by the specs. What I am arguing is that the
compiler _can not_ ignore the volatile and optimize the loop away
entirely. That would be a compiler bug, for sure.
It would certainly be a surprise to many (including me) if it *did*
remove the volatile variable in practice. But I'm not 100% sure that
the standards disallow such optimisations - that's all I'm saying.
I hope I'm quoting from the right (currently in use) standard:
" 99. A volatile declaration may be used to describe an object
corresponding to a memory-mapped input/output port or an
object accessed by an asynchronously interrupting
function. Actions on objects so declared shall not be
``optimized out'' by an implementation or reordered
except as permitted by the rules for evaluating
expressions."
So, I don't think the standards allow the compiler to "optimize out" a
loop that has a volatile access in it.
Ah well, that sounds fairly conclusive.
Best regards,
David