--- Comment #7 from Steven Schveighoffer <> 2012-06-02 17:48:23 PDT ---
(In reply to comment #3)
> According to
> > Pure functions are functions that produce the same result for the same 
> > arguments.

This is certainly true.  However, it's neither practical nor always possible
for the compiler to determine whether a call can be optimized out.  Consider
that on any call to a pure function that takes mutable data, the function could
modify that data, so even calling again with the exact same pointer may present
a new effective argument.

However, if all of a function's parameters and its return value are immutable
or implicitly convertible to immutable, calls to the function *can* be
optimized out, because it's guaranteed nothing reachable through them ever
changes.

This situation is what has been called "strong pure".  It's the equivalent of
functional-language purity.
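A minimal sketch of the two flavors (the function names here are illustrative,
not from the original report):

```d
// Weak pure: no globals are touched, but the mutable slice parameter
// can be written through, so two calls with the same argument are not
// interchangeable.
void fill(int[] arr, int value) pure
{
    foreach (ref e; arr)
        e = value;
}

// Strong pure: the parameter and return value are immutable or
// implicitly convertible to immutable, so a repeated call with the
// same argument could be elided entirely.
int square(immutable int x) pure
{
    return x * x;
}
```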

It's possible in certain situations for a "weak pure" function to be treated as
strong pure at a given call site.  For example, consider a function that takes
a const parameter and returns const.  Pass an immutable argument into it, and
nothing could possibly have changed before the next identical call, so that
call can be optimized out.  The compiler does not take advantage of this yet.
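As a sketch of that call-site promotion (the names are hypothetical, and the
elision described is not yet performed by the compiler):

```d
// Weak pure by signature: the parameter is merely const.
int total(const(int)[] arr) pure
{
    int sum = 0;
    foreach (e; arr)
        sum += e;
    return sum;
}

void caller()
{
    immutable int[] data = [1, 2, 3];
    // The argument is immutable, so nothing reachable through it can
    // change between these two calls: the second call could, in
    // principle, be replaced with the first call's result.
    auto a = total(data);
    auto b = total(data);
    assert(a == b);
}
```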

> And my original question is
> > The Question: What exactly does these pure functions consider as `argument
> value` and as `returned value`?

The argument value is all the data reachable via the parameters.  The return
value is all the data reachable via the result.

For pointers, you are under the same rules as normal functions -- @safe
functions cannot perform pointer arithmetic, unsafe ones can.  If an unsafe
pure function is called, a certain degree of freedom to screw up is available,
just like with any other unsafe function.

> int f(in int* p) pure;
> void g()
> {
>     auto arr = new int[5];
>     auto res = f(arr.ptr);
>     assert(res == f(arr.ptr));

Obviously this passes: the parameters are identical, and nothing could have
changed between the two calls.  The call will not currently be optimized out,
because the compiler isn't smart enough yet.

>     assert(res == f(arr.ptr + 1)); // *p isn't changed

May or may not pass; the parameter is different.

>     arr[1] = 7;
>     assert(res == f(arr.ptr)); // neither p nor *p is changed

May or may not pass.  f is not @safe, so it could possibly access arr[1].

>     arr[0] = 7;
>     assert(res == f(arr.ptr)); // p isn't changed

May or may not pass; *p has changed, so the effective argument is different.
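To see why, here is one hypothetical body for f (not from the original report)
that legally reads past the element p points at:

```d
// An unsafe pure function may read more data than the single int its
// parameter nominally refers to.
int f(in int* p) pure
{
    return p[0] + p[1];
}
```

With this body, the assertion after `arr[1] = 7;` fails even though neither p
nor *p changed.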

> And I completely misunderstand why pure functions can't be optimized out as
> Steven Schveighoffer said in druntime pull 198 comment:

I hope I have helped to further your understanding with this post.  Don just
looked up the original thread outlining the weak-pure proposal, which was
submitted to digitalmars.D in August 2010.  You may want to read that entire
thread.

In general response to this bug, I'm unsure how pointers should be treated by
the optimizer.  My gut feeling is that the compiler/optimizer should trust that
the code "knows what it's doing," and so should assume the code implicitly
knows how much data it can access through the pointer.

Consider an interesting case, using BSD sockets:

int f(immutable sockaddr *addr) pure;

sockaddr is a specific size, yet it acts as a "base class" for different kinds
of address structures.  Typically, one casts the sockaddr to the correct struct
based on the sa_family member.

But this may technically mean f accesses more data than it was given, under a
rigid interpretation of the type system.  Should the compiler enforce that,
given it would make this kind of function practically useless?  I think not.
