On 6/29/2012 4:49 AM, Mathias Gaunard wrote:
> On 28/06/2012 21:09, Eric Niebler wrote:
> 
>> After meditating on this for a bit, a thought occurred to me. Your
>> unpack function is a generalization of the pattern used by the _default
>> transform.
> 
> It is indeed.

Right. Providing the higher-level primitive transform is on my to-do
list. Thanks for the suggestion.

>> Generators are intended to meet this need. What are they lacking for
>> you? Is it the lack of an "unpack" transform?
> 
> We use generators for something else. Generators are in charge of
> putting a raw expression in the NT2 domain, which involves computing the
> logical size of the expression as well as the type it would have if it
> were evaluated.
> 
> Doing the expression rewriting in the generator itself causes dependency
> problems, since expression rewriting is defined in terms of unpacking
> then re-making expressions, which involves calling the generator.
> 
> I don't know yet how we could do what we need in a generator-less world.

Well, in the present generator world, the generator is passed an
expression, and your mission is to rewrite it (or wrap it). Rewriting
the expression causes a recursive invocation of the generator for the
current expression. This is the cause of the trouble, IIUC.

In the future generator-less world, your domain's make_expr function
object is passed a tag and a bunch of child nodes. You get to decide how
to turn that into an expression. If you want to transform the child
nodes first, you should be able to do that, and it wouldn't recurse
back. You're recursing only on the children, not on the current
expression. That should work. In theory.

>>> Both optimize and schedule require containing children by value to
>>> function correctly.
>>
>> How is this relevant?
> 
> When using normal Proto features, the expressions built from operators
> contain their children by reference, while the expressions built by
> transforms contain their children by value.
> 
> Since in our case we use the same code for both, we had to always
> contain children by value.

I'm afraid I'm being dense. I still don't see how that relates to the
need for your unpack function or the limitations of transforms.

>> Transforms are not pure functional. The data parameter can be mutated
>> during evaluation.
> 
> The transform language is functional since the only thing it does is
> define a calling expression which is a combination of primitive
> transforms. Of course each primitive transform doesn't have to be
> functional, but the language to use them cannot define state, it can
> just pass it along like a monad.

Your unpack function is no less functional in nature. I'm not denying
that transforms have limitations that your unpack function doesn't (see
below). I'm just saying that you're barking up the wrong tree with your
argument about transform's functional nature.

> I guess it's not pure functional though because of proto::and_.

I don't understand this statement.

> In any case, a lot of non-trivial expression evaluation strategies
> cannot be practically implemented as a transform and require a primitive
> transform.

Ah! By "transform" you mean "callable transform or object transform, but
not primitive transform". But the term "transform" includes primitive
transforms. Is that why we're talking past each other?

> If everything ends up being primitive transforms, we might as well use
> simple function objects directly, which are not bound by constraints
> such as arity, state, data, environment etc., and just store whatever
> state is needed in the function object or bind that state with
> boost::bind or similar.

I see what you're getting at. Primitive transforms have limitations on
the number of arguments and the meaning of those arguments. I understand.

> I'd like to see Proto provide algorithms like this that accept arbitrary
> function objects and that are not intended to require transforms to do
> useful things.

Sure. However, I have a reason for wanting to make these things play
nicely with transforms. See below.

>> You know about proto::vararg, right? It lets you handle nodes of
>> arbitrary arity.
> 
> The transformations that can currently be done as a non-primitive
> transform, when the transformation must not rely on an explicit arity
> of the expression, are extremely limited.

Limiting the discussion to non-primitive transforms, I agree. I didn't
know that's what we were discussing.

> Adding unpacking transforms would certainly make it more powerful, but
> still not as powerful as what you could do with a simple function object
> coupled with macros or variadic templates.
> 
> I think it's important to keep in mind that transforms are a possible
> solution that can work well for some languages, but that there should be
> other solutions as well.
> 
>> Once there is an unpack transform, will you still feel this way?
> 
> We already have the unpack algorithm that I described. It's relatively
> simple and straightforward code. We used to have it defined as a
> primitive transform; it was much more complicated to use, required
> adding special cases for state and data, and was limited to just that.
> 
> I don't see how using a transform that does the same thing but is less
> generic would be useful to us.

First, let me say that I agree. Second, that I want to make your life
easier today. Third, that I am trying to understand the problem so that
transforms in Proto.Next are powerful enough to handle all use cases.

Now, to answer your question. In Proto today, transforms are limited in
the number of arguments they can handle and by the fact that the
function objects used therein are themselves stateless. However, they
have syntactic benefits when used to compose larger algorithms. You
don't need to explicitly pass them the current expression, state and
data parameters. Proto knows when something is a transform and passes
the extra state along "for free". That is why you might want unpack to
be a transform. Note that in Proto, primitive transforms ARE function
objects and can be used that way.

You want an unpack function that can be called like this:

  unpack(e, f0, f1)

and that behaves like this:

  f0(f1(child_c<0>(e)), f1(child_c<1>(e)), ...)

I'm suggesting an unpack transform that can be used (in proto
algorithms) like this:

  _unpack<F0, F1>

and that behaves like:

  make<F0>()(e,s,d)(when<_,F1>()(child_c<0>(e), s, d),
                    when<_,F1>()(child_c<1>(e), s, d), ...)

or maybe even:

  auto f1 = lazy<F1>()(e,s,d);
  make<F0>()(e,s,d)(f1(child_c<0>(e), s, d), f1(child_c<1>(e), s, d), ...)

With such a definition, the _default<X> transform is simply:

  template<typename X>
  struct _default
    : _unpack<_default_eval<tag_of<_>()>, _default<X> >
  {};

(where _default_eval<tag::plus> accepts two arguments and adds them, etc.)

Now, an unpack free function like this:

  unpack<F0, F1>(e)

is implemented as:

  _unpack<F0, F1>()(e)

I can hear you protesting: what if f0 and f1 are stateful function
objects? No argument, this is awkward: there isn't a simple way to pass
in stateful function objects. You are limited to function objects that
are created with make<F0>()(e,s,d) (or lazy<F0>()(e,s,d)). With enough
contortions, this can be made to do the job today, but I admit it's not
pretty.

So, is that the sticking point? You need stateful function objects? In
Proto.Next, I can easily see sticking the (stateful) function objects in
slots in the environment parameter, and hiding that all behind a tidy
interface like the one you suggest. For now, an unpack free function
might be a good addition to bridge the gap. That would be in addition to
the primitive transform, though. I don't think it eliminates the need
for the transform.

Thoughts? Btw, I do appreciate the time you've spent explaining your use
cases to me. I have been keeping them in mind during the Proto redesign.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
_______________________________________________
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto
