Re: [proto] proto-11 progress report

2012-07-20 Thread Eric Niebler
On 7/17/2012 6:14 PM, Eric Niebler wrote:
> I'm considering adding the slots mechanism to proto-current so that this
> can be made to work there, also. The problem is that once you use a
> slot, the data parameter is no longer just a dumb blob. I can make
> proto::_data ignore the slots and just return the dumb blob as before,
> and that will satisfy some. But any custom primitive transforms that
> folks have written will need to be ready for the fact that they could
> get passed something different than what they were expecting. I don't
> think it will break code until you decide to use a slot (or use a
> transform that uses a slot). Then you'll need to fix up your transforms.
> 
> Does anybody object to the above scheme?

This is now implemented on trunk. It's implemented in a backward
compatible way.[*]

What this means is that instead of a monolithic blob of data, the third
parameter to a Proto transform can be a structured object with O(1)
lookup based on tag. You define a tag with:

  BOOST_PROTO_DEFINE_ENV_VAR(my_tag_type, my_key);

Then, you can use it like this:

  some_transform()(expr, state, (my_key= 42, your_key= "hello"));

In your transforms, you can access the value associated with a
particular key using the proto::_env_var transform.

You can still pass an unstructured blob, and things work as they did
before. The proto::_data transform checks to see if the data parameter
is a blob or structured. If it's a blob, that simply gets returned. If
it's structured, it returns the value associated with the
proto::data_type tag. In other words, these two are treated the same:

  int i = 42;
  some_transform()(expr, state, i);
  some_transform()(expr, state, (proto::data= i));
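The blob-or-structured dispatch can be modeled in a few lines of self-contained C++ (toy types only; the real proto::_data is more general and these names are illustrative):

```cpp
#include <utility>

// Toy model of the scheme described above. An "environment" associates
// a value with a tag type; anything else is treated as an unstructured
// blob, exactly as the old data parameter was.
struct data_tag {};  // stand-in for proto::data_type

template <typename Tag, typename Value>
struct env_entry { Value value; };

template <typename Tag, typename Value>
env_entry<Tag, Value> make_env(Tag, Value v) { return { v }; }

// _data-like lookup: a structured environment yields the value stored
// under data_tag; a plain blob is simply returned unchanged.
template <typename Value>
Value get_data(env_entry<data_tag, Value> const &e) { return e.value; }

template <typename Blob>
Blob get_data(Blob const &blob) { return blob; }
```

With this model, `get_data(42)` and `get_data(make_env(data_tag{}, 42))` both yield 42, mirroring the two equivalent invocations above.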

There's more, but I'll save it. It's a big change, and docs are yet to
be written. Y'all might want to test your code against trunk and report
problems early. (This will *not* be part of 1.51.)

[*] I had to make some changes to Phoenix though because unfortunately
Phoenix makes use of some undocumented parts of Proto.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] proto-11 progress report

2012-07-17 Thread Eric Niebler
On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> There is a function which is very simple and that I found to be very
> useful when dealing with expression trees.
> 
> unpack(e, f0, f1) which calls
> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
> 
> I can do recursion or not with the right f1, and I can 'unpack' an
> expression to an n-ary operation f0.
> 
> Here f0 is typically a function that uses its own overloading-based
> dispatching mechanism.

Now with proto-11, it's not too hard to implement Mathias' unpack
function in terms of primitives. You can find an example here:



The functions f0 and f1 can be stateful. It uses the new unpacking
patterns, the slot-based transform environment, and a new bind function
object.
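The unpack(e, f0, f1) signature quoted above can be sketched in plain C++14 over a toy expression node (this is an illustration of the pattern, not Proto's actual interface):

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Toy expression node: children stored in a tuple.
template <typename... Children>
struct expr_node { std::tuple<Children...> children; };

template <typename... Children>
expr_node<Children...> make_expr(Children... c)
{ return { std::make_tuple(c...) }; }

// unpack(e, f0, f1) calls f0(f1(child0), f1(child1), ..., f1(childN)).
template <typename Expr, typename F0, typename F1, std::size_t... I>
auto unpack_impl(Expr const &e, F0 f0, F1 f1, std::index_sequence<I...>)
    -> decltype(f0(f1(std::get<I>(e.children))...))
{
    return f0(f1(std::get<I>(e.children))...);
}

template <typename... Children, typename F0, typename F1>
auto unpack(expr_node<Children...> const &e, F0 f0, F1 f1)
    -> decltype(unpack_impl(e, f0, f1, std::index_sequence_for<Children...>{}))
{
    return unpack_impl(e, f0, f1, std::index_sequence_for<Children...>{});
}
```

For example, unpacking a three-child node into a summing f0 with a doubling f1 computes f0(2*1, 2*2, 2*3). Both function objects may carry state, as noted above.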

I'm considering adding the slots mechanism to proto-current so that this
can be made to work there, also. The problem is that once you use a
slot, the data parameter is no longer just a dumb blob. I can make
proto::_data ignore the slots and just return the dumb blob as before,
and that will satisfy some. But any custom primitive transforms that
folks have written will need to be ready for the fact that they could
get passed something different than what they were expecting. I don't
think it will break code until you decide to use a slot (or use a
transform that uses a slot). Then you'll need to fix up your transforms.

Does anybody object to the above scheme? The advantage is that things
like Mathias' unpack function become possible. Also, I'll finally be
able to implement the let<> transform. I'd probably be able to change
the implementation of the external transforms feature to make use of
slots, too (although that might be a breaking change, in which case I'd
wait until proto-11).

BTW, the "unpacking patterns" feature will be part of 1.51.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



Re: [proto] proto-11 progress report

2012-07-01 Thread Eric Niebler
On 6/29/2012 4:49 AM, Mathias Gaunard wrote:
> On 28/06/2012 21:09, Eric Niebler wrote:
> 
>> After meditating on this for a bit, a thought occurred to me. Your
>> unpack function is a generalization of the pattern used by the _default
>> transform.
> 
> It is indeed.

Right. Providing the higher-level primitive transform is on my to-do
list. Thanks for the suggestion.

>> Generators are intended to meet this need. What are they lacking for
>> you? Is it the lack of an "unpack" transform?
> 
> We use generators for something else. Generators are in charge of
> putting a raw expression in the NT2 domain, which involves computing the
> logical size of the expression as well as the type it would have if
> evaluated.
> 
> Doing the expression rewriting in the generator itself causes dependency
> problems, since expression rewriting is defined in terms of unpacking
> then re-making expressions, which involves calling the generator.
> 
> I don't know yet how we could do what we need in a generator-less world.

Well, in the present generator world, the generator is passed an
expression, and your mission is to rewrite it (or wrap it). Rewriting
the expression causes a recursive invocation of the generator for the
current expression. This is the cause of the trouble, IIUC.

In the future generator-less world, your domain's make_expr function
object is passed a tag and a bunch of child nodes. You get to decide how
to turn that into an expression. If you want to transform the child
nodes first, you should be able to do that, and it wouldn't recurse
back. You're recursing only on the children, not on the current
expression. That should work. In theory.
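That scheme can be sketched with hypothetical toy types (these names are illustrative, not NT2's or Proto's actual API): the domain's make_expr sees a tag plus child nodes, transforms the children, and assembles the node directly, so nothing recurses back through the node being built.

```cpp
#include <tuple>

struct plus_tag {};

template <typename Tag, typename... Children>
struct node { std::tuple<Children...> children; };

// Per-domain child transform; here it just doubles ints for
// illustration.
inline int transform_child(int c) { return 2 * c; }

struct my_domain_make_expr
{
    template <typename Tag, typename... Children>
    node<Tag, Children...> operator()(Tag, Children... c) const
    {
        // Children are transformed first; the current node is only
        // constructed, never re-generated, so no recursion on it.
        return { std::make_tuple(transform_child(c)...) };
    }
};
```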

>>> Both optimize and schedule require containing children by value to
>>> function correctly.
>>
>> How is this relevant?
> 
> When using normal Proto features, the expressions built from operators
> contain their children by reference while the expression built from
> transforms contain their children by value.
> 
> Since in our case we use the same code for both, we had to always
> contain children by value.

I'm afraid I'm being dense. I still don't see how that relates to the
need for your unpack function or the limitations of transforms.

>> Transforms are not pure functional. The data parameter can be mutated
>> during evaluation.
> 
> The transform language is functional since the only thing it does is
> define a calling expression which is a combination of primitive
> transforms. Of course each primitive transform doesn't have to be
> functional, but the language to use them cannot define state, it can
> just pass it along like a monad.

Your unpack function is no less functional in nature. I'm not denying
that transforms have limitations that your unpack function doesn't (see
below). I'm just saying that you're barking up the wrong tree with your
argument about transform's functional nature.

> I guess it's not pure functional though because of proto::and_.

I don't understand this statement.

> In any case, a lot of non-trivial expression evaluation strategies
> cannot be practically implemented as a transform and require a primitive
> transform.

Ah! By "transform" you mean "callable transform or object transform, but
not primitive transform". But the term "transform" includes primitive
transforms. Is that why we're talking past each other?

> If everything ends up being primitive transforms, we might as well use
> simple function objects directly, which are not tied by constraints such
> as arity, state, data, environment etc., just store whatever state is
> needed in the function object or bind that state with boost::bind or
> similar.

I see what you're getting at. Primitive transforms have limitations on
the number of arguments and the meaning of those arguments. I understand.

> I'd like to see Proto provide algorithms like this that accept arbitrary
> function objects and that are not intended to require transforms to do
> useful things.

Sure. However, I have a reason for wanting to make these things play
nicely with transforms. See below.

>> You know about proto::vararg, right? It lets you handle nodes of
>> arbitrary arity.
> 
> The transformations that can currently be done as a non-primitive
> transform when the transformation must not rely on an explicit arity of
> the expression are extremely limited.

Limiting the discussion to non-primitive transforms, I agree. I didn't
know that's what we were discussing.

> Adding unpacking transforms would certainly make it more powerful, but
> still not as powerful as what you could do with a simple function object
> coupled with macros or variadic templates.
> 
> I think it's important to keep in mind that transforms are a possible
> solution that can work well for some languages, but that there should be
> other solutions as well.
> 
>> Once there is an unpack transform, will you still feel this way?
> 
> We already have the unpack algorithm that I described. It's relatively
> simple and straightforward code. We used to have it defined as a
> primitive transform, it was much more complicated to use and required
> adding special cases for state and data, and was limited to just that.
> 
> I don't see how using a transform that does the same thing but less
> generic would be useful to us.

Re: [proto] proto-11 progress report

2012-06-29 Thread Mathias Gaunard

On 28/06/2012 21:09, Eric Niebler wrote:


After meditating on this for a bit, a thought occurred to me. Your
unpack function is a generalization of the pattern used by the _default
transform.


It is indeed.



Generators are intended to meet this need. What are they lacking for
you? Is it the lack of an "unpack" transform?


We use generators for something else. Generators are in charge of 
putting a raw expression in the NT2 domain, which involves computing the 
logical size of the expression as well as the type it would have it 
evaluated.


Doing the expression rewriting in the generator itself causes dependency 
problems, since expression rewriting is defined in terms of unpacking 
then re-making expressions, which involves calling the generator.


I don't know yet how we could do what we need in a generator-less world.



Both optimize and schedule require containing children by value to
function correctly.


How is this relevant?


When using normal Proto features, the expressions built from operators 
contain their children by reference while the expression built from 
transforms contain their children by value.


Since in our case we use the same code for both, we had to always 
contain children by value.




Transforms are not pure functional. The data parameter can be mutated
during evaluation.


The transform language is functional since the only thing it does is 
define a calling expression which is a combination of primitive 
transforms. Of course each primitive transform doesn't have to be 
functional, but the language to use them cannot define state, it can 
just pass it along like a monad.

I guess it's not pure functional though because of proto::and_.

In any case, a lot of non-trivial expression evaluation strategies 
cannot be practically implemented as a transform and require a primitive 
transform.


If everything ends up being primitive transforms, we might as well use 
simple function objects directly, which are not tied by constraints such 
as arity, state, data, environment etc., just store whatever state is 
needed in the function object or bind that state with boost::bind or 
similar.


I'd like to see Proto provide algorithms like this that accept arbitrary 
function objects and that are not intended to require transforms to do 
useful things.




You know about proto::vararg, right? It lets you handle nodes of
arbitrary arity.


The transformations that can currently be done as a non-primitive 
transform when the transformation must not rely on an explicit arity of 
the expression are extremely limited.


Adding unpacking transforms would certainly make it more powerful, but 
still not as powerful as what you could do with a simple function object 
coupled with macros or variadic templates.


I think it's important to keep in mind that transforms are a possible 
solution that can work well for some languages, but that there should be 
other solutions as well.




Once there is an unpack transform, will you still feel this way?


We already have the unpack algorithm that I described. It's relatively 
simple and straightforward code. We used to have it defined as a 
primitive transform, it was much more complicated to use and required 
adding special cases for state and data, and was limited to just that.


I don't see how using a transform that does the same thing but less 
generic would be useful to us.



Re: [proto] proto-11 progress report

2012-06-28 Thread Eric Niebler
On 6/27/2012 2:11 PM, Mathias Gaunard wrote:
> On 25/06/2012 23:30, Eric Niebler wrote:
>> On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> 
>>> There is a function which is very simple and that I found to be very
>>> useful when dealing with expression trees.
>>>
>>> unpack(e, f0, f1) which calls
>>> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
>>>
>>> I can do recursion or not with the right f1, and I can 'unpack' an
>>> expression to an n-ary operation f0.
>>>
>>> Here f0 is typically a function that uses its own overloading-based
>>> dispatching mechanism.
>>
>> OK, thanks for the suggestion. Where have you found this useful?
> 
> For example I can use this to call
> 
> functor()(functor()(a, b), c);
> 
> from a tree like
> 
> expr<tag::function, list3<functor,
> expr<tag::function, list3<functor, a, b> >, c> >
> 
> NT2 uses a mechanism like this to evaluate expressions.

OK.

> For element-wise expressions (i.e. usual vector operations), the f1 is
> the run(expr, pos) function -- actually more complicated, but there is
> no need to go into details -- which by default simply calls unpack
> recursively.
> 
> What the f0 does is simply unpack the expression and call a functor
> associated to the tag (i.e. run(expr, i) with expr bar> > calls pow(run(foo, i), run(bar, i)) ).
> 
> The important bit is that it is also possible to overload run for a
> particular node type.
> 
> On terminals, the run is defined to do a load/store at the given
> position. This means run(a + b * sqrt(c) / pow(d, e), i) calls
> plus(a[i], multiplies(b[i], divides(sqrt(c[i]), pow(d[i], e[i])))).
> 
> Each function like plus, multiplies, sqrt, pow, etc. is overloaded so
> that if any of the arguments is an expression, it does a make_expr. If
> the values are scalars, it does the expected operations on scalars. If
> they're SIMD vectors, it does the expected operations on vectors.
> 
> run is also overloaded for a variety of operations that depend on the
> position itself, such as restructuring, concatenating or repeating data;
> the position is modified before running the children or different things
> may be run depending on a condition.
> 
> A simple example is the evaluation of cat(a, b) where run(expr, i) is
> defined as something a bit like i < size(child0(expr)) ?
> run(child0(expr), i) : run(child1(expr), i-size(child0(expr)))
> 
> unpack is also used for expression rewriting: before expressions are
> run, the expression is traversed recursively and reconstructed. Whenever
> operations that are not combinable are found, those sub-expressions are
> evaluated and the resulting terminal is inserted in their place in the
> tree. In NT2 this is done by the schedule function.

After meditating on this for a bit, a thought occurred to me. Your
unpack function is a generalization of the pattern used by the _default
transform. The _default transform unpacks an expression, transforms
each child with X, then recombines the result using the "C++ meaning"
corresponding to the expression's tag type. In other words,
_default()(e) is unpack(e, f0, f1) where f1 is X and f0 is
hard-coded. I could very easily provide unpack as a fundamental
transform and then trivially implement _default in terms of that. I very
much like this idea.
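The observation can be sketched with toy types (none of this is Proto's actual code): an evaluator where f1 is the recursive call and f0 is hard-coded per tag to the corresponding C++ operator.

```cpp
struct plus_tag {};
struct multiplies_tag {};

template <typename Tag, typename L, typename R>
struct binary_expr { L left; R right; };

// Terminals evaluate to themselves...
inline int eval(int v) { return v; }

// ...and each node unpacks its children and recombines them with the
// "C++ meaning" of its tag, _default-style.
template <typename L, typename R>
int eval(binary_expr<plus_tag, L, R> const &e)
{ return eval(e.left) + eval(e.right); }

template <typename L, typename R>
int eval(binary_expr<multiplies_tag, L, R> const &e)
{ return eval(e.left) * eval(e.right); }
```

Evaluating a node for 1 + (2 * 3) this way yields 7, by unpacking each node and recombining with + and *.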

Aside: The pass_through transform almost fits this mold too, except that
each child can get its own f1. Hmm.

Unpack doesn't sound like the right name for it, though. It reminds me a
bit of Haskell's fmap, but instead of always putting the mapped elements
back into a box of the source type, it lets you specify how to box
things on the way out.

> Similarly we have a phase that does the same kind of thing to replace
> certain patterns of combinations by their optimized counterparts
> (optimize). We'd like to do this at construction time but it's currently
> not very practical with the way Proto works.

Generators are intended to meet this need. What are they lacking for
you? Is it the lack of an "unpack" transform?

> Both optimize and schedule require containing children by value to
> function correctly.

How is this relevant?

> Transforms are not used much since they're only useful for the most
> simple operations due to their pseudo-DSL pure functional nature. It's
> especially problematic when performance sensitive code, which needs to
> be stateful, is involved. 

Transforms are not pure functional. The data parameter can be mutated
during evaluation. Even expressions themselves can be mutated by a
transform, as long as they're non-const.

> Finally it's also not practical to do anything
> involving nodes of arbitrary arity with them.

You know about proto::vararg, right? It lets you handle nodes of
arbitrary arity.

> Unfortunately I cannot say transforms are good enough for the kind of
> thing NT2 does.

Once there is an unpack transform, will you still feel this way?

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com



Re: [proto] proto-11 progress report

2012-06-27 Thread Mathias Gaunard

On 25/06/2012 23:30, Eric Niebler wrote:

On 6/25/2012 12:21 PM, Mathias Gaunard wrote:



There is a function which is very simple and that I found to be very
useful when dealing with expression trees.

unpack(e, f0, f1) which calls
f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))

I can do recursion or not with the right f1, and I can 'unpack' an
expression to an n-ary operation f0.

Here f0 is typically a function that uses its own overloading-based
dispatching mechanism.


OK, thanks for the suggestion. Where have you found this useful?


For example I can use this to call

functor()(functor()(a, b), c);

from a tree like

expr<tag::function, list3<functor,
expr<tag::function, list3<functor, a, b> >, c> >


NT2 uses a mechanism like this to evaluate expressions.

For element-wise expressions (i.e. usual vector operations), the f1 is 
the run(expr, pos) function -- actually more complicated, but there is 
no need to go into details -- which by default simply calls unpack 
recursively.


What the f0 does is simply unpack the expression and call a functor 
associated to the tag (i.e. run(expr, i) with expr<tag::pow_, list2<foo,
bar> > calls pow(run(foo, i), run(bar, i)) ).


The important bit is that it is also possible to overload run for a 
particular node type.


On terminals, the run is defined to do a load/store at the given 
position. This means run(a + b * sqrt(c) / pow(d, e), i) calls
plus(a[i], multiplies(b[i], divides(sqrt(c[i]), pow(d[i], e[i])))).


Each function like plus, multiplies, sqrt, pow, etc. is overloaded so 
that if any of the arguments is an expression, it does a make_expr. If 
the values are scalars, it does the expected operations on scalars. If 
they're SIMD vectors, it does the expected operations on vectors.


run is also overloaded for a variety of operations that depend on the 
position itself, such as restructuring, concatenating or repeating data; 
the position is modified before running the children or different things 
may be run depending on a condition.


A simple example is the evaluation of cat(a, b) where run(expr, i) is 
defined as something a bit like i < size(child0(expr)) ? 
run(child0(expr), i) : run(child1(expr), i-size(child0(expr)))


unpack is also used for expression rewriting: before expressions are 
run, the expression is traversed recursively and reconstructed. Whenever 
operations that are not combinable are found, those sub-expressions are 
evaluated and the resulting terminal is inserted in their place in the 
tree. In NT2 this is done by the schedule function.


Similarly we have a phase that does the same kind of thing to replace 
certain patterns of combinations by their optimized counterparts 
(optimize). We'd like to do this at construction time but it's currently 
not very practical with the way Proto works.


Both optimize and schedule require containing children by value to 
function correctly.


Transforms are not used much since they're only useful for the most 
simple operations due to their pseudo-DSL pure functional nature. It's 
especially problematic when performance sensitive code, which needs to 
be stateful, is involved. Finally it's also not practical to do anything 
involving nodes of arbitrary arity with them.


Unfortunately I cannot say transforms are good enough for the kind of 
thing NT2 does.



Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:44 PM, Bart Janssens wrote:
> On Sun, Jun 24, 2012 at 1:10 AM, Eric Niebler wrote:
>> Data parameter uses a slot mechanism
>> 
>> In proto today, transforms take 3 parameters: expression, state and
>> data. As you can see from above, transforms in proto-11 take an
>> arbitrary number of parameters. However, that can make it hard to find
>> the piece of data you're looking for. Which position will it be in?
>> Instead, by convention most transforms will still only deal with the
>> usual 3 parameters. However, the data parameter is like a fusion::map:
>> it will have slots that you can access in O(1) by tag.
> 
> Interesting! Our current "data" element contains a fusion vector, also
> to enable grouping of strongly typed data. This is a major source of
> complexity in our code, so it will be simpler if we can reuse this
> mechanism.

Yes, the slots mechanism greatly simplifies that kind of code. You can
find a simple lambda library example here:

https://github.com/ericniebler/home/blob/master/src/proto/libs/proto/example/lambda.cpp

Notice how the lambda_eval_ function uses the slots feature to store the
lambda's arguments on line 48, and how the algorithm accesses the values
of those slots on line 36. Formerly, this had to be done with a fusion
vector, as you are doing. This way is much simpler, I think.

I primarily added this feature so that I have a way to implement the
let<> transform I've described here previously. It'll give you a way to
create local variables within a transform. The current let "stack frame"
will use one of these slots.
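The "slot as let<> stack frame" idea can be modeled in a few lines (illustrative only, not Proto's implementation): re-binding a slot pushes a new value, and leaving the scope pops back to the previous one.

```cpp
#include <vector>

// Toy slot whose reuse behaves like a stack: the innermost binding
// shadows outer ones, and popping restores the outer binding.
struct slot
{
    std::vector<int> frames;
    void push(int v) { frames.push_back(v); }   // enter a let<> scope
    void pop()       { frames.pop_back(); }     // leave the scope
    int  top() const { return frames.back(); }  // current binding
};
```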

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:21 PM, Mathias Gaunard wrote:
> On 24/06/2012 01:10, Eric Niebler wrote:
> 
>> As for what is not changing:
>>
>> Grammars, Transforms and Algorithms
>> ===
>> It would be wonderful if there were a more natural syntax for describing
>> proto algorithms rather than with structs, function objects, proto::or_,
>> proto::when, and friends. If there is one, I haven't found it yet. On
>> the up side, it means that many current proto-based libraries can be
>> upgraded with little effort. On the down side, the learning curve will
>> still be pretty steep. If anybody has ideas for how to use C++11 to
>> simplify pattern matching and the definition of recursive tree
>> transformation algorithms, I'm all ears.
> 
> There is a function which is very simple and that I found to be very
> useful when dealing with expression trees.
> 
> unpack(e, f0, f1) which calls
> f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))
> 
> I can do recursion or not with the right f1, and I can 'unpack' an
> expression to an n-ary operation f0.
> 
> Here f0 is typically a function that uses its own overloading-based
> dispatching mechanism.

OK, thanks for the suggestion. Where have you found this useful?

Along those lines, I've been thinking about adding a kind of transform
that gives an exploded view of an expression, kind of like what
evaluation contexts give you. If you tell me your use case, I can try to
make the new transform support it.

For the record, I'll be killing off evaluation contexts completely in
proto.next.

>> It needs clang trunk to compile.
> 
> Why doesn't it work with GCC?

The big missing feature is ref qualifiers for member functions (n2439).
AFAIK, clang is the only compiler that has implemented it.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com







Re: [proto] proto-11 progress report

2012-06-25 Thread Bart Janssens
On Sun, Jun 24, 2012 at 1:10 AM, Eric Niebler wrote:
> A custom transform is simply a struct that inherits from
> proto::transform and that has an operator() that accepts an arbitrary
> number of parameters.

Aha, nice! We are using the current proto to enable "custom terminals"
that also take any number of parameters through operator(), and then
pass on the evaluation to a user-defined struct. Looks like this
mechanism may simplify the implementation.
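In the shape of the proto-11 protocol quoted above, the pattern Bart describes might look roughly like this (names here are illustrative, not the real library): a transform is just a struct with a variadic operator() that forwards the whole argument pack to a user-defined evaluator.

```cpp
#include <utility>

template <typename Derived>
struct transform {};  // stand-in for proto::transform

template <typename UserEval>
struct custom_terminal
    : transform<custom_terminal<UserEval> >
{
    template <typename... Args>
    auto operator()(Args &&... args) const
        -> decltype(UserEval{}(std::forward<Args>(args)...))
    {
        // Pass evaluation on to the user-defined struct.
        return UserEval{}(std::forward<Args>(args)...);
    }
};

// Example user-defined evaluator (hypothetical).
struct sum3
{
    int operator()(int a, int b, int c) const { return a + b + c; }
};
```

So `custom_terminal<sum3>{}(1, 2, 3)` hands all three arguments to sum3.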

> Data parameter uses a slot mechanism
> 
> In proto today, transforms take 3 parameters: expression, state and
> data. As you can see from above, transforms in proto-11 take an
> arbitrary number of parameters. However, that can make it hard to find
> the piece of data you're looking for. Which position will it be in?
> Instead, by convention most transforms will still only deal with the
> usual 3 parameters. However, the data parameter is like a fusion::map:
> it will have slots that you can access in O(1) by tag.

Interesting! Our current "data" element contains a fusion vector, also
to enable grouping of strongly typed data. This is a major source of
complexity in our code, so it will be simpler if we can reuse this
mechanism.

So as far as I'm concerned, you hit the nail right on the head with
these features ;)

Cheers,

-- 
Bart


Re: [proto] proto-11 progress report

2012-06-25 Thread Mathias Gaunard

On 24/06/2012 01:10, Eric Niebler wrote:


As for what is not changing:

Grammars, Transforms and Algorithms
===
It would be wonderful if there were a more natural syntax for describing
proto algorithms rather than with structs, function objects, proto::or_,
proto::when, and friends. If there is one, I haven't found it yet. On
the up side, it means that many current proto-based libraries can be
upgraded with little effort. On the down side, the learning curve will
still be pretty steep. If anybody has ideas for how to use C++11 to
simplify pattern matching and the definition of recursive tree
transformation algorithms, I'm all ears.


There is a function which is very simple and that I found to be very 
useful when dealing with expression trees.


unpack(e, f0, f1) which calls
f0(f1(e.child0), f1(e.child1), ..., f1(e.childN))

I can do recursion or not with the right f1, and I can 'unpack' an 
expression to an n-ary operation f0.


Here f0 is typically a function that uses its own overloading-based 
dispatching mechanism.




It needs clang trunk to compile.


Why doesn't it work with GCC?


Re: [proto] proto-11 progress report

2012-06-25 Thread Eric Niebler
On 6/25/2012 12:39 AM, Joel Falcou wrote:
> On 06/24/2012 01:10 AM, Eric Niebler wrote:

>>    int i = LambdaEval()(_1 + 42, 0, proto::tag::data = 8);
>> 
>> The 3rd parameter associates the value 8 with the data tag.

> 
> How do you set up a new tag? Is mytag just some
> 
> mytag_type mytag = {};
> 
> ?
> 
> or should mytag_type inherit/be wrapped from some special stuff

Special stuff. Tags are defined as follows:

struct my_tag_type
  : proto::tags::def<my_tag_type>
{
    using proto::tags::def<my_tag_type>::operator=;
};

namespace
{
    constexpr my_tag_type const & my_tag =
        proto::utility::static_const<my_tag_type>::value;
}

The gunk in the unnamed namespace is for strict ODR compliance. A simple
global const would be plenty good for most purposes.
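In self-contained form, that ODR idiom looks like this (the tag name is a placeholder): every reference to static_const<T>::value names the same object in every translation unit, so the constexpr reference can safely live in a header.

```cpp
// Header-safe global constant: the static data member of a class
// template has exactly one definition program-wide.
template <typename T>
struct static_const
{
    static constexpr T value {};
};

template <typename T>
constexpr T static_const<T>::value;

struct my_tag_type {};  // stand-in for a real tag type

namespace
{
    // Reference bound to the single shared object.
    constexpr my_tag_type const &my_tag = static_const<my_tag_type>::value;
}
```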

>> As for what is not changing:
>>
>> Grammars, Transforms and Algorithms
>> ===
>> It would be wonderful if there were a more natural syntax for describing
>> proto algorithms rather than with structs, function objects, proto::or_,
>> proto::when, and friends. If there is one, I haven't found it yet. On
>> the up side, it means that many current proto-based libraries can be
>> upgraded with little effort. On the down side, the learning curve will
>> still be pretty steep. If anybody has ideas for how to use C++11 to
>> simplify pattern matching and the definition of recursive tree
>> transformation algorithms, I'm all ears.
> 
> There is not so much way to describe something that looks like
> a grammar definition anyway. BNF/EBNF is probably the simplest
> way to do it.

That would be overkill IMO. Proto grammars don't need to worry about
precedence and associativity. Forcing folks to write E?BNF would mean
forcing them to think about stuff they don't need to think about.

> Now on the syntactic clutter front, except wrapping everything in round
> lambda
> or use object/function call in a hidden decltype call, I don't see what we
> can do better :s

More round lambda, sure. Fixing inconsistencies, also. But I tend to
doubt that using grammar-like expressions in a decltype would be a
significant improvement. Folks still can't write transforms in straight,
idiomatic C++, which is what I want.

> Glad it is picking up steam :D

C++11 has made this pretty fun. I'm ready to stop supporting all my
C++03 libraries now. :-)

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com





Re: [proto] proto-11 progress report

2012-06-25 Thread Joel Falcou

On 06/24/2012 01:10 AM, Eric Niebler wrote:

I've made some good progress on the C++11 proto rewrite that I'd like to
share. So far, it's been a less radical shift than I expected.


You didn't try hard enough ;)


Expressions vs. Grammars

Many new users are confused by the difference between terminal and
terminal::type. In proto.next, there is no difference. Forget the
::type. Things just work.


Neat


Custom transforms are simpler
=
Currently, defining a custom transform means defining a struct with a
nested impl class template of 3 parameters, correctly inheriting and
following a protocol. In the rewrite, I wanted to simplify things. Here
for instance, is how the _expr transform is defined:

 struct _expr
   : transform<_expr>
 {
     template<typename E, typename... Rest>
     auto operator()(E && e, Rest &&...) const
     BOOST_PROTO_AUTO_RETURN(
         static_cast<E &&>(e)
     )
 };

A custom transform is simply a struct that inherits from
proto::transform and that has an operator() that accepts an arbitrary
number of parameters. (The use of BOOST_PROTO_AUTO_RETURN is not
necessary. It simply handles the return statement, the return type, and
the noexcept clause.)


Good


Data parameter uses a slot mechanism

In proto today, transforms take 3 parameters: expression, state and
data. As you can see from above, transforms in proto-11 take an
arbitrary number of parameters. However, that can make it hard to find
the piece of data you're looking for. Which position will it be in?
Instead, by convention most transforms will still only deal with the
usual 3 parameters. However, the data parameter is like a fusion::map:
it will have slots that you can access in O(1) by tag.

Here is how a proto algorithm will be invoked:

   int i = LambdaEval()(_1 + 42, 0, proto::tag::data = 8);

The 3rd parameter associates the value 8 with the data tag. The _data
transform returns the data associated with that tag. Additionally, you
can define you own tags and pass along another blob of data, as follows:

   int i = LambdaEval()(_1 + 42, 0, (proto::tag::data = 8, mytag = 42));

The _data transform will still just return 8, but you can use
_env_var<mytag_type> to fetch the 42. The third parameter has been 
generalized from an unstructured blob of data to a structured collection
of environment variables. Slots can even be reused, in which case they
behave like FILO queues (stacks).


How do you set up a new tag? Is mytag just some

mytag_type mytag = {};

?

or should mytag_type inherit/be wrapped from some special stuff


As for what is not changing:

Grammars, Transforms and Algorithms
===
It would be wonderful if there were a more natural syntax for describing
proto algorithms rather than with structs, function objects, proto::or_,
proto::when, and friends. If there is one, I haven't found it yet. On
the up side, it means that many current proto-based libraries can be
upgraded with little effort. On the down side, the learning curve will
still be pretty steep. If anybody has ideas for how to use C++11 to
simplify pattern matching and the definition of recursive tree
transformation algorithms, I'm all ears.


There is not so much way to describe something that looks like
a grammar definition anyway. BNF/EBNF is probably the simplest
way to do it.

Now on the syntactic clutter front, except wrapping everything in round
lambda or use object/function call in a hidden decltype call, I don't see
what we can do better :s


Glad it is picking up steam :D
