Re: [proto] problems with proto::matches

2013-01-07 Thread Thomas Heller

On 12/13/2012 06:44 PM, Eric Niebler wrote:

On 12/13/2012 4:51 AM, Thomas Heller wrote:

Hi,

I recently discovered a behavior which I find quite odd:
proto::matches<Expression, Grammar>::type fails when Expression is not a
proto expression. I would have expected that it just returns false in
this case. What am I missing? A patch is attached for what I think would
be a better behavior of that meta function.

Hi Thomas,

Thanks for the patch. Pros and cons to this. Pro: it works in more
situations, including yours. (Could you tell me a bit about your
situation?) Also, the implementation is dead simple and free of extra
TMP overhead.
Let me try to explain what I am up to ... I am trying to implement a DSL 
which has named parameter support (I adapted the toy example we did 3 
years ago at https://github.com/sithhell/boosties/tree/master/options).

Consider this example proto expression:

a = foo[b = 1], c = bar[d = a]

where a, b, c, and d are the named parameters, and foo and bar are some 
operations (it doesn't matter at this point what exactly those do).
Now, since bar needs to access the parameter defined as the return value 
of foo, I need to somehow concatenate the parameters. This works fine. 
However, at the very beginning of the concatenation I try to concat a 
"struct unused {};" with the "a" parameter expression. unused plays the 
role of an empty parameter list.
In order to detect whether I need to concatenate or just return the parameter 
expression, I tried to use proto::matches ... which led me to the patch. 
Hope that makes sense.
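
A minimal sketch of that detection (ParamGrammar and is_param_expr are
hypothetical names; assuming Boost.Proto and Boost.MPL):

#include <boost/mpl/bool.hpp>
#include <boost/utility/enable_if.hpp>
#include <boost/proto/proto.hpp>

struct unused {};  // the empty parameter list

// Hypothetical grammar for a single named parameter, e.g. "a = expr".
struct ParamGrammar
  : boost::proto::assign<
        boost::proto::terminal<boost::proto::_>
      , boost::proto::_
    >
{};

// Only ask proto::matches when T really is a proto expression;
// anything else (like unused) yields false instead of a compile error.
template <typename T, typename Enable = void>
struct is_param_expr : boost::mpl::false_ {};

template <typename T>
struct is_param_expr<
    T
  , typename boost::enable_if<boost::proto::is_expr<T> >::type
>
  : boost::proto::matches<T, ParamGrammar>::type
{};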


Cons: Someone might expect a non-Proto type to be treated as a terminal
of that type and be surprised at getting a false where s/he expected
true (a fair assumption since Proto treats non-expressions as terminals
elsewhere; e.g., in its operator overloads). It slightly complicates the
specification of matches. It is potentially breaking in that it changes
the template arity of proto::matches. (Consider what happens if someone
is doing mpl::quote2<proto::matches>.)

Ok. I admit I haven't thought about these details. Good points.


I'm inclined to say this is not a bug and that it's a prerequisite of
matches that Expression is a proto expression. If you want to use it
with types that aren't expressions, you can already do that:

   template<typename Expr, typename Grammar>
   struct maybe_matches
     : mpl::if_<
           proto::is_expr<Expr>
         , proto::matches<Expr, Grammar>
         , mpl::false_
       >::type
   {};

Would the above work for you? I realize that's more expensive than what
you're doing now. :-(

Sorry for the late reply ... Your proposed solution works equally fine, 
of course. I'll go with that one.



[proto] problems with proto::matches

2012-12-13 Thread Thomas Heller

Hi,

I recently discovered a behavior which I find quite odd:
proto::matches<Expression, Grammar>::type fails when Expression is not a 
proto expression. I would have expected that it just returns false in 
this case. What am I missing? A patch is attached for what I think would 
be a better behavior of that meta function.


Regards,

Thomas
Index: boost/proto/matches.hpp
===================================================================
--- boost/proto/matches.hpp (revision 81703)
+++ boost/proto/matches.hpp (working copy)
@@ -500,7 +500,7 @@
     /// [0,n), \c Ax and \c Bx are types
     /// such that \c Ax lambda-matches \c Bx
     template<typename Expr, typename Grammar>
-    struct matches
+    struct matches<Expr, Grammar, typename Expr::proto_is_expr_>
       : detail::matches_<
             typename Expr::proto_derived_expr
           , typename Expr::proto_grammar
@@ -511,13 +511,18 @@
     /// INTERNAL ONLY
     ///
     template<typename Expr, typename Grammar>
-    struct matches<Expr &, Grammar>
+    struct matches<Expr &, Grammar, typename Expr::proto_is_expr_>
       : detail::matches_<
             typename Expr::proto_derived_expr
           , typename Expr::proto_grammar
           , typename Grammar::proto_grammar
         >
     {};
+
+    template<typename Expr, typename Grammar, typename EnableIf>
+    struct matches
+      : mpl::false_
+    {};
 
 /// \brief A wildcard grammar element that matches any expression,
 /// and a transform that returns the current expression unchanged.
Index: boost/proto/proto_fwd.hpp
===================================================================
--- boost/proto/proto_fwd.hpp   (revision 81703)
+++ boost/proto/proto_fwd.hpp   (working copy)
@@ -521,7 +521,7 @@
     template<typename T, typename Void = void>
     struct domain_of;
 
-    template<typename Expr, typename Grammar>
+    template<typename Expr, typename Grammar, typename EnableIf = void>
     struct matches;
 
 // Generic expression metafunctions and



Re: [proto] Precomputing common matrix products in an expression

2012-07-15 Thread Thomas Heller
On Friday, July 13, 2012 21:51:40 Bart Janssens wrote:
> Hi guys,
> 
> I've been thinking about a feature for our DSEL, where lots of matrix
> products can occur in an expression. Part of an expression might be:
> nu_eff * transpose(nabla(u)) * nabla(u) + transpose(N(u) +
> coeffs.tau_su*u_adv*nabla(u)) * u_adv*nabla(u)
> 
> Here, u_adv*nabla(u) is a vector-matrix product that occurs twice, so
> it would be beneficial to calculate it only once. I was wondering if
> it would be doable to construct a fusion map, with as keys the type of
> each product occurring in an expression, and evaluate each member of
> the map before evaluating the actual expression. When the expression
> is evaluated, matrix products would then be looked up in the map.
> 
> Does this sound like something that's doable? I'm assuming the fold
> transform can help me in the construction of the fusion map. Note also
> that each matrix has a compile-time size, so any stored temporary
> would need to have its type computed.

In a way, this reminds me of how local variables in phoenix work.
You have the key (the variable's name), which is a distinct type, and you look 
up the value of that key in a fusion map. The only real difference is that 
local variables aren't computed lazily, but that should be a no-brainer to add. 
Might get interesting in a multi-threaded environment, though.
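
A minimal sketch of that lookup mechanism (the key type u_adv_nabla_u is
hypothetical, standing for the u_adv*nabla(u) product; values are plain
doubles for brevity):

#include <boost/fusion/include/map.hpp>
#include <boost/fusion/include/at_key.hpp>
#include <boost/fusion/include/make_pair.hpp>

// Hypothetical key: one distinct type per product to be cached.
struct u_adv_nabla_u {};

int main()
{
    // The "environment": product key -> precomputed value.
    boost::fusion::map<
        boost::fusion::pair<u_adv_nabla_u, double>
    > cache(boost::fusion::make_pair<u_adv_nabla_u>(42.0));

    // Evaluation looks the product up by its key type instead of recomputing.
    return boost::fusion::at_key<u_adv_nabla_u>(cache) == 42.0 ? 0 : 1;
}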

> 
> Cheers,


Re: [proto] _unpack transform

2012-07-11 Thread Thomas Heller

On 07/10/2012 11:18 PM, Eric Niebler wrote:

I just committed to the proto-11 codebase a new transform called
_unpack. You use it like this:

   _unpack<f0(Tfx, f1(_)...)>

Where Tfx represents any transform (primitive or otherwise), f0 is any
callable or object type, and f1(_) is an object or callable transform.
The "..." denotes pseudo-pack expansion (although it's really a C-style
vararg ellipsis). The semantics are to replace "f1(_)..." with
"f1(_child<0>), f1(_child<1>), etc.".

With this, the _default transform is trivially implemented like this:

struct _default
  : proto::or_<
        proto::when<proto::terminal<_>, proto::_value>
      , proto::otherwise<
            proto::_unpack<eval(proto::tag_of<_>(), _default(_)...)>
        >
    >
{};

...where eval is:

struct eval
{
    template<typename E0, typename E1>
    auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
    )

    template<typename E0, typename E1>
    auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
    )

    // Other overloads...
};

The _unpack transform is pretty general, allowing a lot of variation
within the pack expansion pattern. There can be any number of Tfx
transforms, and the wildcard can be arbitrarily nested. So these are all ok:

   // just call f0 with all the children
   _unpack<f0(_...)>

   // some more transforms first
   _unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>

   // and nest the wildcard deeply, too
   _unpack<f0(Tfx, f1(f2(f3(_)))...)>

I'm still playing around with it, but it seems quite powerful. Thoughts?
Would there be interest in having this for Proto-current? Should I
rename it to _expand, since I'm modelling C++11 pack expansion?

I think _expand would be the proper name. Funny enough, I proposed it 
some time ago for proto-current, and even had an implementation for it; 
the NT2 guys are using that exact implementation ;)

Maybe with some extensions.
So yes, Proto-current would benefit from such a transform.


Re: [proto] user docs for advanced features

2012-01-04 Thread Thomas Heller

On 01/03/2012 01:55 AM, Eric Niebler wrote:

Proto's users guide has been behind the times for a while. No longer.
More recent and powerful features are now documented. Feedback welcome.

Sub-domains:

http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/subdomains.html


The section is very informative and explains the concept nicely!



Per-domain as_child customization:
==
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/front_end/customizing_expressions_in_your_domain/per_domain_as_child.html


Also a very nice addition to the existing documentation. What's not 
immediately clear after reading that section is when proto::as_expr is 
called within the library, or when you want to call it as a user. Some 
clarification would be good here.



External Transforms:
===
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/back_end/expression_transformation/external_transforms.html


There is a small typo here: "Just how r_i_diculous would it be to copy..."
One addition which might be helpful to the interested reader is to 
explicitly say that proto's data "slot" is used for the external transform.


Thanks for adding this documentation!

Hope that helps,
Thomas



Re: [proto] proto::expr vs. proto::basic_expr

2011-05-15 Thread Thomas Heller
On Mon, May 16, 2011 at 7:22 AM, Eric Niebler  wrote:
> On 5/15/2011 9:19 PM, Thomas Heller wrote:
>> Hi,
>>
>> Today I experimented a little bit with phoenix and proto.
>> My goal was to decrease the compile time of phoenix. When I started the
>> development of phoenix, Eric advised me to use proto::basic_expr to reduce
>> compile times.
>> Which makes sense given the argument that, on instantiating the 
>> expression
>> node, basic_expr has a lot fewer member functions etc., thus the compiler
>> needs to instantiate less. So much for the theory.
>> In practice, sadly, this is not the case. Today I made sure that phoenix uses
>> basic_expr exclusively (did not commit the changes).
>>
>> The result of this adventure was that compile times stayed the same. I was a
>> little bit disappointed by this result.
>>
>> Does anybody have an explanation for this?
>
> Impossible to say with certainty. I suspect, though, that your use case
> is different than mine or, say, Christophe's. With xpressive or MSM,
> compiles weren't affected by pre-preprocessing, which shows that we're
> hitting limits in the speed of semantic analysis and code gen (possibly
> template instantiation). Pre-preprocessing sped up Phoenix, which shows
> that you're more hamstrung by lexing. The choice of either proto::expr
> or proto::basic_expr doesn't matter for lexing because they're both
> going to be lexed regardless.

Makes sense.

> I know when I introduced basic_expr, I ran tests that showed it was a
> perf win for xpressive. But it was a small win, on the order of about
> 5%, IIRC. YMMV.

You might get it even faster. I noticed some places in proto that still
instantiate a proto::expr:
the creation of the operators (the lazy_matches in
operators.hpp) instantiates a proto::expr, and the generator of
basic_expr is the default generator, which, when called, instantiates a
proto::expr as well. Not sure if that actually matters.
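
A minimal sketch of the kind of setup under discussion (assuming
Boost.Proto; my_actor and my_domain are made-up names).
BOOST_PROTO_BASIC_EXTENDS gives the wrapper the expression interface
without the member operator overloads:

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

template <typename Expr>
struct my_actor;

struct my_domain
  : proto::domain<proto::pod_generator<my_actor> >
{};

template <typename Expr>
struct my_actor
{
    // expression interface only; no member operator overloads generated
    BOOST_PROTO_BASIC_EXTENDS(Expr, my_actor<Expr>, my_domain)
};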


[proto] proto::expr vs. proto::basic_expr

2011-05-15 Thread Thomas Heller
Hi,

Today I experimented a little bit with phoenix and proto.
My goal was to decrease the compile time of phoenix. When I started the 
development of phoenix, Eric advised me to use proto::basic_expr to reduce 
compile times.
Which makes sense given the argument that, on instantiating the expression 
node, basic_expr has a lot fewer member functions etc., thus the compiler needs to 
instantiate less. So much for the theory.
In practice, sadly, this is not the case. Today I made sure that phoenix uses 
basic_expr exclusively (did not commit the changes).

The result of this adventure was that compile times stayed the same. I was a 
little bit disappointed by this result.

Does anybody have an explanation for this?

Regards,
Thomas


Re: [proto] using phoenix in MSM

2011-05-09 Thread Thomas Heller
On Monday, May 09, 2011 11:18:50 PM Christophe Henry wrote:
> Hi,
> 
> Thanks to Eric's latest changes (which removed conflicts between MSM
> and phoenix), and Thomas' help, I added the possibility to use phoenix
> as an action inside eUML.
> There is still quite a bit to be done to be able to do with phoenix
> the same stuff as with standard eUML, but for the moment I can:
> - define a guard or action with a phoenix actor
> - provide a guard with eUML and an action with phoenix and vice-versa
> - use my own placeholders instead of arg1.. (_event, _fsm, ...)
> 
> I have all in a branch
> (https://svn.boost.org/svn/boost/branches/msm/msm_phoenix) with a
> simple example.

Small remark:
Global objects seem to be going out of fashion: they have a negative impact on 
compile times and binary size.
Instead of using phoenix::function at global (namespace) scope, you could use
the newly introduced macros to define equivalent free functions.
For example, instead of this:

boost::phoenix::function<process_play_impl> process_play;

you could just have:

BOOST_PHOENIX_ADAPT_CALLABLE(process_play, process_play_impl, 1)

For documentation see:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/modules/function/adapting_functions.html
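
For illustration, a sketch of the callable being adapted (the signature of
process_play_impl is a guess here):

struct process_play_impl
{
    typedef void result_type;

    template <typename Fsm>
    void operator()(Fsm & fsm) const
    {
        // process the "play" event on the state machine ...
    }
};

// After BOOST_PHOENIX_ADAPT_CALLABLE(process_play, process_play_impl, 1),
// process_play(arg1) builds a lazy phoenix expression and no global
// phoenix::function object is needed.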

> It's not a huge amount but usable. As I only allow phoenix with a
> define (for security), I now have 2 possibilities:
> - keep working in the branch and release with 1.48
> - release with 1.47. There is no risk of breaking anything as we have
> the #define so it's safe.
> 
> I'm very tempted to release with 1.47 so that Michael or whoever is
> interested would have a chance to try it out and tell me what he'd
> like to see there first, without having to get a branch. Plus it would
> be one more use case of phoenix.
> Preconditions:
> - phoenix is really out at 1.47

It has been merged to release. So there is nothing holding it back now ;)

> - I have a small worry with expression::argument. The doc indicates
> it starts at index 0, but when I try, it starts at 1. Is the doc not
> up-to-date, the code, or I'm plain wrong?

The docs are wrong, thanks for pointing it out. Consider it fixed.

> Is there some interest to see MSM support phoenix, even only partly, in 1.47?


Re: [proto] [boost] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk (was: [phoenix] not playing nice with other libs)

2011-05-08 Thread Thomas Heller
On Sunday, May 08, 2011 07:36:32 PM Hartmut Kaiser wrote:
> 
> > My pre-preprocessing work continues, and all EDSLs that use Proto will
> > benefit from faster compiles. I'd like to thank Hartmut for his work on
> > Wave and Thomas for getting me set up.
> 
> First results:
> 
>  Spirit/Phoenix V2, MSVC, speedup ~8% (no opt), ~6% (opt)
>  Spirit/Phoenix V3, MSVC, speedup ~5% (no opt), ~4% (opt), includes full
> preprocessed headers for Phoenix

Just to give you an idea of the impact of the recent events:

g++ -I. libs/phoenix/test/core/primitives_tests.cpp -c

real    0m1.525s
user    0m1.370s
sys     0m0.137s

g++ -I. libs/phoenix/test/core/primitives_tests.cpp -c -DBOOST_PROTO_DONT_USE_PREPROCESSED_FILES -DBOOST_PHOENIX_DONT_USE_PREPROCESSED_FILES

real    0m2.626s
user    0m2.463s
sys     0m0.157s

g++ -I. libs/phoenix/test/core/primitives_tests.cpp -c -DBOOST_PROTO_DONT_USE_PREPROCESSED_FILES

real    0m2.414s
user    0m2.283s
sys     0m0.123s

g++ -I. libs/phoenix/test/core/primitives_tests.cpp -c -DBOOST_PHOENIX_DONT_USE_PREPROCESSED_FILES

real    0m1.729s
user    0m1.607s
sys     0m0.117s

g++ -I. libs/spirit/phoenix/test/core/primitives_tests.cpp -c

real    0m2.291s
user    0m2.123s
sys     0m0.160s


This means that phoenix V3 compiles up to 50% faster than V2!
This all depends on the use case, of course. Other test cases show that V3 
either compiles as fast or a little faster!
 
> Thanks!
I second that! Thanks!
Thomas




Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk (was: [phoenix] not playing nice with other libs)

2011-05-08 Thread Thomas Heller
On Sunday, May 08, 2011 02:48:19 PM Thomas Heller wrote:
> On Sunday, May 08, 2011 02:05:43 PM Eric Niebler wrote:
> > On 5/2/2011 6:18 PM, Thomas Heller wrote:
> > > On Mon, May 2, 2011 at 12:54 PM, Eric Niebler  
> wrote:
> > >> Phoenix is changing the following fundamental constants:
> > >>
> > >>  BOOST_PROTO_MAX_ARITY
> > >>  BOOST_MPL_LIMIT_METAFUNCTION_ARITY
> > >>  BOOST_PROTO_MAX_LOGICAL_ARITY
> > >>  BOOST_RESULT_OF_NUM_ARGS
> > >>
> > >> IMO, Phoenix shouldn't be touching these. It should work as best it can
> > >> with the default values. Users who are so inclined can change them.
> > > 
> > > Eric,
> > > This problem is well known. As of now I have no clue how to fix it 
properly.
> > 
> > 
> > The Proto pre-preprocessing work on trunk has progressed to the point
> > where compiling with all the arities at 10 now compiles *faster* than
> > unpreprocessed Proto with the arities at 5. So I've bumped everything to 10.

Phoenix is up and running again and should play nice with other libs!

Eric, thanks for pointing it all out and actively working on a fix for this!



Re: [proto] [phoenix][msm] proto limits increased to 10, phoenix broken on trunk (was: [phoenix] not playing nice with other libs)

2011-05-08 Thread Thomas Heller
On Sunday, May 08, 2011 02:05:43 PM Eric Niebler wrote:
> On 5/2/2011 6:18 PM, Thomas Heller wrote:
> > On Mon, May 2, 2011 at 12:54 PM, Eric Niebler  
wrote:
> >> Phoenix is changing the following fundamental constants:
> >>
> >>  BOOST_PROTO_MAX_ARITY
> >>  BOOST_MPL_LIMIT_METAFUNCTION_ARITY
> >>  BOOST_PROTO_MAX_LOGICAL_ARITY
> >>  BOOST_RESULT_OF_NUM_ARGS
> >>
> >> IMO, Phoenix shouldn't be touching these. It should work as best it can
> >> with the default values. Users who are so inclined can change them.
> > 
> > Eric,
> > This problem is well known. As of now I have no clue how to fix it properly.
> 
> 
> The Proto pre-preprocessing work on trunk has progressed to the point
> where compiling with all the arities at 10 now compiles *faster* than
> unpreprocessed Proto with the arities at 5. So I've bumped everything to 10.
> 
> A few things:
> 
> 1) Phoenix is now broken. My Proto work involved pruning some
> unnecessary headers, and Phoenix isn't including everything it needs.
> Thomas, I'll leave this for you to fix.
> 
> 2) Phoenix is actually setting Proto's max arity to 11, not to 10. I
> think this is unnecessary. Locally, I un-broke Phoenix and ran its tests
> with 10, and only one test broke. That was due to a bug in Phoenix. I'm
> attaching a patch for that.
> 
> 3) After the patch is applied, Phoenix should be changed such that it
> includes proto_fwd.hpp and then acts accordingly based on the values of
> the constants. IMO, that should mean graceful degradation of behavior
> with lower arities, until a point such that Phoenix cannot function at
> all, in which case it should #error out.
> 
> 4) Phoenix no longer needs to change BOOST_MPL_LIMIT_METAFUNCTION_ARITY
> and BOOST_RESULT_OF_NUM_ARGS, but BOOST_RESULT_OF_NUM_ARGS should be
> given the same treatment as (3).
> 
> 5) MSM should do the same.
> 
> My pre-preprocessing work continues, and all EDSLs that use Proto will
> benefit from faster compiles. I'd like to thank Hartmut for his work on
> Wave and Thomas for getting me set up.

Eric, thanks for doing the dirty work here!

This is awesome! With the latest changes, Phoenix V3 now compiles faster than V2!
Wee!
Just about to fix everything ...


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Thomas Heller
On Thu, May 5, 2011 at 5:47 AM, Eric Niebler  wrote:
> On 5/5/2011 2:27 AM, Bart Janssens wrote:
>> On Wed, May 4, 2011 at 7:55 PM, Eric Niebler 
>>  wrote:
>>> Bart, how high can N go in your EDSL? Is it really arbitrarily large?
>>
>> I didn't hit any limit in the real application (most complicated case
>> is at 9) and just did a test that worked up to 30. Compilation (debug
>> mode) took about 2-3 minutes at that point, with some swapping, so I
>> didn't push it any further.
>>
>> I've attached the header defining the grouping grammar, there should
>> be no dependencies on the rest of our code.
>
> We're talking about picking a sane and useful default for
> BOOST_PROTO_MAX_ARITY. You seem to be saying that 10 would cover most
> practical uses of your EDSL. Is that right?

I think having BOOST_PROTO_MAX_ARITY set to 10 is a good default!


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Thomas Heller
On Wed, May 4, 2011 at 10:58 AM, Eric Niebler  wrote:
> (cross-posting to the Proto list and cc'ing Hartmut.)
>
> On 5/2/2011 6:18 PM, Thomas Heller wrote:
>> On Mon, May 2, 2011 at 12:54 PM, Eric Niebler  wrote:
>>> Phoenix is changing the following fundamental constants:
>>>
>>>  BOOST_PROTO_MAX_ARITY
>>>  BOOST_MPL_LIMIT_METAFUNCTION_ARITY
>>>  BOOST_PROTO_MAX_LOGICAL_ARITY
>>>  BOOST_RESULT_OF_NUM_ARGS
>>>
>>> IMO, Phoenix shouldn't be touching these. It should work as best it can
>>> with the default values. Users who are so inclined can change them.
>>
>> Eric,
>> This problem is well known. As of now I have no clue how to fix it properly.
>>
>> Let me sketch why I changed these constants:
>> 1) Phoenix V2 has a composite limit of 10:
>>    This is equivalent to the number of child expressions an expression can 
>> hold.
>>    This is controlled by BOOST_PROTO_MAX_ARITY for the number of
>>    template arguments for proto::expr and proto::basic_expr
>> 2) Boost.Bind can take up to 10 parameters in the call to boost::bind
>
> It's still not clear to me why you're changing
> BOOST_MPL_LIMIT_METAFUNCTION_ARITY and BOOST_PROTO_MAX_LOGICAL_ARITY.

BOOST_MPL_LIMIT_METAFUNCTION_ARITY:
./boost/proto/matches.hpp:56:8: error: #error
BOOST_MPL_LIMIT_METAFUNCTION_ARITY must be at least as large as
BOOST_PROTO_MAX_ARITY

and
BOOST_PROTO_MAX_LOGICAL_ARITY:
./boost/proto/proto_fwd.hpp:326: error: provided for ‘template<...> struct boost::proto::and_’

But I guess this can be fixed by simply splitting the call to proto::and_.
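
For illustration, the splitting could look like this sketch (G0..G12 stand
for whatever grammars were being combined):

// Instead of proto::and_<G0, ..., G12> (too many arguments), nest two
// and_s, each below BOOST_PROTO_MAX_LOGICAL_ARITY:
struct big_and
  : proto::and_<
        proto::and_<G0, G1, G2, G3, G4, G5, G6>
      , proto::and_<G7, G8, G9, G10, G11, G12>
    >
{};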


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Thomas Heller
On Wed, May 4, 2011 at 10:58 AM, Eric Niebler  wrote:
> (cross-posting to the Proto list and cc'ing Hartmut.)
> On 5/2/2011 6:18 PM, Thomas Heller wrote:
>> On Mon, May 2, 2011 at 12:54 PM, Eric Niebler  wrote:

>>> Phoenix is changing the following fundamental constants:
>>>
>>>  BOOST_PROTO_MAX_ARITY
>>>  BOOST_MPL_LIMIT_METAFUNCTION_ARITY
>>>  BOOST_PROTO_MAX_LOGICAL_ARITY
>>>  BOOST_RESULT_OF_NUM_ARGS
>>>
>>> IMO, Phoenix shouldn't be touching these. It should work as best it can
>>> with the default values. Users who are so inclined can change them.
>>
>> Eric,
>> This problem is well known. As of now I have no clue how to fix it properly.
>>
>> Let me sketch why I changed these constants:
>> 1) Phoenix V2 has a composite limit of 10:
>>    This is equivalent to the number of child expressions an expression can 
>> hold.
>>    This is controlled by BOOST_PROTO_MAX_ARITY for the number of
>>    template arguments for proto::expr and proto::basic_expr
>> 2) Boost.Bind can take up to 10 parameters in the call to boost::bind
>
> It's still not clear to me why you're changing
> BOOST_MPL_LIMIT_METAFUNCTION_ARITY and BOOST_PROTO_MAX_LOGICAL_ARITY.

I don't remember the exact reasons anymore ... just checked the proto
code again ...
Seems like there have been some changes regarding these macros.
At the time I wrote the code for these macro redefinitions, it was
necessary to make phoenix compile.

>> The default BOOST_PROTO_MAX_ARITY is 5.
>
> I see. So this is inherently a limitation in Proto. I set Proto's max
> arity to 5 because more than that causes compile time issues. That's
> because there are N*M proto::expr::operator() overloads, where N is
> Proto's max arity and M is Proto's max function call arity. However:
>
> - IIRC, Phoenix doesn't use proto::expr. It uses proto::basic_expr, a
> lighter weight expression container that has no member operator overloads.

Correct. But we also need the arity in proto::call, proto::or_, and maybe
some others.

> - Compile time could be improved by pre-preprocessing, like MPL. That's
> something I've been meaning to do anyway.

Yes, we (Hartmut and I) have been saying that for quite some time now.

> - The max function-call arity can already be set separately from the max
> number of child expressions.
>
> - The compile-time problem is a temporary one. Once more compilers have
> support for variadic templates, all the operator() overloads can be
> replaced with just one variadic one. Which should be done anyway.

Right.

> The solution then is in some combination of (a) allowing basic_expr to
> have a greater number of child expressions than expr, (b) bumping the
> max arity while leaving the max function call arity alone, (c)
> pre-preprocessing, (d) adding a variadic operator() for compilers that
> support it, and (e) just living with worse compile times until compilers
> catch up with C++0x.
>
> Not sure where the sweet spot is, but I'm pretty sure there is some way
> we can get Proto to support 10 child expressions for Phoenix's usage
> scenario. It'll take some work on my end though. Help would be appreciated.

Yes, I was thinking of possible solutions:
1) Splitting the expressions in half, something like this:

proto::basic_expr<
    tag
  , proto::basic_expr<
        sub_tag
      , Child0, ..., Child(BOOST_PROTO_MAX_ARITY)
    >
  , proto::basic_expr<
        sub_tag
      , Child(BOOST_PROTO_MAX_ARITY), ..., Child(BOOST_PROTO_MAX_ARITY * 2)
    >
>

This would only need some additional work on the phoenix side.
Not sure if it's actually worth it ... or even working.

2) Have some kind of completely variadic proto expression: not by having
variadic templates, but by creating the list of children as some kind of
cons list. This might require a quite substantial change in proto; I
haven't fully investigated that option.
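
A minimal sketch of that cons-list idea (purely illustrative, not Proto's
actual API):

struct nil {};

// Compile-time cons cell: each node holds one child plus the rest of the list.
template <typename Head, typename Tail = nil>
struct cons
{
    Head head;
    Tail tail;
};

// An expression node would then carry a single, arbitrarily long child list:
template <typename Tag, typename Children>
struct cons_expr
{
    Children children;
};

// e.g. a plus node with two terminals:
// cons_expr<tag::plus, cons<Term0, cons<Term1> > >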

>> The BOOST_RESULT_OF_NUM_ARGS constant needed to be changed because I
>> needed to provide 11 arguments in a "call" to boost::result_of. But I
>> guess a workaround
>> can be found in this specific case.
>
> What workaround did you have in mind?

Calling F::template result<...> directly, basically reimplementing
result_of for phoenix's own limits.
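
A sketch of that workaround (phoenix_result_of is a made-up name; the
commented ellipses stand for the remaining arguments up to A10):

// Bypass boost::result_of and its BOOST_RESULT_OF_NUM_ARGS limit by asking
// the function object's nested result<> template directly.
template <typename F, typename A0, typename A1 /* ..., A10 */>
struct phoenix_result_of
{
    typedef typename F::template result<F(A0, A1 /* ..., A10 */)>::type type;
};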

>> I wonder what qualifies as "User". Phoenix is certainly a user of mpl,
>> result_of and proto. Spirit is a user of proto and phoenix. Spirit needs an 
>> arity of 7 (IIRC).
>
> By "user" I meant "end-user" ... a user of Boost. You have to consider
> that someone may want to use Phoenix and MPL and Numeric and ... all in
> the same translation unit. We

Re: [proto] Using Phoenix inside eUML: mixing grammars

2011-05-01 Thread Thomas Heller
On Monday, April 25, 2011 06:39:14 PM Christophe Henry wrote:
> Hi Thomas,
> 
> Sorry to come back to the subject so late, I didn't manage before :(
> 
> > If you want to use it as a transform you need the evaluator with an
> > appropriate action that does the desired transform... here is an
> > example:
> >
> > struct BuildEventPlusGuard
> >   : proto::when<
> >         proto::subscript<proto::terminal<event_tag>,
> >             phoenix::meta_grammar >,
> >         TempRow<none, proto::_left, none, none,
> >             phoenix::evaluator(proto::_right, some_cool_action())>()
> >     >
> > {};
> >
> > Now, some_cool_action can do the transform that BuildGuards was doing.
> 
> Hmmm, I get a compiler error, which was expected (would be too easy
> otherwise ;- ) ), but the error is surprising. The error is that
> phoenix::evaluator seems to believe some_cool_action should be a
> random access fusion sequence (expects an environment?).

You are right ... sloppy on my side ... evaluator expects a context, which is 
a tuple containing the environment and the actions: http://goo.gl/24fU9

> Anyway, I am hoping not to write any cool transform but simply save
> the type of the phoenix expression so that I can re-create an actor
> later. If I need to rewrite differently what BuildGuards was doing, I
> gain little. I would like phoenix to do the grammar parsing and
> building of actor.

It does ... just pass on proto::_right and it should be good:

struct BuildEventPlusGuard
  : proto::when<
        proto::subscript<
            proto::terminal<event_tag>
          , phoenix::meta_grammar   // match the phoenix actor
        >
      , TempRow<
            none
          , proto::_left
          , none
          , none
          , proto::_right           // pass the type along, which is a phoenix actor
        >(proto::_right)            // pass the object along, which is the actor (1)
    >
{};

(1): Here you can work around the issue with the possibly uninitialized members. 
Just copy the phoenix actor (should be cheap, if not optimized away completely).

> Thanks,
> Christophe


Re: [proto] Manipulating an expression tree

2011-04-08 Thread Thomas Heller
On Fri, Apr 8, 2011 at 2:03 PM, Karsten Ahnert
 wrote:
>>
>> Why not just write a transform that calculates one derivative and call
>> it N times to get the Nth derivative?
>
> Yes, that may be easy if you need two or three higher derivatives. For my
> application I need 10 to 20 or even more. I guess that currently no
> compiler can handle such large trees. For example, the simple product rule
> will result in 2^N terms.

Point taken. The expressions might get very long.
However, you could do an algebraic simplification transform (for
example constant propagation)
after every derivation step, thus reducing the number of terms.
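
A minimal sketch of such a simplification pass, assuming Boost.Proto (the
rule set, the names, and the int-only restriction are mine):

namespace proto = boost::proto;

struct add_ints : proto::callable
{
    typedef int result_type;
    int operator()(int a, int b) const { return a + b; }
};

struct fold_constants
  : proto::or_<
        // 3 + 4  -->  7 : fold a plus of two int terminals into one terminal
        proto::when<
            proto::plus<proto::terminal<int>, proto::terminal<int> >
          , proto::_make_terminal(
                add_ints(proto::_value(proto::_left), proto::_value(proto::_right))
            )
        >
        // otherwise recurse into plus nodes (one bottom-up pass; repeat to a
        // fixed point for full constant propagation)
      , proto::when<
            proto::plus<proto::_, proto::_>
          , proto::_make_plus(fold_constants(proto::_left), fold_constants(proto::_right))
        >
        // and leave everything else untouched
      , proto::otherwise<proto::_>
    >
{};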

> But in the case of the product rule, one can use the Leibniz rule: if
> f(x) = g1(x) g2(x), then the N-th derivative of f(x) is
> sum_{k=0}^{N} binomial(N, k) g1^(k)(x) g2^(N-k)(x), where g1^(k) is the
> k-th derivative of g1. This is exactly the point where I need
> intermediate values, to store previously calculated values of the
> derivatives of g1 and g2.
>
> Nevertheless, thank you for your example. I am a beginner with proto such
> that every example is highly illuminating.


Re: [proto] Manipulating an expression tree

2011-04-08 Thread Thomas Heller
On Fri, Apr 8, 2011 at 10:56 AM, Karsten Ahnert
 wrote:
 If you need to compute intermediate values, you can use a transform to
 build a parallel structure.
>>>
>>> Do you mean to build an entire new tree, or just to replace some nodes?
>>
>> If only some nodes have associated intermediate result, then you could
>> just replace some nodes.
>
> Ok, this is a clear hint.
>
>>> In my current algorithm I use callable contexts to do the work. I think
>>> this is more favorable since I have to evaluate the tree N times to
>>> obtain
>>> the result.
>>
>> Why does that matter? Transforms are almost always a better choice.
>>
>>> I think it would be nice to replace some nodes and customizing
>>> the evaluation context, such that these nodes can be used to store the
>>> intermediates.
>>
>> If you're doing calculation *and* tree transformation, then drop the
>> callable contexts. They can't do the job.
>
> First I do tree transformation and then calculation. A Callable context
> will not do the job, since one only gets the tag of the current node, but
> not the node itself. So I have to implement my own context.
>
> I am not sure if transforms can do that job. It is definitely not possible
> to obtain the full tree for 10th derivative. Maybe some other tricks are
> possible with transforms. At the moment I don't understand them in detail,
> but I will try to change this. Thanks for your advice.

Why not just write a transform that calculates one derivative and call
it N times to get the Nth derivative?

Something like:

struct derivative
  : proto::or_<
        proto::when<
            proto::plus<proto::_, proto::_>
          , proto::_make_plus(derivative(proto::_left), derivative(proto::_right))
        >
      , proto::when<
            proto::multiplies<proto::_, proto::_>
          , proto::_make_plus(
                proto::_make_multiplies(derivative(proto::_left), proto::_right)
              , proto::_make_multiplies(proto::_left, derivative(proto::_right))
            )
        >
        /* add more derivation rules */
    >
{};

template <unsigned int N>
struct derive_impl
{
    template <typename Sig>
    struct result;

    template <typename This, typename Expr>
    struct result<This(Expr)>
    {
        typedef typename boost::result_of<derivative(Expr)>::type
            first_derivative;
        typedef typename
            boost::result_of<derive_impl<N-1>(first_derivative)>::type type;
    };

    template <typename Expr>
    typename result<derive_impl(Expr const &)>::type
    operator()(Expr const & expr) const
    {
        return derive_impl<N-1>()(derivative()(expr));
    }
};

template <>
struct derive_impl<0>
{
    template <typename Sig>
    struct result;

    template <typename This, typename Expr>
    struct result<This(Expr)>
      : result<This(Expr const &)>
    {};

    template <typename This, typename Expr>
    struct result<This(Expr const &)>
    { typedef Expr type; };

    template <typename Expr>
    typename result<derive_impl(Expr const &)>::type
    operator()(Expr const & expr) const
    {
        return expr;
    }
};

template <unsigned int N, typename Expr>
typename boost::result_of<derive_impl<N>(Expr const &)>::type
derive(Expr const & expr)
{
    return derive_impl<N>()(expr);
}


Re: [proto] Using Phoenix inside eUML: mixing grammars

2011-03-21 Thread Thomas Heller
On Wed, Mar 16, 2011 at 9:56 PM, Christophe Henry
 wrote:
> Hi,

Sorry for the late reply ...

> I have my eUML grammar, defing, for example a transition as:
>
> SourceState+ Event [some_guard] == TargetState
>
> I want to write for some_guard an expression of a phoenix grammar. The
> relevant part of the expression is:
> Event [some_guard]
>
> Where the corresponding part of my grammar is:
>
> struct BuildEventPlusGuard
>    : proto::when<
>            proto::subscript<proto::terminal<event_tag>, BuildGuards >,
>            TempRow<none, proto::_left, none, none, BuildGuards(proto::_right)>()
>        >
> {};
>
> BuildGuards is, at the moment, a proto::switch_ grammar, which I want
> to replace with something matching a phoenix grammar and returning me
> a phoenix::actor, which I will then save into TempRow.
> I suppose I could typeof/decltype the phoenix expression but it
> doesn't seem like the best solution, I'd prefer to delay this.
> Is there something like a phoenix grammar which I can call to check if
> the expression is valid, and if yes, which will return me the correct
> phoenix::actor?

There is boost::phoenix::meta_grammar, which can be used to check for
valid phoenix expressions. It is really just the grammar; you can't
use it as a transform.
In your case you can reuse the meta grammar in a great way to restrict
certain constructs:

struct my_custom_phoenix_grammar
  : proto::switch_<my_custom_phoenix_grammar>
{
    template <typename Tag>
    struct case_ : meta_grammar::case_<Tag> {};
};

The above, by default, allows everything that's in meta_grammar, with
the ability to override some of meta_grammar's rules.
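
For example, a single rule can be knocked out again along these lines (a
hypothetical sketch; tag::while_ is the tag of phoenix's while_ statement):

// Disallow while_(...)[...] in the restricted grammar by overriding its
// case with a never-matching grammar.
template <>
struct my_custom_phoenix_grammar::case_<boost::phoenix::tag::while_>
  : proto::not_<proto::_>
{};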

If you want to use it as a transform you need the evaluator with an
appropriate action that does the desired transform... here is an
example:

 struct BuildEventPlusGuard
    : proto::when<
            proto::subscript<proto::terminal<event_tag>,
                phoenix::meta_grammar >,
            TempRow<none, proto::_left, none, none,
                phoenix::evaluator(proto::_right, some_cool_action())>()
        >
 {};

Now, some_cool_action can do the transform that BuildGuards was doing.

> Second question. Now it's becoming more "interesting". And not easy to
> explain :(
>
> For eUML, a guard can be defined as "g1 && g2", where g1 and g2 are
> functors, taking 4 arguments. For example (to make short):
> struct g1_
> {
>    template <class EVT, class FSM, class SourceState, class TargetState>
>    void operator()(EVT const&, FSM&, SourceState&, TargetState&)
>    {
>        ...
>    }
> };
>  g1_ g1;
>
> The fact that there are 4 arguments is the condition to make this work
> without placeholders.
>
> I 'm pretty sure that, while this clearly should be a function for
> phoenix, I would not like the syntax:
> g1(arg1,arg2,arg3,arg4) && g2(arg1,arg2,arg3,arg4).
>
> Is it possible to define g1 and g2 as custom terminals, but still get
> them treated like functions?

Yes!

Here is an example:

template <typename G>
struct msm_guard
{};

template <typename Expr>
struct msm_guard_actor;

template <typename G>
struct msm_guard_expression
  : phoenix::terminal<msm_guard<G>, msm_guard_actor>
{};

template <typename Expr>
struct msm_guard_actor
{
    typedef actor<Expr> base_type;
    base_type expr;
    msm_guard_actor(base_type const & base) : expr(base) {}

    // define the operator() overloads here to allow something
    // that michael suggested. The result should be full blown phoenix
    // expressions (BOOST_PHOENIX_DEFINE_EXPRESSION)
};

namespace boost { namespace phoenix {
    namespace result_of
    {
        template <typename G>
        struct is_nullary<custom_terminal<msm_guard<G> > > : mpl::false_ {};
    }

    template <typename G>
    struct is_custom_terminal<msm_guard<G> > : mpl::true_ {};

    template <typename G>
    struct custom_terminal<msm_guard<G> >
      : proto::call<
            G(
                proto::functional::at(_env, mpl::int_<1>())
              , proto::functional::at(_env, mpl::int_<2>())
              , proto::functional::at(_env, mpl::int_<3>())
              , proto::functional::at(_env, mpl::int_<4>())
            )
        >
    {};
}}

I hope that the above example helps you and clarifies how to customize
phoenix V3 even further.
Note on that terminal thing: it's not in trunk yet ... I did it on
another computer, because the spirit port needed it ... I don't have
access to that computer right now ... I will commit it later today.

> (To solve the problem, my current implementation generates me a
> functor of type And_ which has an operator () with 4
> arguments).
>
> Thanks,
> Christophe


Re: [proto] My own lambda for MSM / wish list

2011-03-16 Thread Thomas Heller
On Monday, March 14, 2011 10:12:09 PM Christophe Henry wrote:
> >Calling phoenix expressions from the statement module returns void.
> >Calling phoenix expressions from any other module returns whatever ...
> >depending on the "C++ sense".
> 
> It's ok, I can live with it, though I'll need to find a way around
> because I do need this return stuff.
> 
> >> If you allow a short criticism, I'd say that phoenix is great and
> >> maybe even offers all I want, but the doc is doing it a great
> >> disservice. I got so frustrated I started my own implementation and
> >> seeing my lack of time, it means something ;-)
> >
> >Maybe you should have asked before you started re-implementing
> >everything ;)
> 
> And maybe I did ;-)
> As a matter of fact, I did, twice. First at the BoostCon09 I talked
> about it with JdG, but my question was not well formulated, so I'm not
> sure he got what I wanted to do.
> Second time at BoostCon10, I mentioned it during my talk and Eric's
> view was that what I wanted to do was not possible with the state of
> Phoenix at that time (though maybe I again failed to explain my
> point).

Well, because it wasn't possible at the time, I assume ;)
Times have changed since then ... 

> Then I had a look at the doc and still didn't find my answers.
> 
> So, I decided to invest some time (not too long) to present using a
> basic implementation what I was looking for.
> At least with this example, it's easier to discuss, I can ask much
> more targeted questions and I had a lot of fun in the process :)
> 
> >Phoenix comes with a whole bunch of testcases, which can be seen as
> >examples as well.
> 
> I had a look at that too, but I looked at the "statement" testcases,
> which still didn't answer my question about return types. True, I
> didn't think of looking into "operator".

Sure ... the things about return types can't be found in the examples ... To 
quote the docs:

"Unlike lazy functions and lazy operators, lazy statements always return 
void."
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/modules/statement.html



> >I tried to document the internals. What do you think is missing?
> >The questions you had could have all been solved by looking at the
> >documentation, there wasn't any need to know the internals.
> 
> What I wanted to know:
> - can I pass a phoenix expression to a decltype / BOOST_TYPEOF and
> default-construct it?

In general, no, because of various terminals that need to be initialized.
You can however limit the phoenix grammar to only those expressions that can 
be default constructed.
Why not copy construct them?

> - what is the return value of a phoenix expression?

boost::result_of<Actor(A0, ..., AN)>::type

This is a little hidden in the docs ... but it's all here:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actor.html

Follow the link to "Polymorphic Function Object".

> - how do I add stuff I want (return for ex.)

https://svn.boost.org/svn/boost/trunk/doc/html/phoenix/examples/adding_an_expression.html

> Do I really find this in the doc?
> With the ref/cref inside the lambda, it's also not shown, but I admit
> I could have thought about it by myself.
> 
> >Ok, I admit, the whole capture by value semantics probably isn't
> >discussed at full length. Of course there is always room for
> >improvement! Do you have anything specific?
> 
> Ah, value semantics isn't my main interest anyway. Where I really
> started to doubt was reading the internals section, then I came to
> this:
> 
> // Transform plus to minus
> template <>
> struct invert_actions::when<rule::plus>
>   : proto::call<
>         proto::functional::make_expr<proto::tag::minus>(
>             phoenix::evaluator(proto::_left, phoenix::_context)
>           , phoenix::evaluator(proto::_right, phoenix::_context)
>         )
>     >
> {};
> 
> I understand a bit of proto but this mix of proto:: and phoenix:: is
> just killing me. It just doesn't fit into my standard design layered
> model. Either I work at the proto layer, or at the phoenix layer, not
> at both. Having to understand both is simply increasing the cost of
> playing with phoenix.

Well, the phoenix internals are all implemented using proto ... if you dive 
deep into the actions you need both, stuff from proto and stuff from 
phoenix. This part of the library is just proto using stuff that was defined 
within phoenix ... 
You need to know what the evaluator and what actions are:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actor.html#phoenix.inside.actor.evaluation
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actions.html
 
> Now, I have the feeling you think I'm here for a round of phoenix
> bashing. It can't be less true. I'm a big admirer of Joel's work and I
> think phoenix is a really cool library. My problem is simple, I want
> to combine ET with decltype and bring it into new places (MSM, MPL,
> proto for example), and I need to answer 2 questions:
> - can phoenix fulfill my 

Re: [proto] My own lambda for MSM / wish list

2011-03-14 Thread Thomas Heller
On Monday, March 14, 2011 01:39:41 PM Christophe Henry wrote:
> Hi Thomas,
> 
> >> Row < source_state , event , target_state, action , guard >
> >
> >I suggest you look into how spirit deals with semantic actions; it
reminds me of exactly this:
> Ok I will, thanks.

Keep in mind, spirit uses Phoenix V2, but the idea is the same ...

> >Phoenix actors are PODs; they are default constructible. Do you have a
> >testcase where this is not the case?
> 
> Unfortunately no more, it was quite some time ago, maybe I did it with
> V2. I got something which was not usable by BOOST_TYPEOF.
> I'll try it again.
> 
> >> std::set s2;
> >
> >As Eric noted, decltype is the answer. I can't see why this should not
> >be possible within phoenix.
> 
> Wait, I'm confused, it IS possible, or it SHOULD be possible? Can I
> pass any valid phoenix expression? Not just a simple operator
> expression, but an if/else, while, or an expression made of several
> statements?
> I want an arbitrary phoenix expression.
> That I need a decltype is clear and what I'm already doing. But an
> expression containing references, pointers, or even an int would be of
> no use for a decltype.

Yes, this is a problem. I don't know how this can be solved without actually 
passing the expression.

> >> std::transform(vec.begin(),vec.end(),vec.begin(), ++_1 + foo_(_1) );
> >
> >To answer your question, phoenix actors return their last statement. In
> >your case the result of ++_1 + foo_(_1) is returned.
> 
> Hmmm now I'm lost. Just after you write that if_/else_ returns nothing.

See below.

> >> std::transform(vec.begin(),vec.end(),vec.begin(),
> >> if_(_1)[ ++_1,return_(++_1)].else_ [return_(++_1)] );
> >
> >Ok, if_(...)[...] and if(...)[...].else_[...] are statements in phoenix,
> >meaning they return nothing (aka void). The answer to your question is
> >to simply use phoenix::if_else which is in the operator module because
> >it emulates the ?: operator.
> 
> Ok but this is just a partial answer, right? What about while / do or
> whatever I might need?
> I would like a clear statement: If I have an expression composed of
> any number of statements of any type and any order inside the full
> extent of phoenix (say if_().else_[], while_(), ++_1,fun_()), what is
> the return value?
>
> From what you write I understand "the last statement except in some
> cases". Which ones?

Calling phoenix expressions from the statement module returns void.
Calling phoenix expressions from any other module returns whatever ...
depending on the "C++ sense".
You might ask, for example:
What does _1 + _2 return?

The answer is: it returns a callable expression. When the expression gets
called with two arguments, that call will return the result of the
addition, or whatever operator+ is overloaded to do for the types passed ...

In Phoenix you also have the overloaded comma operator, which emulates
sequences.
Now, there is no return statement in phoenix. A call to a phoenix expression
will return whatever the last expression (now talking in the C++ sense)
returns, where a phoenix expression can be composed and maybe arbitrarily
complex. This can be void or something totally different.

There are only some cases where the return value is fixed to a certain type.
In particular, everything from the "Statement" module was defined to return
void to match the C++ semantics.
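
A short sketch of those semantics (a minimal example; Phoenix V3 assumed,
pulled in wholesale via boost/phoenix.hpp):

#include <boost/phoenix.hpp>
#include <iostream>

int main()
{
    using namespace boost::phoenix::arg_names;

    // _1 + _2 is a callable expression; the call returns the addition result.
    std::cout << (_1 + _2)(40, 2) << '\n';         // prints 42

    // The comma operator emulates sequences: the last expression's result
    // is the result of the whole call.
    int i = 20;
    std::cout << (_1 = _1 + 1, _1 * 2)(i) << '\n'; // i becomes 21, prints 42
}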

> >As Eric already noted, Phoenix captures everything by value. There is no
> >special syntax needed to capture other variables in scope (if you want
> >to capture them by value, meaning that they will get copied into the
> >phoenix expression). Your solution is similar to what is there in
> >phoenix already: Local variables and the let/lambda expressions. Your
> >examples will have your exact "better syntax":
> >
> >lambda[ _a = q ] [ _1 + _a /* first "captured" */ ]
> >
> >lambda[ _a = ref(q) ] [ _a++ /* first "captured" */ ]
> >
> >lambda[ _a = cref(q) ] [ _1 + _a /* first "captured" */ ]
> 
> Ah perfect! This is what I'm looking for.
> 
> >> Ok, these ones I didn't implement them, but I can dream.
> >
> >I am dreaming with you! :)
> 
> Let's not dream much longer then ;-)
> We have here people who can do it.
> 
> >By looking at these examples i can not see what's there that is not
> >provided by Phoenix already.
> 
> Then it's perfect and what I want, but I'll need to give it a closer
> look. Mostly I want:
> - return values
> - the ability to decltype anything. For MSM it's about the same as the
> std::set case.
> 
> If you allow a short criticism, I'd say that phoenix is great and
> maybe even offers all I want, but the doc is doing it a great
> disservice. I got so frustrated I started my own implementation and
> seeing my lack of time, it means something ;-)

Maybe you should have asked before you started re-implementing everything ;)

> What I think we need is:
> - more different examples showing what it can do, not the trivial
> examples it currently took over from V2

Phoenix comes with a whole bunch of testcases, which can be seen as
examples as well.

Re: [proto] My own lambda for MSM / wish list

2011-03-14 Thread Thomas Heller
On Sunday, March 13, 2011 10:07:30 PM Christophe Henry wrote:
> Hi all,
> 
> I started a rewrite of a major part of my eUML front-end to fix the
> issues I left open in the current version, and I'm now at the point
> where I've done enough to present what I have and get some opinions on
> the choice I'm facing: continue or switch to phoenix. I remember
> discussing the subject at the last BoostCon and it looked like my
> whishes were not possible, but it was almost one year ago, so I might
> not be up-to-date.
> 
> I admit I'm having a lot of fun implementing this stuff. It's great to
> play with proto :)
> 
> I wanted to start a discussion on this list to get some feedback (and
> fresh ideas) from much more knowledgeable proto experts, so here are
> some thoughts.
> 
> Phoenix is a great library, but I did not find all I was looking for,
> so here is what I'm still missing. I provide an implementation to
> illustrate my wishes.
> 
> 1. Creation of a default-constructible functor for MSM
> 
> This one is my N°1 requirement. I started this stuff for MSM.
> MSM is based on transition tables, each transition being something like:
> 
> Row  < source_state , event ,  target_state, action , guard  >

I suggest you look into how spirit deals with semantic actions; it
reminds me of exactly this:

> With action and guard being a functor, say:
> struct guard{
> template 
> bool operator()(Evt const&, Fsm&, SourceState&,TargetState& )
> {
> ...
> }
> };
> 
> Which MSM then calls, for example: return guard()(evt,fsm,src,tgt);
> Note that I need a default-constructible functor whose type can be
> saved as a template argument.

Phoenix actors are PODs; they are default constructible. Do you have a 
testcase where this is not the case?
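
A short sketch of that property (using C++11 decltype for brevity; the
default construction is the point):

#include <boost/phoenix/core.hpp>
#include <boost/phoenix/operator.hpp>

typedef decltype(boost::phoenix::arg_names::_1
               + boost::phoenix::arg_names::_2) plus_actor;

int test()
{
    plus_actor f = plus_actor(); // default-constructible: the actor is a POD
    return f(40, 2);             // 42
}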

> MSM also offers a "FP" collection of classes to build functors on the
> fly. For example, if I want a more complicated guard, I can write:
> And_< Or_<g1_, g2_>, Not_<g3_> >
> 
> eUML builds on top of this so that I can write: (g1 || g2) && !g3 .
> To achieve this, I use a typeof call of a proto expression for all
> these operators. So it's kind of a compile-time proto grammar,
> generating the above "Row".
> This is already implemented (not very well) in eUML, so I know it
> works but needs improvements.
> 
> 2. STL, type definition
> 
> Ideally, as I'm at it, I want to avoid writing functors for use in the
> STL, which is phoenix stuff. But reading the doc I didn't manage to
> find out if I can write something close to:
> std::set s;
> 
> The re-implementation I'm working on is coming close enough for my taste:
> std::set s2;

As Eric noted, decltype is the answer. I can't see why this should not be 
possible within phoenix.

> 3. STL, algorithms
> 
> I found in phoenix's doc many examples with for_each but not many with
> others like transform, which require something to be returned from a
> lambda expression.
> I want:
> std::transform(vec.begin(),vec.end(),vec.begin(), ++_1 + foo_(_1) );

To answer your question, phoenix actors return their last statement. In your 
case the result of ++_1 + foo_(_1) is returned.

> Ideally, I'd also be able to write:
> 
> std::transform(vec.begin(),vec.end(),vec.begin(),
> if_(_1)[ ++_1,return_(++_1)].else_ [return_(++_1)]  );

Ok, if_(...)[...] and if_(...)[...].else_[...] are statements in phoenix, 
meaning they return nothing (aka void).
The answer to your question is to simply use phoenix::if_else, which is in 
the operator module because it emulates the ?: operator.
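
A short sketch of that suggestion (Phoenix V3 headers assumed):

#include <boost/phoenix/core.hpp>
#include <boost/phoenix/operator.hpp>
#include <algorithm>
#include <vector>

void abs_all(std::vector<int> & vec)
{
    using namespace boost::phoenix;
    using namespace boost::phoenix::arg_names;

    // if_else is an expression emulating ?:, so it has a return value and
    // fits where transform expects one.
    std::transform(vec.begin(), vec.end(), vec.begin(),
        if_else(_1 >= 0, _1, -_1));
}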

> Ok, I didn't succeed yet, but I manage:
> 
> std::transform(vec.begin(),vec.end(),vec.begin(),
> lambda[ if_(_1)[ ++_1, /* return */++_1 ] /*else */  [ ++_1 ]
>  ] );

This is another solution to your problem, yes

> 
> Where there is a kind of implicit return and where ";" is replaced by ","
> : ++_1; return_ (++_1)  => ++_1 , ++_1 /* second ++_1 is returned */
> 
> 
> 4. A taste of C++0x lambda: capturing
> 
> One thing which is bugging me with phoenix is the ref / cref adding
> noise to my nice FP (I'm kind of a new convert). I prefer the C++0x
> solution of "capturing", which looks to me like an extended function
> declaration. For example:
> [x, y](int n) { ...}
> Or:
> [&x, &y](int n) { ... }
> 
> I don't despair to achieve a similar syntax, but for the moment I have
> only: lambda[ /* capture */_capture << q <<... ] [ _1 + _c1 /* first
> "captured" */ ] Or:
> lambda[_capture << ref(q) ] [ _c1++ /* first "captured" */ ]
> Or:
> lambda[_capture << cref(q) ] [ _1 + _c1 /* first "captured" */ ]
> 
> I think a better syntax would be lambda[_c1=q, _c2=4][...]

As Eric already noted, Phoenix captures everything by value. There is no 
special syntax needed to capture other variables in scope (if you want to 
capture them by value, meaning that they will get copied into the phoenix 
expression).
Your solution is similar to what is there in phoenix already: Local 
variables and the let/lambda expressions. Your examples will have your exact
"better syntax":

lambda[ _a = q ] [ _1 + _a /* first "captured" */ ]

lambda[ _a = ref(q) ] [ _a++ /* first "captured" */ ]

lambda[ _a = cref(q) ] [ _1 + _a /* first "captured" */ ]

Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Thomas Heller
Eric Niebler wrote:

> On 12/28/2010 11:43 AM, Thomas Heller wrote:
>> Eric Niebler wrote:
>> 
>>> On 12/28/2010 5:39 AM, Thomas Heller wrote:
>>>> I just saw that you added functional::at.
>>>> I was wondering about the rationale of your decision to make it a
>>>> non-template.
>>>> My gut feeling would have been to have proto::functional::at<N>(seq)
>>>> and not proto::functional::at(seq, N).
>>>
>>> Think of the case of Phoenix placeholders, wherein the index is a
>>> parameter:
>>>
>>>   when< terminal< placeholder<_> >, _at(_state, _value) >
>> 
>> vs:
>> 
>> when< terminal< placeholder<_> >, _at<_value>(_state) >
> 
> Have you tried that? Callable transforms don't work that way. It would
> have to be:
> 
>  lazy<_at<_value>(_state)>
> 
> Blech.

Right ... i keep forgetting about that ...

>>> For the times when the index is not a parameter, you can easily do:
>>>
>>>   _at(_state, mpl::int_<N>())
>> 
>> vs:
>> 
>> _at<mpl::int_<N> >(_state)
>> 
>> just wondering ... the second version looks more "natural" and consistent
> 
> Still think so?


Nope. Let's have it your way :)

One other thing though ...

struct at
{
    BOOST_PROTO_CALLABLE()

    template<typename Sig>
    struct result;

    template<typename This, typename Seq, typename N>
    struct result<This(Seq, N)>
      : fusion::result_of::at<
            typename boost::remove_reference<Seq>::type
          , typename boost::remove_const<
                typename boost::remove_reference<N>::type>::type
        >
    {};

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq, N>::type
    operator ()(Seq &seq, N const & 
        BOOST_PROTO_DISABLE_IF_IS_CONST(Seq)) const
    {
        return fusion::at<N>(seq);
    }

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq const, N>::type
    operator ()(Seq const &seq, N const &) const
    {
        return fusion::at<N>(seq);
    }
};

VS:

struct at
{
    BOOST_PROTO_CALLABLE()

    template<typename Sig>
    struct result;

    template<typename This, typename Seq, typename N>
    struct result<This(Seq, N)>
      : result<This(Seq const &, N)>
    {};

    template<typename This, typename Seq, typename N>
    struct result<This(Seq &, N)>
      : fusion::result_of::at<
            Seq
          , typename proto::detail::uncvref<N>::type
        >
    {};

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq, N>::type
    operator ()(Seq &seq, N const & 
        BOOST_PROTO_DISABLE_IF_IS_CONST(Seq)) const
    {
        return fusion::at<N>(seq);
    }

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq const, N>::type
    operator ()(Seq const &seq, N const &) const
    {
        return fusion::at<N>(seq);
    }
};

I think the second version instantiates fewer templates than the first one.


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Thomas Heller
Eric Niebler wrote:

> On 12/28/2010 5:39 AM, Thomas Heller wrote:
>> I just saw that you added functional::at.
>> I was wondering about the rationale of your decision to make it a
>> non-template.
>> My gut feeling would have been to have proto::functional::at<N>(seq)
>> and not proto::functional::at(seq, N).
> 
> Think of the case of Phoenix placeholders, wherein the index is a
> parameter:
> 
>   when< terminal< placeholder<_> >, _at(_state, _value) >

vs:

when< terminal< placeholder<_> >, _at<_value>(_state) >

> For the times when the index is not a parameter, you can easily do:
> 
>   _at(_state, mpl::int_<N>())

vs:

_at<mpl::int_<N> >(_state)

just wondering ... the second version looks more "natural" and consistent


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Thomas Heller
Eric Niebler wrote:

> Proto ships with a very small collection of callables for use in Proto
> transforms: wrappers for fusion algorithms like reverse and pop_front
> and the like. For 1.46, there will be a few more: make_pair, first,
> second, and wrappers for a few more Fusion algorithms. It's woefully
> incomplete, though.
> 
> I have an idea. Phoenix3 defines *all* these juicy callables under the
> stl/ directory. I can't #include them in Proto because that would create
> a circular dependency. Why don't we just move the definitions of the
> function objects into Proto and make them Proto callables? Phoenix3 can
> just #include 'em and use 'em.
> 
> Thoughts?

I just saw that you added functional::at.
I was wondering about the rationale of your decision to make it a
non-template.
My gut feeling would have been to have proto::functional::at<N>(seq)
and not proto::functional::at(seq, N).


Re: [proto] looking for an advise

2010-12-27 Thread Thomas Heller
Eric Niebler wrote:

> On 12/27/2010 5:26 AM, Joel Falcou wrote:
>> On 27/12/10 11:02, Maxim Yanchenko wrote:
>>> Hi Eric and other gurus,
>>>
>>> Sorry in advance for a long post.
>>>
>>> I'm making a mini-language for message processing in our system.
>>> It's currently implemented in terms of overloaded functions with
>>> enable_if< proto::matches<Expr, SomeGrammar> > dispatching, but now I see that
>>
>> Don't. This increases compile time and provides unclear errors. Accept
>> any kind of expression and use matches in a
>> static_assert with a clear error ID.
>> 
>>> (a) I'm reimplementing Phoenix, which is not yet on Proto in Boost
>>> 1.45.0 (that's
>>> how I found this mailing list). It would be great to reuse what
>>> Phoenix has;
>>
>> Isn't it in trunk already, Thomas?
> 
> No, it's still in the sandbox. Maxim, you can find Phoenix3 in svn at:
> 
> https://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3
> 
> According to Thomas, the docs are still a work in progress, but the code
> is stable.
> 


>> Better see what Thomas has up his sleeve in Phoenix.
> 
> Right. Maxim, you are totally correct that you are reimplementing much
> of Phoenix3, and that the extensibility of Phoenix3 is designed with
> precisely these use cases in mind. The only thing missing are docs on
> the customization points you'll need to hook.
> 
> Thomas, this is a GREAT opportunity to put the extensibility of Phoenix3
> to test. Can you jump in here and comment?

You are right! We designed phoenix3 for exactly this use case!
Unfortunately, all that is undocumented at the moment, but what you describe
should be doable quite easily.
Right now, the only advice I can give is to have a look at all the different
modules that are implemented.
I will reply the minute the docs are finished ... in the meantime I would
be happy to help find a solution to your problem.
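
To make Joel's suggestion above concrete, here is a rough sketch (MyGrammar
and process are made-up names for illustration, and BOOST_MPL_ASSERT_MSG
plays the role of static_assert in C++03):

#include <boost/proto/proto.hpp>
#include <boost/mpl/assert.hpp>

namespace proto = boost::proto;

// made-up grammar for illustration: int + int
struct MyGrammar
  : proto::plus<proto::terminal<int>, proto::terminal<int> >
{};

template <typename Expr>
void process(Expr const &)
{
    // accept any kind of expression, then fail with a clear error ID
    // if it doesn't match the grammar
    BOOST_MPL_ASSERT_MSG(
        (proto::matches<Expr, MyGrammar>::value)
      , EXPRESSION_DOES_NOT_MATCH_MY_GRAMMAR
      , (Expr));
}

int main()
{
    proto::terminal<int>::type i = {0};
    process(i + i);  // OK
    // process(i);   // would trigger the assertion: i alone is not int + int
    return 0;
}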


[proto] phoenix 3 refactoring complete.

2010-12-23 Thread Thomas Heller
Hi,

I just wanted you to know that phoenix3 is in a working state once again.
I refactored everything with the changes we discussed ...
All tests from boost.bind are passing!
Placeholder unification is in place!
Now ... on to documentation writing and checking for BLL compatibility ...

Regards,
Thomas


[proto] fix for gcc-4.2 ICE

2010-12-23 Thread Thomas Heller
Hi,

I recently ran into an ICE while compiling phoenix3 with gcc-4.2. It seems this
particular compiler cannot handle proto::detail::poly_function_traits properly.
The problem is the default initialization of the Switch template parameter: by
replacing "std::size_t Switch = sizeof(test_poly_function<Fun>(0,0))" with
"typename Switch = mpl::size_t<sizeof(test_poly_function<Fun>(0,0))>" the ICE
is gone (the specialisations are adapted as well).
The patch is attached.

Best regards,
Thomas
Index: boost/proto/detail/poly_function.hpp
===
--- boost/proto/detail/poly_function.hpp	(revision 67425)
+++ boost/proto/detail/poly_function.hpp	(working copy)
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include <boost/mpl/size_t.hpp>
 #include 
 #include 
 #include 
@@ -185,7 +186,7 @@
 template<typename T> unknown_function_t test_poly_function(T *, ...);
 
 
-template<typename Fun, typename Sig, std::size_t Switch = sizeof(test_poly_function<Fun>(0,0))>
+template<typename Fun, typename Sig, typename Switch = mpl::size_t<sizeof(test_poly_function<Fun>(0,0))> >
 struct poly_function_traits
 {
     typedef typename Fun::template result<Sig>::type result_type;
@@ -194,7 +195,7 @@
 
 
 template<typename Fun, typename Sig>
-struct poly_function_traits<Fun, Sig, sizeof(mono_function_t)>
+struct poly_function_traits<Fun, Sig, mpl::size_t<sizeof(mono_function_t)> >
 {
 typedef typename Fun::result_type result_type;
 typedef Fun function_type;
@@ -265,7 +266,7 @@
 
 
 template<typename PolyFun, typename Sig>
-struct poly_function_traits<PolyFun, Sig, sizeof(poly_function_t)>
+struct poly_function_traits<PolyFun, Sig, mpl::size_t<sizeof(poly_function_t)> >
 {
     typedef typename PolyFun::template impl function_type;
 typedef typename function_type::result_type result_type;


Re: [proto] expanding Proto's library of callables

2010-12-19 Thread Thomas Heller


Eric Niebler wrote:

> Proto ships with a very small collection of callables for use in Proto
> transforms: wrappers for fusion algorithms like reverse and pop_front
> and the like. For 1.46, there will be a few more: make_pair, first,
> second, and wrappers for a few more Fusion algorithms. It's woefully
> incomplete, though.
> 
> I have an idea. Phoenix3 defines *all* these juicy callables under the
> stl/ directory. I can't #include them in Proto because that would create
> a circular dependency. Why don't we just move the definitions of the
> function objects into Proto and make them Proto callables? Phoenix3 can
> just #include 'em and use 'em.
>
>
>
> Thoughts?

Wonderful idea! I like it.



Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Thomas Heller
Eric Niebler wrote:

> On 12/10/2010 3:23 AM, Thomas Heller wrote:

>> However, the solution I am looking for is different.
>> The sub-domain I tried to define should also extend its super domain, BUT
>> expressions valid in this sub-domain should not be valid in the super
>> domain, only in the sub-domain itself.
> 
> Because you don't want phoenix::_a to be a valid Phoenix expression
> outside of a phoenix::let, right?

Correct.
 
>> I think proto should support both forms of operation. The first one can
>> be easily (more or less) achieved by simply changing proto::matches in
>> the way you demonstrated earlier, I think. I am not sure how to do the
>> other stuff properly though.
> 
> OK, let's back up. Let's assume for the moment that I don't have time to
> do intensive surgery on Proto sub-domains for Phoenix (true). How can we
> get you what you need?

Good question :)
Maybe I can help here once the port is finished for good. Let's
prioritize. Too bad your day job does not involve proto :(

> My understanding of your needs: you want a way to define the Phoenix
> grammar such that (a) it's extensible, and (b) it guarantees that local
> variables are properly scoped. You have been using proto::switch_ for
> (a) and sub-domains for (b), but sub-domains don't get you all the way
> there. Have I summarized correctly?

Absolutely!
 
> My recommendation at this point is to give up on (b). Don't enforce
> scoping in the grammar at this point. You can do the scope checking
> later, in the evaluator of local variables. If a local is not in scope
> by then, you'll get a horrific error. You can improve the error later
> once we decide what the proper solution looks like.

I tried that and failed miserably ... There might be another workaround;
I already have something in mind that I missed earlier.

> If I have mischaracterized what you are trying to do, please clarify.

Nope, I guess this is the way it should be. Please disregard my other mail 
in this thread ;)


Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Thomas Heller


Eric Niebler wrote:

> On 12/8/2010 5:30 PM, Thomas Heller wrote:
>> I don't really know how to do it otherwise with the current design.
>> There is really only this part of the puzzle missing. If it is done, we
>> have a working and clean Phoenix V3.
>> For the time being, I can live with the workaround I did.
>> However, I will focus my efforts of the next days on working out a patch
>> for this to work properly.
> 
> I made a simple fix to proto::matches on trunk that should get you
> moving. The overloads can still be smarter about sub-domains, but for
> now you can easily work around that by explicitly allowing expressions
> in sub-domains in your super-domain's grammar. See the attached solution
> to your original problem.

It solves the problem insofar as it now compiles.
However, there are three problems with that solution:
  1) t2 is matched by grammar1
  2) I have to add the plus rule in grammar2 (this could be solved with the
     grammar parametrisation from my earlier post)
  3) The expression in a subdomain is matched by grammar1 based purely on
     the fact that it is in a subdomain of domain1; it should be matched
     against the subdomain's grammar as well.

Right now, I am questioning the whole deduce_domain part and the
selection of the resulting domain in proto's operator overloads.
Here is what I think should happen (without loss of generality, I
restrict myself to binary expressions):

If the two operands are in the same domain, the situation is clear:
the operands need to match the grammar belonging to the domain, and the
result has to as well.

If one of the operands is in a different domain, the situation gets
complicated. IMHO, the domain of the resulting expression should be
selected differently.
Given is a domain (domain1) which has a certain grammar (grammar1) and a
sub-domain (domain2) with another grammar (grammar2).
When combining two expressions from these domains with a binary op, the
resulting expression should be in domain2.
Why? Because when writing grammar1 there is no way to account for the
expressions which should only be valid in grammar2. With the current
deduce_domain, this is bound to always fail. Additionally, conceptually, it
makes no sense that an expression containing t1 and t2 be in domain1.
When the domains are not compatible (meaning they have no domain/sub-domain
relationship), the resulting domain should be common_domain.

These considerations are based on the assumption that an expression in a
sub-domain should not be matched by the grammar of the super-domain,
which makes sense in the context of the local variables in phoenix.
Remember, local variables shall only be valid when embedded in a let or
lambda expression.
Maybe the sub-domain idea is not suited at all for that task.

OK ... thinking along: the stuff which is already in place, and your
suggested fix, makes sense when seeing sub-domains really as extensions of
the super-domain and its grammar ...

Thoughts?
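
For what it's worth, a possible shape for the sub/super-domain test such a
scheme needs (a sketch only: it assumes every domain exposes the nested
proto_super_domain typedef and that every chain bottoms out at
proto::default_domain, which simplifies the real picture):

#include <boost/mpl/or.hpp>
#include <boost/type_traits/is_same.hpp>
#include <boost/proto/proto.hpp>

template <typename Sub, typename Super>
struct is_subdomain_of
  : boost::mpl::or_<
        boost::is_same<Sub, Super>
        // walk up the (assumed) super-domain chain
      , is_subdomain_of<typename Sub::proto_super_domain, Super>
    >::type
{};

// assumed recursion stop: the chain ends at proto::default_domain
template <typename Super>
struct is_subdomain_of<boost::proto::default_domain, Super>
  : boost::is_same<boost::proto::default_domain, Super>
{};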





Re: [proto] grammars, domains and subdomains

2010-12-09 Thread Thomas Heller
Eric Niebler wrote:

> On 12/8/2010 5:30 PM, Thomas Heller wrote:
>> I don't really know how to do it otherwise with the current design.
>> There is really only this part of the puzzle missing. If it is done, we
>> have a working and clean Phoenix V3.
>> For the time being, I can live with the workaround I did.
>> However, I will focus my efforts of the next days on working out a patch
>> for this to work properly.
> 
> I made a simple fix to proto::matches on trunk that should get you
> moving. The overloads can still be smarter about sub-domains, but for
> now you can easily work around that by explicitly allowing expressions
> in sub-domains in your super-domain's grammar. See the attached solution
> to your original problem.

I am afraid that this does not really solve the problem.

I think there is a misunderstanding in how "sub-domain"ing works.
The solution you propose is that a sub domain extends its super domain in 
the way that the expressions in the sub domain also become valid in the 
super domain. I guess this is the way it should work.
However, the solution I am looking for is different.
The sub-domain I tried to define should also extend its super domain, BUT
expressions valid in this sub-domain should not be valid in the super 
domain, only in the sub-domain itself.

I think proto should support both forms of operation. The first one can be 
easily (more or less) achieved by simply changing proto::matches in the way 
you demonstrated earlier, I think. I am not sure how to do the other stuff
properly though.


Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

> On 12/8/2010 4:44 PM, Thomas Heller wrote:
>> I will try to present a patch. I urgently need this feature to become
>> officially supported, to use it for phoenix3 (a scope opened by let
>> or lambda should be its own sub-domain in order to allow local
>> variables; these shall not be allowed in a "regular" expression, and
>> they should be combinable with other expressions).

Basically, yes.
The other point is to detect if an expression is
nullary. With local variables we have a problem: they are not nullary,
but need a proper environment in order to be evaluated. I only do the
evaluation if the expression (evaluation also means result type calculation;
the problem here is operator()(), as it gets instantiated every time, and I
can't calculate a result type for something like _1 or _a):
  1) matches the grammar (this is without extension)
  2) is nullary (no placeholders in the expression)

So, the sub-domain with sub-grammar part is the easiest and cleanest
solution; I actually looked it up in your old proto prototype.

> 
> IIUC, this is to do "proper" error checking and detect scoping problems
> early. But if you don't have this feature, you can still make properly
> scoped expressions work, right? Can you move ahead with a looser grammar
> and tighten it up later?

I don't really know how to do it otherwise with the current design.
There is really only this part of the puzzle missing. If it is done, we have 
a working and clean Phoenix V3.
For the time being, I can live with the workaround I did.
However, I will focus my efforts of the next days on working out a patch for 
this to work properly.


Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

> On 12/8/2010 5:30 AM, Thomas Heller wrote:
>> Eric Niebler wrote:
>>> On 12/7/2010 2:37 PM, Thomas Heller wrote:
>>>> So, How to handle that correctly?
>>>
>>> Yup, that's a problem. I don't have an answer for you at the moment,
>>> sorry.
>> 
>> I think I solved the problem. The testcase for this solution is attached.
>> Let me restate what I wanted to accomplish:
> 
> 
> Thomas,
> 
> A million thanks for following through. The holidays and my day job are
> taking their toll, and I just don't have the time to dig into this right
> now. It's on my radar, though. I'm glad you have a work-around, but it
> really shouldn't require such Herculean efforts to do this. There are 2
> bugs in Proto:
> 
> 1) Proto operator overloads are too fragile in the presence of
> subdomains. Your line (5) should just work. It seems like a problem that
> Proto is conflating grammars with subdomain relationships the way it is,
> but I really need to sit down and think it through.
> 
> One possible solution is an implicit modification of the grammar used to
> check expressions when the children are in different domains. For
> instance, in the expression "A+B", the grammar used to check the
> expression currently is: common_domain<A, B>::type::proto_grammar.
> Instead, it should be:
> 
>    or_<
>        typename common_domain<A, B>::type::proto_grammar
>      , if_<
>            is_subdomain_of<
>                typename common_domain<A, B>::type
>              , domain_of< _ >
>            >()
>        >
>    >
> 
> That is, any expression in a subdomain of the common domain is by
> definition a valid expression in the common domain. is_subdomain_of
> doesn't exist yet, but it's trivial to implement. However ...
> 
> 2) Using domain_of<_> in proto::if_ doesn't work because grammar
> checking is currently done after stripping expressions of all their
> domain-specific wrappers. That loses information about what domain an
> expression is in. Fixing this requires some intensive surgery on how
> Proto does pattern matching, but I foresee no inherent obstacles.
> 
> It'd be a big help if you could file these two bugs.

I will try to present a patch. I urgently need this feature to become
officially supported, to use it for phoenix3 (a scope opened by let or
lambda should be its own sub-domain in order to allow local variables;
these shall not be allowed in a "regular" expression, and they should be
combinable with other expressions).
Looking at the bug tracker, I find two bugs which are directly related to
this:
https://svn.boost.org/trac/boost/ticket/4675
and
https://svn.boost.org/trac/boost/ticket/4668


Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

> On 12/7/2010 2:37 PM, Thomas Heller wrote:
>> Hi,
>> 
>> I have been trying to extend a domain by "subdomaining" it. The sole
>> purpose of this subdomain was to allow another type of terminal
>> expression.
>> 
>> Please see the attached code, which is a very simplified version of what
>> I was trying to do.
> 
> 
>> So, How to handle that correctly?
> 
> Yup, that's a problem. I don't have an answer for you at the moment,
> sorry.

I think I solved the problem. The testcase for this solution is attached.
Let me restate what I wanted to accomplish:

Given is a domain which serves as a common superdomain for several
different subdomains.
The grammar associated with that domain will serve as the base grammar for
that EDSL.
There might be use cases where certain proto expressions should only be
allowed in a sub-domain, while additionally allowing such expressions to be
mixed with the already defined grammar rules of the super domain.
So, in order to achieve that, the grammar of our common super domain needs
to be parametrized on a Grammar type. This will allow us to reuse that
grammar and extend it with other rules.
The implementation looks like this:

struct super_grammar
  : proto::switch_<super_grammar>
{
    template <typename Tag, typename Grammar = super_grammar>
    struct case_;
};

With the different case_ specializations on Tag we can define what the valid
expressions are, with respect to the Grammar type, which defaults to
super_grammar.

To extend that grammar with additional expressions, without changing what
expressions super_grammar matches, we can do the following:

struct sub_grammar
  : proto::switch_<sub_grammar>
{
    template <typename Tag, typename Grammar = sub_grammar>
    struct case_ : super_grammar::case_<Tag, Grammar>
    {};
};

So far so good. With this technique, every expression which was valid in
super_grammar is now valid in sub_grammar, with the addition of the
extensions.

This might be what people refer to as type-2 grammars.

Now, super_grammar belongs to super_domain and sub_grammar to sub_domain,
which is a sub domain of super_domain.
At the end of the day, I want to mix expressions from super_domain with
expressions from sub_domain.
The default operator overloads are not suitable for this, because the deduced
domain of super_domain and sub_domain is super_domain.
This makes expressions of the form t1 OP t2 invalid (where t1 is from
super_domain and t2 from sub_domain), because t1 OP t2 is not a valid
expression in super_domain. However, it is in sub_domain.
In this case we do not want the deduced domain, but the "strongest" domain
(as I tagged it), which is basically the most specialised domain.

I hope that makes sense.

Regards,

Thomas
#include <boost/proto/proto.hpp>

namespace proto = boost::proto;
namespace mpl = boost::mpl;

typedef proto::terminal<int>::type int_terminal;
typedef proto::terminal<double>::type double_terminal;


template <typename Expr>
struct actor1;

struct grammar1;

// define our base domain
struct domain1
  : proto::domain<
        proto::pod_generator<actor1>
      , grammar1
      , proto::default_domain
    >
{};

// define the grammar for that domain
struct grammar1
  : proto::switch_<grammar1>
{
    // The actual grammar is parametrized on a Grammar parameter as well.
    // This allows us to effectively reuse that grammar in subdomains.
    template <typename Tag, typename Grammar = grammar1>
    struct case_
      : proto::or_<
            proto::plus<Grammar, Grammar>
          , int_terminal
        >
    {};
};

// boring expression wrapper
template <typename Expr>
struct actor1
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor1<Expr>, domain1)
};



template <typename Expr>
struct actor2;

struct grammar2;

// our domain2 ... this is a subdomain of domain1
struct domain2
  : proto::domain<
        proto::pod_generator<actor2>
      , grammar2
      , domain1
    >
{};

// the grammar2
struct grammar2
  : proto::switch_<grammar2>
{
    // again parameterized on a Grammar, defaulting to grammar2
    // This is not really needed here, but allows to reuse that grammar as well
    template <typename Tag, typename Grammar = grammar2>
    struct case_
      : proto::or_<
            // here we go ... reuse grammar1::case_ with our new grammar
            grammar1::case_<Tag, Grammar>
          , double_terminal
        >
    {};
};

// boring expression wrapper
template <typename Expr>
struct actor2
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor2<Expr>, domain2)
};


actor1<int_terminal> t1 = {{42}};
actor2<double_terminal> t2 = {{3.14}};



// specialize the is_extension trait, so actor1 and actor2 don't get picked up
// by proto's default operators.
// otherwise, expressions like t1 + t2 would not work.
namespace boost { namespace proto {
    template <typename Expr>
    struct is_extension<actor1<Expr> > : mpl::false_ {};

    templ

Re: [proto] grammars, domains and subdomains

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

> On 12/7/2010 2:37 PM, Thomas Heller wrote:
>> Hi,
>> 
>> I have been trying to extend a domain by "subdomaining" it. The sole
>> purpose of this subdomain was to allow another type of terminal
>> expression.
>> 
>> Please see the attached code, which is a very simplified version of what
>> I was trying to do.
> 
> 
>> So, How to handle that correctly?
> 
> Yup, that's a problem. I don't have an answer for you at the moment,
> sorry.

This is a real bummer ... I need it for the phoenix local variables :(
I might be able to work something out ... The idea is the following:

template <typename Expr>
struct grammar_of
{
    typedef typename proto::domain_of<Expr>::type domain_type;
    typedef typename domain_type::proto_grammar type;
};

struct grammar
: proto::or_<
  proto::plus<
   grammar_of<_child_c<0> >()
 , grammar_of<_child_c<1> >()
  >
>
{};


This looks just insane though ... But looks like what I want ... need to 
test this properly ...


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

> On 12/7/2010 3:13 PM, Thomas Heller wrote:
>> Eric Niebler wrote:
>>> Now they do: T()(e,s,d). Inside T::impl, D had better be the type of d.
>>> Nowhere does the _data transform appear in this code, so changing _data
>>> to be "smart" about environments and scopes won't save you if you've
>>> monkeyed with the data parameter.
>> 
>> Very true. Something like proto::transform_env_impl could help. Introduce
>> a new type of primitive transform which is aware of this environment. The
>> usual transform_impl can still be used.
>> By calling T()(e,s,d) you just create a 2-tuple: the first element is
>> the state, the second the data.
>> Just thinking out loud here ...
> 
> So transform_impl strips the data parameter of the let scope stuff, and
> the local variables like _a don't use transform_impl and see the scope
> with the locals?
> 
> Well, it's not that simple. Consider:
> 
>   make< int(_a) >
> 
> If make::impl uses transform_impl, it strips the let scope before _a
> gets to see the local variables. If this is to work, most of Proto's
> transforms must be "special" and pass the let scope through. That means
> proto::call, proto::make, proto::lazy and proto::when, at least. But ...
> it just might work. Good thinking!
> 
> 
> 
> No, wait a minute. Look again:
> 
> struct T : transform<T> {
> 
>   template<typename E, typename S, typename D>
>   struct impl : transform_impl<E, S, D> {
> 
> // do something with D
> 
>   };
> 
> };
> 
> T::impl gets passed D *before* transform_impl has a chance to fix it up.
> Any existing transform that actually uses D and assumes it's what they
> passed in is going to be totally hosed. In order to make existing
> transforms let-friendly, they would all need to be changed. That's no
> good.
> 
> Bummer, I was excited for a minute there. :-/

Yes ... this would basically mean a complete rewrite ... no good.
--> Proto V5, you might save the thought for the C++0x rewrite :)


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

> On 12/7/2010 2:58 PM, Thomas Heller wrote:
>> Eric Niebler wrote:
>> 
>>> On 12/7/2010 12:57 PM, Thomas Heller wrote:
>>>>
>>>> *Dough* misunderstanding here. I didn't mean to clean up the phoenix
>>>> scope expressions with the help of proto::let. I was thinking, maybe
>>>> proto::let can borrow something from phoenix scopes on a conceptual
>>>> level.
>>>
>>> Oh, sure. How does Phoenix handle let scopes? Are local variables
>>> statically or dynamically scoped? How is it accomplished?
>> 
>> Phoenix in general has an abstract concept of an environment. This
>> environment is used to store the arguments of a lambda expression in a
>> tuple. This leads to the only requirement this environment must have:
>> access the arguments with a compile-time index in that tuple, using
>> fusion::at_c<N>(env).
>> When having a let statement, a new environment is created which acts as a
>> map-like data structure, indexed by the local names. Additionally the
>> above requirements are fulfilled.
>> Local variables are dynamically scoped. Additionally you can access
>> locals defined in some outer let scope.
>> 
>> This is basically how it is working. I was thinking, maybe, proto can
>> adapt this concept of an environment. This would allow pretty nifty
>> things, like not being tied to only state and data, which really can be a
>> limitation, sometimes. Backwards compatibility can be provided by
>> transparently changing proto::_state and proto::_data to do the right
>> thing.
> 
> That is not sufficient to preserve back-compat. Imagine someone has
> defined a PrimitiveTransform T:
> 
> struct T : transform<T> {
> 
>   template<typename E, typename S, typename D>
>   struct impl : transform_impl<E, S, D> {
> 
> // do something with D
> 
>   };
> 
> };
> 
> Now they do: T()(e,s,d). Inside T::impl, D had better be the type of d.
> Nowhere does the _data transform appear in this code, so changing _data
> to be "smart" about environments and scopes won't save you if you've
> monkeyed with the data parameter.

Very true. Something like proto::transform_env_impl could help. Introduce a 
new type of primitive transform which is aware of this environment. The 
usual transform_impl can still be used.
By calling T()(e,s,d) you just create a 2-tuple: the first element is the
state, the second the data.
Just thinking out loud here ...


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

> On 12/7/2010 12:57 PM, Thomas Heller wrote:
>> 
>> *Dough* misunderstanding here. I didn't mean to clean up the phoenix
>> scope expressions with the help of proto::let. I was thinking, maybe
>> proto::let can borrow something from phoenix scopes on a conceptual
>> level.
> 
> Oh, sure. How does Phoenix handle let scopes? Are local variables
> statically or dynamically scoped? How is it accomplished?

Phoenix in general has an abstract concept of an environment. This 
environment is used to store the arguments of a lambda expression in a 
tuple. This leads to the only requirement this environment must have:
access the arguments with a compile-time index into that tuple, using
fusion::at_c<N>(env).
When having a let statement, a new environment is created which acts as a
map-like data structure, indexed by the local names. Additionally, the above
requirements are fulfilled.
Local variables are dynamically scoped. Additionally you can access locals 
defined in some outer let scope.

This is basically how it works. I was thinking, maybe, proto can adopt
this concept of an environment. This would allow pretty nifty things, like
not being tied to only state and data, which really can be a limitation
sometimes. Backwards compatibility can be provided by transparently changing
proto::_state and proto::_data to do the right thing.
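
As a rough model of what such an environment could look like (assumed shapes,
not the actual Phoenix types): the arguments live in a random-access fusion
sequence, and a let adds a map-like layer indexed by the local names.

#include <boost/fusion/include/vector.hpp>
#include <boost/fusion/include/at_c.hpp>
#include <boost/fusion/include/map.hpp>
#include <boost/fusion/include/make_pair.hpp>
#include <boost/fusion/include/at_key.hpp>

struct a_key {}; // stand-in for a local name such as _a

int main()
{
    namespace fusion = boost::fusion;

    // the base environment: the lambda's arguments
    fusion::vector<int, double> args(1, 2.5);
    int first_arg = fusion::at_c<0>(args);  // the one hard requirement

    // a let environment: locals indexed by name, layered over the arguments
    fusion::map<fusion::pair<a_key, int> > locals(fusion::make_pair<a_key>(42));
    int a = fusion::at_key<a_key>(locals);

    (void)first_arg; (void)a;
    return 0;
}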


[proto] grammars, domains and subdomains

2010-12-07 Thread Thomas Heller
Hi,

I have been trying to extend a domain by "subdomaining" it. The sole purpose 
of this subdomain was to allow another type of terminal expression.
Please see the attached code, which is a very simplified version of what I 
was trying to do.

Ok, so let me try to explain what happens with a small table showing what
works and what doesn't (a fixed-size font helps here).

   expression | compiles? | matches grammar1? | matches grammar2?
   -----------+-----------+-------------------+------------------
1. t1         | yes       | yes               | no
2. t1 + t1    | yes       | yes               | no
3. t2         | yes       | no                | yes
4. t2 + t2    | no        | no                | no
5. t1 + t2    | no        | no                | no

So, 1.-3. are working as expected. 4. and 5. obviously don't work
(operator+ gets SFINAE'd out because of these rules:
https://svn.boost.org/trac/boost/ticket/4668).

So far so good ... Or not.

Let me remind you that I want to get 4. and 5. compiled, matching grammar2, 
but not grammar1.

One attempt would be to change grammar1 to:
struct grammar1
  : proto::or_<
        proto::plus<proto::_, proto::_>
      , int_terminal
    >
{};

This enables operator+ for 4. However, expression 4. then starts to match
both grammars, which is not really what I want.
Expression 5. still fails, and I could not get it to work.

So, how to handle that correctly?
Let me remind you that adding double_terminal to grammar1 is not an option.

Regards,

Thomas

#include <boost/proto/proto.hpp>

namespace proto = boost::proto;

typedef proto::terminal<int>::type int_terminal;
typedef proto::terminal<double>::type double_terminal;

template <typename Expr>
struct actor1;

struct grammar1;

struct domain1
  : proto::domain<
        proto::pod_generator<actor1>
      , grammar1
      , proto::default_domain
    >
{};

struct grammar1
  : proto::or_<
        proto::plus<grammar1, grammar1>
      , int_terminal
    >
{};

template <typename Expr>
struct actor1
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor1<Expr>, domain1)
};

template <typename Expr>
struct actor2;

struct grammar2;

struct domain2
  : proto::domain<
        proto::pod_generator<actor2>
      , grammar2
      , domain1
    >
{};

struct grammar2
  : proto::or_<
        grammar1
      , double_terminal
    >
{};

template <typename Expr>
struct actor2
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor2<Expr>, domain2)
};

actor1<int_terminal> t1 = {{0}};
actor2<double_terminal> t2 = {{0}};


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

> On 12/6/2010 4:50 PM, Thomas Heller wrote:
>> Eric Niebler wrote:
>>> I played with the let transform idea over the weekend. It *may* be
>>> possible to accomplish without the two problems I described above. See
>>> the attached let transform (needs latest Proto trunk). I'm also
>>> attaching the Renumber example, reworked to use let.
> 
>> 
>> Without having looked at it too much ... this looks a lot like the
>> environment in phoenix. Maybe this helps in cleaning it out a bit.
> 
> I tend to doubt it would help clean up the implementation of Phoenix
> environments. These features exist on different meta-levels: one
> (proto::let) is a feature for compiler-construction (Proto), the other
> (phoenix::let) is a language feature (Phoenix). The have roughly the
> same purpose within their purview, but as their purviews are separated
> by one great, big Meta, it's not clear that they have anything to do
> with each other.

*Dough* misunderstanding here. I didn't mean to clean up the phoenix scope 
expressions with the help of proto::let. I was thinking, maybe proto::let 
can borrow something from phoenix scopes on a conceptual level. 


Re: [proto] : Proto transform with state

2010-12-06 Thread Thomas Heller
Eric Niebler wrote:

> On 11/18/2010 3:31 PM, Eric Niebler wrote:
>> On 11/18/2010 1:45 PM, Thomas Heller wrote:
>>> Eric Niebler  writes:
>>>> It's REALLY hard. The let context needs to be bundled with the Expr,
>>>> State, or Data parameters somehow, but in a way that's transparent. I
>>>> don't actually know if it's possible.
>>>
>>> Very hard ... yeah. I am thinking that we can maybe save these variables
>>> in the transform?
>> 
>> I'm thinking we just stuff it into the Data parameter. We have a
>> let_scope template that is effectively a pair containing:
>> 
>> 1. The user's original Data, and
>> 2. A Fusion map from local variables (_a) to values.
>> 
>> The let transform evaluates the bindings and stores the result in the
>> let_scope's Fusion map alongside the user's Data. We pass the let_scope
>> as the new Data parameter. _a is itself a transform that looks up the
>> value in Data's Fusion map. The proto::_data transform is changed to be
>> aware of let_scope and return only the original user's Data. This can
>> work. We also need to be sure not to break the new
>> proto::external_transform.
>> 
>> The problems with this approach as I see it:
>> 
>> 1. It's not completely transparent. Custom primitive transforms will see
>> that the Data parameter has been monkeyed with.
>> 
>> 2. Local variables like _a are not lexically scoped. They are, in fact,
>> dynamically scoped. That is, you can access _a outside of a let<>
>> clause, as long as you've been called from within a let clause.
>> 
>> Might be worth it. But as there's no pressing need, I'm content to let
>> this simmer. Maybe we can think of something better.
> 
> I played with the let transform idea over the weekend. It *may* be
> possible to accomplish without the two problems I described above. See
> the attached let transform (needs latest Proto trunk). I'm also
> attaching the Renumber example, reworked to use let.
> 
> This code is NOT ready for prime time. I'm not convinced it behaves
> sensibly in all cases. I'm only posting it as a curiosity. You're insane
> if you use this in production code. Etc, etc.

Without having looked at it too much ... this looks a lot like the 
environment in phoenix. Maybe this helps in cleaning it out a bit.


Re: [proto] problem with constness of operator return types

2010-12-02 Thread Thomas Heller
Eric Niebler wrote:

> On 12/2/2010 6:51 AM, Thomas Heller wrote:
>> Hi,
>> 
>> I just encountered a somewhat stupid problem. Chances are high that I
>> missed something.
>> 
>> The problem is that proto's default transform cannot handle op_assign
>> correctly. This is due to the fact that operator OP returns a const proto
>> expression, which turns every value type in proto terminals into a const
>> value.
> 
> Actually, it doesn't. See below:



You are right ... the problem vanished...
 
> Proto holds children by reference by default, and respects their
> constness. So what problem are you seeing, exactly?
> 
> I can guess that in Phoenix, you are seeing this problem because we told
> Proto that in the Phoenix domain, children need to be held by value. In
> this case, the top-level const really is a problem.
> 
> What's the right way to fix this? 


No fix needed.

> I don't actually believe this is the right fix. Phoenix stores terminals
> by value *by design*. I argue that "t1 += t2" should NOT compile.
> Consider this Phoenix code:
> 
>   auto t1 = val(9);
>   std::for_each(c.begin(), c.end(), t1 += _1);
> 
> What's the value of t1 after for_each returns? It's 9! The for_each is
> actually mutating a temporary object. The const stuff catches this error
> for you at compile time. You're forced to write:
> 
>   int t1 = 9;
>   std::for_each(c.begin(), c.end(), ref(t1) += _1);
> 
> Now the terminal is held by reference, and it works as expected.
> 
> If you think there are legitimate usage scenarios that are busted by
> this const stuff, please let me know.

Sorry about the noise! There were some references missing in the result type 
calculation.


[proto] problem with constness of operator return types

2010-12-02 Thread Thomas Heller
Hi,

I just encountered a somewhat stupid problem. Chances are high that I
missed something.

The problem is that proto's default transform cannot handle op_assign
correctly. This is due to the fact that operator OP returns a const proto
expression, which turns every value type in proto terminals into a const
value. This means that code like the following doesn't compile:

// following lines copy the literal 9 into the terminal expression:
boost::proto::terminal<int>::type t1 = {9};
boost::proto::terminal<int>::type t2 = {9};

// try to plus assign t2 to t1:
boost::proto::default_context ctx;
boost::proto::eval(t1 += t2, ctx);

This fails due to the following code:
(out of proto/operators.hpp)
template<typename Left, typename Right>
typename boost::proto::detail::enable_binary<
    DOMAIN
  , DOMAIN::proto_grammar
  , BOOST_PROTO_APPLY_BINARY_(TRAIT, Left, Right)
  , TAG
  , Left
  , Right const
>::type const // The const is the problem here.
operator OP(Left &left, Right const &right)
{
    return boost::proto::detail::make_expr_<TAG, DOMAIN, Left &, Right const &>()(left, right);
}

With that type of expression creation, proto's default evaluation fails with 
this error message:
./boost/proto/context/default.hpp:142:13: error: read-only variable is not 
assignable

So far so good ... or not good. This should work!
The problem is that the const-qualified expressions above return a const
terminal value as well.

Changing the code above to the following fixes the problem:
template<typename Left, typename Right>
typename boost::proto::detail::enable_binary<
    DOMAIN
  , DOMAIN::proto_grammar
  , BOOST_PROTO_APPLY_BINARY_(TRAIT, Left, Right)
  , TAG
  , Left
  , Right const
>::type
operator OP(Left &left, Right const &right)
{
    return boost::proto::detail::make_expr_<TAG, DOMAIN, Left &, Right const &>()(left, right);
}

With this change, the OP assignment can still lead to hard errors, for
example with a const-qualified terminal:
boost::proto::terminal<int>::type const t1 = {9};
boost::proto::terminal<int>::type t2 = {9};

// try to plus assign t2 to t1:
boost::proto::default_context ctx;
boost::proto::eval(t1 += t2, ctx);

This leads to the exact same error, which is, imho, the expected behaviour.
Attached is a patch (against current trunk) which fixes these return types.
Maybe there is another solution to that problem ...

Regards,
Thomas
Index: boost/proto/operators.hpp
===
--- boost/proto/operators.hpp	(revision 66967)
+++ boost/proto/operators.hpp	(working copy)
@@ -62,6 +62,10 @@
 template
 struct enable_binary
   : boost::lazy_enable_if_c<
+//Trait::value
+//lazy_matches, Grammar>::value
+//lazy_matches, Grammar>::value
+//lazy_matches, Grammar>::value
 boost::mpl::and_<
 Trait
   , lazy_matches, Grammar>
@@ -105,7 +109,7 @@
   , BOOST_PROTO_APPLY_UNARY_(TRAIT, Arg)\
   , TAG \
   , Arg \
->::type const   \
+>::type   \
 operator OP(Arg &arg BOOST_PROTO_UNARY_OP_IS_POSTFIX_ ## POST)  \
 {   \
 return boost::proto::detail::make_expr_()(arg); \
@@ -118,7 +122,7 @@
   , BOOST_PROTO_APPLY_UNARY_(TRAIT, Arg)\
   , TAG \
   , Arg const   \
->::type const   \
+>::type \
 operator OP(Arg const &arg BOOST_PROTO_UNARY_OP_IS_POSTFIX_ ## POST)\
 {   \
 return boost::proto::detail::make_expr_()(arg);   \
@@ -134,7 +138,7 @@
   , TAG \
   , Left\
   , Right   \
->::type const   \
+>::type 

Re: [proto] : Proto transform with state

2010-11-18 Thread Thomas Heller
Eric Niebler  writes:
> On 11/18/2010 6:09 AM, Thomas Heller wrote:
> > Here goes the renumbering example:
> > http://codepad.org/K0TZamPb

> Unfortunately, this doesn't actually solve the reevaluation problem ...
> it just hides it.

Yes, exactly.

> If "_a" gets replaced eagerly everywhere with
> "Renumber(_, second(proto::_state))", then that transform will actually
> get run every time. What I want is a way to evaluate the transform,
> store the result in a temporary, and have "_a" refer to that temporary.

Yes, I thought of doing this ... but could not find a solution. The question
is, do we really need this behaviour? Is replacement not enough?
Could you make up a use case? I cannot think of one ... Regarding that
counting example ... it can be solved by this pair hack.

> It's REALLY hard. The let context needs to be bundled with the Expr,
> State, or Data parameters somehow, but in a way that's transparent. I
> don't actually know if it's possible.

Very hard ... yeah. I am thinking that we can maybe save these variables in the 
transform?



Re: [proto] : Proto transform with state

2010-11-18 Thread Thomas Heller
Thomas Heller  writes:
> Thomas Heller  writes:
> > Eric Niebler  writes:
> 
> > > Notice that the Renumber algorithm needs to be invoked twice with the
> > > same arguments. In this case, we can avoid the runtime overhead of the
> > > second invocation by just using the type information, but that's not
> > > always going to be the case. There doesn't seem to be a way around it,
> > > either.
> > > 
> > > I think Proto transforms need a "let" statement for storing intermediate
> > > results. Maybe something like this:

> > > 
> > > It's fun to think about this stuff, but I wish it actually paid the bills.
> > 
> > Ok ... I implemented let!
> > 
> > Here goes the renumbering example:
> > http://codepad.org/K0TZamPb
> > 
> > The change is in line 296 rendering RenumberFun to:
> > struct RenumberFun
> >   : proto::fold<
> > _
> >   , make_pair(fusion::vector<>(), proto::_state)
> >   , let<
> > _a(Renumber(_, second(proto::_state)))
> >   , make_pair(
> > push_back(
> > first(proto::_state)
> >   , first(_a)
> > )
> >   , type_of
> > )
> > >
> > >
> > {};
> > 
> > The implementation of let actually was quite easy ... here is how it works:
> > 
> > let is a transform taking definitions of local variables and
> > the transform these locals will get applied to.
> > A local definition is in the form of: LocalName(LocalTransform)
> > If the specified transform has LocalName embedded, it will get replaced by 
> > LocalTransform.
> > I also implemented the definition of more than one local ... this is done by
> > reusing proto::and_:
> > 
> > let<proto::and_<LocalName0(LocalTransform0), ..., LocalNameN(LocalTransformN)>,
> > Transform>
> > The replacement is done from the end to the beginning, making it possible to
> > refer in a LocalTransformN to a LocalNameN-1; this gets replaced
> > automatically!
>
> Ok, the implementation in the previous post had some bugs.
> Here is an updated one: http://codepad.org/ljjBYqHr
> 
> With the relevant example, showing of reuse of locals:
> struct RenumberFun
>   : proto::fold<
> _
>   , make_pair(fusion::vector<>(), proto::_state)
>   , let<
> proto::and_<
> _a(second(proto::_state))
>   , _b(Renumber(_, _a))
> >
>   , make_pair(
> push_back(
> first(proto::_state)
>   , first(_b)
> )
>   , type_of
> )
> >
> >
> {};

Ok ... it was pointed out that you might want to give _a a meaningful name.
So here it goes:

struct RenumberFun
  : proto::fold<
_
  , make_pair(fusion::vector<>(), proto::_state)
  , let<
Renumber(Renumber(_, second(proto::_state)))
  , make_pair(
push_back(
first(proto::_state)
  , first(Renumber)
)
  , type_of
)
>
>
{};

This aliases Renumber to Renumber(_, second(proto::_state)), which is quite
cool. It turns out that you can have any type as the local name ;)






Re: [proto] : Proto transform with state

2010-11-18 Thread Thomas Heller
Thomas Heller  writes:

> 
> Eric Niebler  writes:

> > 
> > Notice that the Renumber algorithm needs to be invoked twice with the
> > same arguments. In this case, we can avoid the runtime overhead of the
> > second invocation by just using the type information, but that's not
> > always going to be the case. There doesn't seem to be a way around it,
> > either.
> > 
> > I think Proto transforms need a "let" statement for storing intermediate
> > results. Maybe something like this:
> > 
> >   struct RenumberFun
> > : proto::fold<
> >   _
> > , make_pair(fusion::vector0<>(), proto::_state)
> > , let<
> >   _a( Renumber(_, second(proto::_state)) )
> > , make_pair(
> >   push_back(
> >   first(proto::_state)
> > , first(_a)
> >   )
> > , type_of
> >   )
> >   >
> >   >
> >   {};
> > 
> > I haven't a clue how this would be implemented.
> > 
> > It's fun to think about this stuff, but I wish it actually paid the bills.
> 
> Ok ... I implemented let!
> 
> Here goes the renumbering example:
> http://codepad.org/K0TZamPb
> 
> The change is in line 296 rendering RenumberFun to:
> struct RenumberFun
>   : proto::fold<
> _
>   , make_pair(fusion::vector<>(), proto::_state)
>   , let<
> _a(Renumber(_, second(proto::_state)))
>   , make_pair(
> push_back(
> first(proto::_state)
>   , first(_a)
> )
>   , type_of
> )
> >
> >
> {};
> 
> The implementation of let actually was quite easy ... here is how it works:
> 
> let is a transform taking definitions of local variables and
> the transform these locals will get applied to.
> A local definition is in the form of: LocalName(LocalTransform)
> If the specified transform has LocalName embedded, it will get replaced by 
> LocalTransform.
> I also implemented the definition of more than one local ... this is done by 
> reusing proto::and_:
> 
> let<proto::and_<LocalName0(LocalTransform0), ..., LocalNameN(LocalTransformN)>,
> Transform>
> The replacement is done from the end to the beginning, making it possible to
> refer in a LocalTransformN to a LocalNameN-1; this gets replaced
> automatically!
> 
> Hope that helps!
> 
> Thomas
> 
> 

Ok, the implementation in the previous post had some bugs.
Here is an updated one: http://codepad.org/ljjBYqHr

With the relevant example, showing of reuse of locals:

struct RenumberFun
  : proto::fold<
_
  , make_pair(fusion::vector<>(), proto::_state)
  , let<
proto::and_<
_a(second(proto::_state))
  , _b(Renumber(_, _a))
>
  , make_pair(
push_back(
first(proto::_state)
  , first(_b)
)
  , type_of
)
>
>
{};





Re: [proto] : Proto transform with state

2010-11-18 Thread Thomas Heller
Eric Niebler  writes:
> 
> On 11/17/2010 2:18 PM, joel falcou wrote:
> > On 17/11/10 19:46, Eric Niebler wrote:
> >> See the attached code. I wish I had a better answer. It sure would be
> >> nice to generalize this for other times when new state needs to bubble
> >> up and back down.
> > 
> > Just chiming in. We had the exact same problem in quaff where we needed to
> > carry a process ID over the transform of a parallel statement. If it can
> > make you worry less, Eric, we ended up with the exact same workaround.
> 
> There's another issue. Look here:
> 
>   // don't evaluate T at runtime, but default-construct
>   // an object of T's result type.
>   template
>   struct type_of
> : proto::make >
>   {};
> 
>   struct RenumberFun
> : proto::fold<
>   _
> , make_pair(fusion::vector0<>(), proto::_state)
> , make_pair(
>   push_back(
>   first(proto::_state)
> //--1
> , first(Renumber(_, second(proto::_state)))
>   )
> //---2
> , type_of
>   )
>   >
>   {};
> 
> Notice that the Renumber algorithm needs to be invoked twice with the
> same arguments. In this case, we can avoid the runtime overhead of the
> second invocation by just using the type information, but that's not
> always going to be the case. There doesn't seem to be a way around it,
> either.
> 
> I think Proto transforms need a "let" statement for storing intermediate
> results. Maybe something like this:
> 
>   struct RenumberFun
> : proto::fold<
>   _
> , make_pair(fusion::vector0<>(), proto::_state)
> , let<
>   _a( Renumber(_, second(proto::_state)) )
> , make_pair(
>   push_back(
>   first(proto::_state)
> , first(_a)
>   )
> , type_of
>   )
>   >
>   >
>   {};
> 
> I haven't a clue how this would be implemented.
> 
> It's fun to think about this stuff, but I wish it actually paid the bills.

Ok ... I implemented let!

Here goes the renumbering example:
http://codepad.org/K0TZamPb

The change is in line 296 rendering RenumberFun to:
struct RenumberFun
  : proto::fold<
_
  , make_pair(fusion::vector<>(), proto::_state)
  , let<
_a(Renumber(_, second(proto::_state)))
  , make_pair(
push_back(
first(proto::_state)
  , first(_a)
)
  , type_of
)
>
>
{};

The implementation of let actually was quite easy ... here is how it works:

let is a transform taking definitions of local variables and 
the transform these locals will get applied to.
A local definition is in the form of: LocalName(LocalTransform)
If the specified transform has LocalName embedded, it will get replaced by 
LocalTransform.
I also implemented the definition of more than one local ... this is done by 
reusing proto::and_:

let<proto::and_<LocalName0(LocalTransform0), ..., LocalNameN(LocalTransformN)>,
Transform>
The replacement is done from the end to the beginning, making it possible to
refer in a LocalTransformN to a LocalNameN-1; this gets replaced automatically!
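
For illustration, a transform with two locals then has roughly this shape
(a sketch only; _a, _b, SomeTransform and UseBoth are placeholder names, and
_b's definition refers back to _a):

struct MyFun
  : let<
        proto::and_<
            _a(proto::_state)           // first local: bound to the state
          , _b(SomeTransform(_, _a))    // second local may refer to the first
        >
      , UseBoth(_b, _a)                 // body sees both locals replaced
    >
{};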

Hope that helps!

Thomas





Re: [proto] : Proto transform with state

2010-11-17 Thread Thomas Heller
Eric Niebler wrote:

> On 11/17/2010 2:18 PM, joel falcou wrote:
>> On 17/11/10 19:46, Eric Niebler wrote:
>>> See the attached code. I wish I had a better answer. It sure would be
>>> nice to generalize this for other times when new state needs to bubble
>>> up and back down.
>> 
>> Just chiming in. We had the exact same problem in quaff where we needed to
>> carry a process ID over the transform of a parallel statement. If it can
>> make you worry less, Eric, we ended up with the exact same workaround.
> 
> There's another issue. Look here:
> 
>   // don't evaluate T at runtime, but default-construct
>   // an object of T's result type.
>   template
>   struct type_of
> : proto::make >
>   {};
> 
>   struct RenumberFun
> : proto::fold<
>   _
> , make_pair(fusion::vector0<>(), proto::_state)
> , make_pair(
>   push_back(
>   first(proto::_state)
> //--1
> , first(Renumber(_, second(proto::_state)))
>   )
> //---2
> , type_of
>   )
>   >
>   {};
> 
> Notice that the Renumber algorithm needs to be invoked twice with the
> same arguments. In this case, we can avoid the runtime overhead of the
> second invocation by just using the type information, but that's not
> always going to be the case. There doesn't seem to be a way around it,
> either.
> 
> I think Proto transforms need a "let" statement for storing intermediate
> results. Maybe something like this:
> 
>   struct RenumberFun
> : proto::fold<
>   _
> , make_pair(fusion::vector0<>(), proto::_state)
> , let<
>   _a( Renumber(_, second(proto::_state)) )
> , make_pair(
>   push_back(
>   first(proto::_state)
> , first(_a)
>   )
> , type_of
>   )
>   >
>   >
>   {};
> 
> I haven't a clue how this would be implemented.
> 
> It's fun to think about this stuff, but I wish it actually paid the
> bills.
>

WOW! A let statement would indeed make proto even more awesome!
Let's see what we can do ;)


Btw, I just finished implementing the unpack feature we were talking about ...
Short description:

Calling some_callable with the expression unpacked:
proto::when

Calling some_callable with an arbitrary sequence unpacked:
proto::when

Calling some_callable with an arbitrary sequence unpacked, applying a proto
transform before:
proto::when


Additionally it is possible to have arbitrary parameters before or after 
unpack. The implementation is located at:
http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/boost/phoenix/core/unpack.hpp

Just a whole mess of preprocessor generation ... this is not really fast to
compile at the moment. A PROTO_MAX_ARITY of 5 is fine; everything above that
will just blow up the compiler :(


Re: [proto] Visitor Design Pattern

2010-10-25 Thread Thomas Heller
On Mon, Oct 25, 2010 at 11:57 AM, Joel de Guzman
 wrote:
> On 10/25/2010 12:39 PM, Eric Niebler wrote:
>>
>> On 10/24/2010 8:32 PM, Joel de Guzman wrote:
>>>
>>> On 10/25/2010 8:49 AM, Eric Niebler wrote:

 Like "visitor", "actor" comes with lots of baggage that we don't want:

    http://en.wikipedia.org/wiki/Actor_model

 Besides, as these things define algorithms, a name that's a verb would
 be better (std::transform, std::accumulate, etc. are verbs).
>>>
>>> That's a totally different synonym. The actor that I meant is
>>> connected to the meaning of "semantic action" and you are using
>>> action/actions anyway (as well as grammars and rules). It's the
>>> same domain.
>>
>> Oh. I didn't make the connection between "actor" and "semantic action".
>> For me, "actor" is so strongly evocative of concurrent programming that
>> I can't read it any other way.
>
> It's the name we use in Spirit for objects that dispatch semantic
> actions. What does an actor do: it executes an action for a
> particular sub-rule (node).
>
 How about "evaluate":

    proto::evaluate()(a);
>>>
>>> Hmmm... :P
>>
>> You're not loving that, either, are you? evaluate_with? with_actions?
>>   They're all terrible. I won't bother the list any more with
>> naming issues. Thanks to everybody who contributed ideas to the design
>> of the-yet-to-be-named-thingie. Especially Thomas.
>
> evaluate or eval in short is ok. I don't dislike it but
> it's not at the top of my list.
>
> Ditto, that's it for me. naming is only productive until a
> specific point. After that, it becomes a bike-shed issue.
> Any name from our list will do just fine. What's important is
> that you've done an amazing work under the hood. So, kudos
> to you and Thomas! hats off, sirs!

Thank you very much! So, are we good to change the internals of
phoenix3 to use this extension mechanism?
Regarding naming:
I like renaming phoenix::actor to phoenix::lambda. But what about the existing
phoenix::lambda? Rename it to protect (from Boost.Lambda)?


Re: [proto] Visitor Design Pattern

2010-10-24 Thread Thomas Heller
On Sun, Oct 24, 2010 at 9:59 AM, Joel de Guzman
 wrote:
> On 10/24/2010 1:16 PM, Eric Niebler wrote:
>>
>> Now, what to call the traversal/algorithm/action/on thingy. None of those
>> feel right. Maybe if I describe in words what it does, someone can come
>> up with a good name. Given a Proto grammar that has been built with
>> named rules, and a set of actions that can be indexed with those rules,
>> it creates a Proto algorithm. The traversal is fixed, the actions can
>> float. It's called.
>
> Actor
>
>  std::cout << actor()(a)  << "\n"; // printing char
>
> :-)

Cool, then we'd need some new name for phoenix, hmn?
I like:
 apply
 actor
 algorithm
 traverse


Re: [proto] Visitor Design Pattern

2010-10-23 Thread Thomas Heller
On Saturday 23 October 2010 19:47:59 Eric Niebler wrote:
> On 10/23/2010 10:45 AM, Thomas Heller wrote:
> > On Saturday 23 October 2010 19:30:18 Eric Niebler wrote:
> >> On 10/23/2010 10:12 AM, Eric Niebler wrote:
> >>> I've tweaked both the traversal example you sent around as well as my
> >>> over toy Phoenix. Tell me what you guys think.
> >> 
> >> Actually, I think it's better to leave the definition of "some_rule"
> >> alone and wrap it in "named_rule" at the point of use. A bit cleaner.
> >> See attached.
> > 
> > I like that.
> > With that named_rule approach, we have some kind of in code
> > documentation: Look, here that rule is a customization point.
> 
> Exactly.
> 
> > Why not just rule? Less characters to type.
> 
> I almost called it "rule", but *everything* in Proto is a rule including
> proto::or_ and proto::switch_. What makes these rules special is that
> they have a name.

True. But you could look at proto::or_ and proto::switch_ or any other
already existing rules as anonymous rules, while rule or named_rule
explicitly names them.


Re: [proto] Visitor Design Pattern

2010-10-23 Thread Thomas Heller
On Saturday 23 October 2010 19:30:18 Eric Niebler wrote:
> On 10/23/2010 10:12 AM, Eric Niebler wrote:
> > I've tweaked both the traversal example you sent around as well as my
> > over toy Phoenix. Tell me what you guys think.
> 
> Actually, I think it's better to leave the definition of "some_rule"
> alone and wrap it in "named_rule" at the point of use. A bit cleaner.
> See attached.

I like that.
With that named_rule approach, we have some kind of in-code documentation:
look, this rule here is a customization point.
Why not just rule? Fewer characters to type.


Re: [proto] Visitor Design Pattern

2010-10-23 Thread Thomas Heller
On Saturday 23 October 2010 07:13:56 e...@boostpro.com wrote:
> Actually, I may have been hasty. I've had a hard time porting my
> mini-Phoenix to proto::algorithm. Thomas, can you, either with your
> version or mine? Proto::switch_ doesn't play nicely with it.

Yes, you are right ... there is a pitfall to that ...
Having the cases look like:

struct cases::case_<some_tag> : some_rule {};

there is no way the grammar can pick up some_rule to index the action.
Possible workarounds I can think of:

struct cases::case_<some_tag> : proto::or_<some_rule> {};

or something like this in the actions:

struct actions::when<some_rule> : do_it {};
I will get to adapting the mini phoenix today ...
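
For illustration, a minimal compilable sketch of the or_ workaround, with
some_tag/some_rule stood in by concrete types (this is not the attached
code):

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

struct some_rule : proto::terminal<int> {};

struct cases
{
    // by default, no tag matches anything
    template <typename Tag>
    struct case_ : proto::not_<proto::_> {};
};

// wrapping the named rule in or_ keeps it visible as a distinct
// alternative that an action table can later be indexed with
template <>
struct cases::case_<proto::tag::terminal>
  : proto::or_<some_rule>
{};

struct grammar : proto::switch_<cases> {};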

> \e
> 
> Sent via tiny mobile device
> 
> -Original Message-
> From: Joel de Guzman 
> Sender: proto-boun...@lists.boost.org
> Date: Sat, 23 Oct 2010 09:29:27
> To: 
> Reply-To: "Discussions about Boost.Proto and DSEL design"
>   
> Subject: Re: [proto] Visitor Design Pattern
> 
> On 10/23/2010 5:36 AM, Eric Niebler wrote:
> > On 10/22/2010 10:45 AM, Eric Niebler wrote:
> >> On 10/22/2010 10:01 AM, Thomas Heller wrote:
> >>> I think this is the simplification of client proto code we searched
> >>> for. It probably needs some minor polish, though.
> >> 
> >> 
> >> 
> >> Hi Thomas, this looks promising. I'm digging into this now.
> > 
> > This is so wonderful, I can't /not/ put this in Proto. I just made a
> > small change to proto::matches to VASTLY simplify the implementation
> > (sync up). I also made a few changes:
> > 
> > - The action CRTP base is no more. You can't legally specialize a
> > member template in a derived class anyway.
> > 
> > - Proto::or_, proto::switch_ and proto::if_ are the only branching
> > grammar constructs and need special handling to find the sub-rule that
> > matched. All the nasty logic for that is now mixed in with the
> > implementation of proto::matches (where the information was already
> > available but not exposed).
> > 
> > - You have the option (but not the obligation) to select a rule's
> > default transform when defining the primary "when" template in your
> > actions class. (You can also defer to another action class's "when"
> > template for inheriting actions, but Thomas' code already had that
> > capability.)
> > 
> > - I changed the name from "_traverse" to "algorithm" to reflect its
> > role as a generic way to build algorithms by binding actions to
> > control flow as specified by grammar rules. I also want to avoid any
> > possible future conflict with Dan Marsden's Traverse library, which I
> > hope to reuse in Proto. That said ... the name "algorithm" sucks and
> > I'd like to do better. Naming is hard.
> > 
> > - The algorithm class is also a grammar (in addition to a transform)
> > that matches its Grammar template parameter. When you pass an
> > expression that does not match the grammar, it is now a precondition
> > violation. That is consistent with the rest of Proto.
> > 
> > That's it. It's simple, elegant, powerful, and orthogonal to and fits
> > in well with the rest of Proto. I think we have a winner. Good job!
> 
> Sweet! It's so deliciously good!
> 
> I too don't quite like "algorithm". How about just simply "action"?
> 
>std::cout << action()(a) << "\n"; // printing char
> 
> or maybe "on":
> 
>std::cout << on()(a) << "\n"; // printing char
> 
> Regards,
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 11:29:07 Joel de Guzman wrote:
> On 10/22/10 4:17 PM, Thomas Heller wrote:
> > On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
> >> On 10/22/2010 12:33 AM, Thomas Heller wrote:
> >>> On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
> >>>> On 10/21/2010 7:09 PM, Joel de Guzman wrote:
> >>>>> Check out the doc I sent (Annex A). It's really, to my mind,
> >>>>> generic languages -- abstraction of rules and templated grammars
> >>>>> through metanotions and hyper-rules.
> >>>> 
> >>>> Parameterized rules. Yes, I can understand that much. My
> >>>> understanding stops when I try to imagine how to build a parser
> >>>> that recognizes a grammar with parameterized rules.
> >>> 
> >>> And I can't understand how expression templates relate to parsing.
> >> 
> >> It doesn't in any practical sense, really. No parsing ever happens in
> >> Proto. The C++ compiler parses expressions for us and builds the tree.
> >> Proto grammars are patterns that match trees. (It is in this sense
> >> they're closer to schemata, not grammars that drive parsers.)
> >> 
> >> They're called "grammars" in Proto not because they drive the parsing
> >> but because they describe the valid syntax for your embedded language.
> > 
> > Ok, this formulation makes it much clearer :)
> 
> It's just the metaphor! And what I'm saying is that you will get into
> confusion land if you mix metaphors from different domains. Proto uses
> the parsing domain and it makes sense (*). It may (and I say may) be
> possible to extend that metaphor and in the end it may be possible
> to incorporate that into proto instead of phoenix (if it is indeed
> conceptually understandable and reusable) --an opportunity that may
> be missed if you shut the door and dismiss the idea prematurely.
> 
> It is OK to switch metaphors and have a clean cut. But again,
> my point is: use only one metaphor. Don't mix and match ad-hoc.
> 
> (* regardless if it doesn't do any parsing at all!)
> 
> >>>>> I have this strong feeling that that's the intent of Thomas and
> >>>>> your recent designs. Essentially, making the phoenix language a
> >>>>> metanotion in itself that can be extended post-hoc through
> >>>>> generic means.
> >>>> 
> >>>> I don't think that's what Thomas and I are doing. vW-grammars
> >>>> change the descriptive power of grammars. But we don't need more
> >>>> descriptive grammars. Thomas and I aren't changing the grammar of
> >>>> Phoenix at all. We're just plugging in different actions. The
> >>>> grammar is unchanged.
> >>> 
> >>> Exactly.
> >>> Though, I think this is the hard part to wrap the head around. We
> >>> have a grammar, and this very same grammar is used to describe
> >>> "visitation".
> >> 
> >> It's for the same reason that grammars are useful for validating
> >> expressions that they are also useful for driving tree traversals:
> >> pattern matching. There's no law that the /same/ grammar be used for
> >> validation and evaluation. In fact, that's often not the case.
> > 
> > True.
> > However it seems convenient to me reusing the grammar you wrote for
> > validating your language for the traversal of an expression matching
> > that grammar.
> > This is what we tried with this rule based dispatching to Semantic
> > Actions. I am currently thinking in another direction, that is
> > separating traversal and grammar again, very much like proto contexts,
> > but with this rule dispatching and describing it with proto transforms
> > ... the idea is slowly materializing in my head ...
> 
> Again I should warn against mixing metaphors. IMO, that is the basic
> problem why it is so deceptively unclear. There's no clear model
> that conceptualizes all this, and thus no way to reason out on
> an abstract level. Not good.

Alright, I think mixing metaphors is indeed a very bad idea.
IMHO, it is best to stay in the grammar-with-semantic-actions domain, as it
always(?) has been.
I racked my brain today and developed a solution which stays within these
very same proto semantics, and reuses "keywords" (more on that) already in
proto.
Attached, you will find the implementation of this very idea.

So, semantic actions in proto are kind of simple right now. They work by
having
   proto::

Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 11:29:07 Joel de Guzman wrote:
> On 10/22/10 4:17 PM, Thomas Heller wrote:
> > On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
> >> On 10/22/2010 12:33 AM, Thomas Heller wrote:
> >>> On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
> >>>> On 10/21/2010 7:09 PM, Joel de Guzman wrote:
> >>>>> Check out the doc I sent (Annex A). It's really, to my mind,
> >>>>> generic languages -- abstraction of rules and templated grammars
> >>>>> through metanotions and hyper-rules.
> >>>> 
> >>>> Parameterized rules. Yes, I can understand that much. My
> >>>> understanding stops when I try to imagine how to build a parser
> >>>> that recognizes a grammar with parameterized rules.
> >>> 
> >>> And I can't understand how expression templates relate to parsing.
> >> 
> >> It doesn't in any practical sense, really. No parsing ever happens in
> >> Proto. The C++ compiler parses expressions for us and builds the tree.
> >> Proto grammars are patterns that match trees. (It is in this sense
> >> they're closer to schemata, not grammars that drive parsers.)
> >> 
> >> They're called "grammars" in Proto not because they drive the parsing
> >> but because they describe the valid syntax for your embedded language.
> > 
> > Ok, this formulation makes it much clearer :)
> 
> It's just the metaphor! And what I'm saying is that you will get into
> confusion land if you mix metaphors from different domains. Proto uses
> the parsing domain and it makes sense (*). It may (and I say may) be
> possible to extend that metaphor and in the end it may be possible
> to incorporate that into proto instead of phoenix (if it is indeed
> conceptually understandable and reusable) --an opportunity that may
> be missed if you shut the door and dismiss the idea prematurely.
> 
> It is OK to switch metaphors and have a clean cut. But again,
> my point is: use only one metaphor. Don't mix and match ad-hoc.
> 
> (* regardless if it doesn't do any parsing at all!)

Makes sense. Letting the idea of two-level grammars sink in ... I still
have trouble adapting it to the parameterized semantic actions solution we
developed.

> >>>>> I have this strong feeling that that's the intent of Thomas and
> >>>>> your recent designs. Essentially, making the phoenix language a
> >>>>> metanotion in itself that can be extended post-hoc through
> >>>>> generic means.
> >>>> 
> >>>> I don't think that's what Thomas and I are doing. vW-grammars
> >>>> change the descriptive power of grammars. But we don't need more
> >>>> descriptive grammars. Thomas and I aren't changing the grammar of
> >>>> Phoenix at all. We're just plugging in different actions. The
> >>>> grammar is unchanged.
> >>> 
> >>> Exactly.
> >>> Though, I think this is the hard part to wrap the head around. We
> >>> have a grammar, and this very same grammar is used to describe
> >>> "visitation".
> >> 
> >> It's for the same reason that grammars are useful for validating
> >> expressions that they are also useful for driving tree traversals:
> >> pattern matching. There's no law that the /same/ grammar be used for
> >> validation and evaluation. In fact, that's often not the case.
> > 
> > True.
> > However it seems convenient to me reusing the grammar you wrote for
> > validating your language for the traversal of an expression matching
> > that grammar.
> > This is what we tried with this rule based dispatching to Semantic
> > Actions. I am currently thinking in another direction, that is
> > separating traversal and grammar again, very much like proto contexts,
> > but with this rule dispatching and describing it with proto transforms
> > ... the idea is slowly materializing in my head ...
> 
> Again I should warn against mixing metaphors. IMO, that is the basic
> problem why it is so deceptively unclear. There's no clear model
> that conceptualizes all this, and thus no way to reason out on
> an abstract level. Not good.

Agree.
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
> On 10/22/2010 12:33 AM, Thomas Heller wrote:
> > On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
> >> On 10/21/2010 7:09 PM, Joel de Guzman wrote:
> >>> Check out the doc I sent (Annex A). It's really, to my mind,
> >>> generic languages -- abstraction of rules and templated grammars
> >>> through metanotions and hyper-rules.
> >> 
> >> Parameterized rules. Yes, I can understand that much. My
> >> understanding stops when I try to imagine how to build a parser
> >> that recognizes a grammar with parameterized rules.
> > 
> > And I can't understand how expression templates relate to parsing.
> 
> It doesn't in any practical sense, really. No parsing ever happens in
> Proto. The C++ compiler parses expressions for us and builds the tree.
> Proto grammars are patterns that match trees. (It is in this sense
> they're closer to schemata, not grammars that drive parsers.)
> 
> They're called "grammars" in Proto not because they drive the parsing
> but because they describe the valid syntax for your embedded language.

Ok, this formulation makes it much clearer :)

> >>> I have this strong feeling that that's the intent of Thomas and
> >>> your recent designs. Essentially, making the phoenix language a
> >>> metanotion in itself that can be extended post-hoc through
> >>> generic means.
> >> 
> >> I don't think that's what Thomas and I are doing. vW-grammars
> >> change the descriptive power of grammars. But we don't need more
> >> descriptive grammars. Thomas and I aren't changing the grammar of
> >> Phoenix at all. We're just plugging in different actions. The
> >> grammar is unchanged.
> > 
> > Exactly.
> > Though, I think this is the hard part to wrap the head around. We
> > have a grammar, and this very same grammar is used to describe
> > "visitation".
> 
> It's for the same reason that grammars are useful for validating
> expressions that they are also useful for driving tree traversals:
> pattern matching. There's no law that the /same/ grammar be used for
> validation and evaluation. In fact, that's often not the case.

True.
However, it seems convenient to me to reuse the grammar you wrote for
validating your language for the traversal of an expression matching that
grammar.
This is what we tried with this rule-based dispatching to Semantic Actions.
I am currently thinking in another direction, that is, separating traversal
and grammar again, very much like proto contexts, but with this rule
dispatching and describing it with proto transforms ... the idea is slowly
materializing in my head ...
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
> On 10/21/2010 7:09 PM, Joel de Guzman wrote:
> > Check out the doc I sent (Annex A). It's really, to my mind,
> > generic languages -- abstraction of rules and templated grammars
> > through metanotions and hyper-rules.
> 
> Parameterized rules. Yes, I can understand that much. My understanding
> stops when I try to imagine how to build a parser that recognizes a
> grammar with parameterized rules.

And I can't understand how expression templates relate to parsing.

> > I have this strong feeling that
> > that's the intent of Thomas and your recent designs. Essentially,
> > making the phoenix language a metanotion in itself that can be
> > extended post-hoc through generic means.
> 
> I don't think that's what Thomas and I are doing. vW-grammars change the
> descriptive power of grammars. But we don't need more descriptive
> grammars. Thomas and I aren't changing the grammar of Phoenix at all.
> We're just plugging in different actions. The grammar is unchanged.

Exactly.
Though, I think this is the hard part to wrap the head around. We have a 
grammar, and this very same grammar is used to describe "visitation".
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-21 Thread Thomas Heller
On Friday 22 October 2010 04:09:52 Joel de Guzman wrote:
> On 10/22/2010 10:01 AM, Eric Niebler wrote:
> >> If you want to go "meta" on parsing, then you might
> >> get some inspiration on 2-level grammars (inspired by van Wijngaarden
> >> 
> >> grammars) with the notion of hyper-rules, etc. This document:
> >>   http://www.cl.cam.ac.uk/~mgk25/iso-14977.pdf
> >> 
> >> gives a better glimpse into 2-level grammars (see Annex A).
> >> 
> >>"Although the notation (also known as a van Wijngaarden grammar,
> >>or a W-grammar) is more powerful, it is more complicated and,
> >>as the authors of Algol 68 recognized, “may be difficult
> >>for the uninitiated reader”.
> >> 
> >> I'm not really sure how this relates to the current design, but
> >> I think we should be getting closer to this domain and it deserves
> >> some notice.
> > 
> > You're not the first to bring up vW-grammars in relation to Proto.
> > Someone suggested them to implement EDSL type systems. I spent a good
> > amount of time reading about them, and couldn't get my head around it. My
> > understanding is that they're powerfully descriptive, but that building
> > compilers with vW-grammars is very expensive. I don't really know. I
> > think I'd need to work with a domain expert to make that happen. Any
> > volunteers? :-)
> 
> Heh, people make it look needlessly more complicated than it really is :-)
> Check out the doc I sent (Annex A). It's really, to my mind,
> generic languages -- abstraction of rules and templated grammars
> through metanotions and hyper-rules. I have this strong feeling that
> that's the intent of Thomas and your recent designs. Essentially,
> making the phoenix language a metanotion in itself that can be
> extended post-hoc through generic means.

Hmm. I don't understand these two-level grammars either.
Heck, I am not even sure where the "parsing" part happens inside the
expression template engine.
Thus I cannot say if this really was my intent ;)

I was trying to focus more on how the language, once parsed (whatever that
means) into some kind of AST, can be processed.

I think we agree on what the AST of a proto expression looks like.
Additionally, it is clear how to check the AST for validity.
What is left to determine is a general way to traverse this AST.
I think the notion of Semantic Actions (SA) is ill-formed here as well,
since SAs are used during the parsing process. IMHO, this process is over
once we have retrieved the full AST.
To put it into the context of phoenix, we have

template <typename Expr> struct actor { operator()(...) { ... return
eval(*this, ...); } };

Inside operator() we have the AST, which is represented as a type (the
actor) and the *this object. By calling eval, we traverse the AST to
evaluate it with one specific behavior (some notion of C++ behavior).

In traditional compilers this is done with the Visitor Pattern.
This last part is IMHO currently missing (though not entirely) in proto. In
proto, you define what happens if some rule matched, which is of course more
of a semantic action than a visitor.

My intention is to traverse the proto tree multiple times and apply various
cool transformations to it, which ultimately lead to evaluation.
IMHO, this traversal can best be expressed in terms of the Visitor Pattern.
I understand, however, that it is hard to get the right formal description
for it in proto. This might have several reasons. One that I can think of
is that proto grammars are used as two kinds of things:
  1) A description of what the language looks like, i.e. what a valid
expression is
  2) The definition of a "dynamic" node type we would like to "visit" (to
stay in the OOP parlance) -- see the sketch below
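
A small sketch of both uses with one grammar (standard Proto; the Eval
calculator shape is mine, purely for illustration, using C++11 auto for
brevity):

#include <boost/proto/proto.hpp>
#include <iostream>
namespace proto = boost::proto;

// one grammar, used 1) to validate and 2) to drive evaluation
struct Eval
  : proto::or_<
        proto::when< proto::terminal<int>, proto::_value >
      , proto::when< proto::plus<Eval, Eval>, proto::_default<Eval> >
    >
{};

int main()
{
    proto::terminal<int>::type a = {40};
    auto expr = a + 2;

    // use 1: validity check via proto::matches
    std::cout << proto::matches<decltype(expr), Eval>::value << "\n"; // 1
    // use 2: traversal/evaluation via the attached transforms
    std::cout << Eval()(expr) << "\n"; // 42
}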
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-20 Thread Thomas Heller
On Thu, Oct 21, 2010 at 7:50 AM, Thomas Heller
 wrote:
> On Thursday 21 October 2010 05:11:49 Eric Niebler wrote:
>> On 10/20/2010 7:49 AM, Thomas Heller wrote:

>> > Here it goes:
>> >     namespace detail
>> >     {
>> >         template <
>> >             typename Grammar, typename Visitor, typename IsRule = void>
>> >         struct algorithm_case
>> >           : Grammar
>> >         {};
>>
>> Why inherit from Grammar here instead of:
>>
>>   : proto::when<
>>         Grammar
>>       , typename Visitor::template visit<Grammar>
>>     >
>>
>> ?
>
> Because I wanted to have an "escape" point. There might be some valid
> use case that does not want to dispatch to the Visitor/Actions. This is btw
> the reason I didn't reuse or_, but introduced the rules template: to
> distinguish between 1) "regular" proto grammars --> no dispatch, and 2) the
> rules, which do the dispatch.

Ok ... after rereading your mini phoenix you solve that problem with
default_actions.
Very neat as well!
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-20 Thread Thomas Heller
On Thursday 21 October 2010 05:11:49 Eric Niebler wrote:
> On 10/20/2010 7:49 AM, Thomas Heller wrote:
> > I worked a little on trying to simplify that whole
> > grammar-with-rules-that-have-actions thing a bit. Forgive me, but I
> > changed the name to Visitor. Why? Simply because I think this is what is
> > done here. We visit a specific node which happened to match our rule.

> Most tree traversal algorithms "visit" each node, but the term "Visitor"
> means something very, very specific: the Gang-of-Four Visitor Design
> Pattern.

We have a tree traversal. *check*
We want to "visit" patterns which represent nodes in our tree. *check*

> It implies an OO hierarchy,

We don't have an OO hierarchy by definition. We have a heterogeneous tree of
proto expressions, which is something similar. The Gang of Four didn't know
about template metaprogramming.
Suggested line of thought:
 - The proto::expr/proto::basic_expr template is our base class.
 - Subclasses are determined by:
   * tag
   * number of children
   * some specific attributes of children

> a virtual "Dispatch" member that accepts a visitor

By defining a rule in our proto grammar, we "add the virtual dispatch member" 
to a subclass of our hierarchy.

> and a 2nd dispatch to an "Accept" member of the
> visitor object.

This would be: when<rule, typename Visitor::template visit<rule> >

> It is used to "add" members to an OO hierarchy post-hoc
> by putting them in a visitor instead of having to change every type in
> your hierarchy.

Right, with the above attempt at adapting the Gang-of-Four pattern, this is
exactly what we do. It is simply impossible for users to "add" members to 
proto::expr/proto::basic_expr.
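
For reference, the plain-C++ pattern being mapped here -- nothing
proto-specific, just the textbook shape with its two dispatches (a minimal
sketch):

struct IntNode;
struct PlusNode;

struct Visitor
{
    virtual ~Visitor() {}
    virtual void visit(IntNode const&) = 0;
    virtual void visit(PlusNode const&) = 0;
};

struct Node
{
    virtual ~Node() {}
    // 1st dispatch: a virtual member accepting a visitor
    virtual void accept(Visitor&) const = 0;
};

struct IntNode : Node
{
    // 2nd dispatch: overload resolution on the static type of *this
    void accept(Visitor& v) const { v.visit(*this); }
};

struct PlusNode : Node
{
    void accept(Visitor& v) const { v.visit(*this); }
};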
 
> If you don't have something like that, then don't call it Visitor. If
> you do, then we need to reformulate this design to make that pattern
> more explicit. Otherwise, it will cause no end of confusion.

What about the above attempt?
 
> Historical note: I originally called the "data" parameter of Proto
> transforms "visitors". During the review, there was such a hue and cry
> (for good reason) that I had to change it. True story.

Wow! FWIW, the name "data" is far better ;)

> > Here it goes:
> > namespace detail
> > {
> >     template <
> >         typename Grammar, typename Visitor, typename IsRule = void>
> >     struct algorithm_case
> >       : Grammar
> >     {};
> 
> Why inherit from Grammar here instead of:
>
>   : proto::when<
>         Grammar
>       , typename Visitor::template visit<Grammar>
>     >
>
> ?

Because I wanted to have an "escape" point. There might be some valid
use case that does not want to dispatch to the Visitor/Actions. This is btw
the reason I didn't reuse or_, but introduced the rules template: to
distinguish between 1) "regular" proto grammars --> no dispatch, and 2) the
rules, which do the dispatch.


> It took a while, but I see what you're up to. You've pushed complexity
> into the generic "algorithm" class. (Not the greatest name, but I can't
> do better at the moment.) The benefit here is the cleaner separation
> between rules and actions, and the nicer syntax for specifying the rules
> associated with a tag. E.g.:
> 
>   rules<A, B, C>
> 
> where A, B, and C are simple Proto rules without semantic actions,
> instead of:
> 
>   proto::or_<
>       rules_with_actions<A>
>     , rules_with_actions<B>
>     , rules_with_actions<C>
>   >
> 
> ... which conflates grammar with actions in an unpleasant way. That's a
> significant improvement. I ported my mini-Phoenix to use this, and I
> like it. (See attached.)

Exactly!

> Now the outstanding question is: does this control flow really mirror
> the Visitor design pattern, and if so can we jigger this to more closely
> match that pattern? If so, we could make this easier to use and
> understand.

See above.
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-20 Thread Thomas Heller
On Wednesday 20 October 2010 21:24:49 joel falcou wrote:
> Using Thomas' code and my own functor class, I designed a new
> computation transform, but it fails to dispatch.
Actually, it is Eric's code and idea we are using. So it is he who deserves
the credit for it ;)
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix3] New design proposal

2010-10-20 Thread Thomas Heller
On Wednesday 20 October 2010 05:19:17 Joel de Guzman wrote:
> On 10/20/2010 12:08 AM, Eric Niebler wrote:
> > On 10/19/2010 1:33 AM, Thomas Heller wrote:
> >> On Tue, Oct 19, 2010 at 6:21 AM, Joel de Guzman wrote:
> >>> Can we also focus on one very specific use-case that demonstrates
> >>> the motivation behind the need of such a refactoring and why the
> >>> old(er) design is not sufficient? I'd really want to sync up with
> >>> you guys.
> >> 
> >> With the old design (the one which is currently in the gsoc svn
> >> sandbox) I had problems with defining what phoenix expressions really
> >> are. We had at least two types of expressions. First were the ones we
> >> reused from proto (plus, multiplies, function and so on), Second were
> >> these proto::function constructs which had a funcwrap  struct and
> >> an env placeholder. This env placeholder just wastes a valuable slot
> >> for potential arguments. The second point why this design is not
> >> good, is that data and behaviour is not separated. The T in funcwrap
> >> defines how the phoenix expression will get evaluated.
> >> 
> >> This design solves this two problems: Data and behaviour are cleanly
> >> separated. Additionally we end up with only one type of expressions:
> >> A expression is a structure which has a tag, and a variable list of
> >> children. You define what what a valid expression is by extending the
> >> phoenix_algorithm template through specialisation for your tag. The
> >> Actions parameter is responsible for evaluating the expression. By
> >> template parametrisation of this parameter we allow users to easily
> >> define their own evaluation schemes without worrying about the
> >> validity of the phoenix expression. This is fixed by the meta grammar
> >> class.
> > 
> > What Thomas said. We realized that for Phoenix to be extensible at the
> > lowest level, we'd need to document its intermediate form: the Proto
> > tree. That way folks have the option to use Proto transforms on it.
> > (There are higher-level customization points that don't expose Proto,
> > but I'm talking about real gear-heads here.)
> > 
> > There were ugly things about the intermediate form we wanted to clean
> > up before we document it. That started the discussion. Then the
> > discussion turned to, "Can a user just change a semantic action here
> > and there without having to redefine the whole Phoenix grammar in
> > Proto, which is totally non-trivial?" I forget offhand what the use
> > case was, but it seemed a reasonable thing to want to do in general.
> > So as Thomas says, the goal is two-fold: (a) a clean-up of the
> > intermediate form ahead of its documentation, and (b) a way to easily
> > plug in user-defined semantic actions without changing the grammar.
> > 
> > I think these changes affect the way to define new Phoenix syntactic
> > constructs, so it's worth doing a before-and-after comparison of the
> > extensibility mechanisms. Thomas, can you send around such a
> > comparison? How hard is it to add a new statement, for instance?
> 
> Yes, exactly, that's what I want. Anyway, while I'd still want to see
> this, I looked at the code and I like it, except for some nits here
> and there (especially naming). More on that later.

I am very interested to hear your naming suggestions. In the meantime, I
haven't sat still.

Before discussing the extension mechanism for introducing new phoenix types,
I want to take another step to justify why I think this design decision is a
good idea, based on a hopefully good analogy. I will then explain the
different steps necessary, comparing both the old mechanism and the new one
against that analogy.
Consider compilers for regular languages like C/C++/Java/you name it. How do
they work? Well, at first, the textual representation is somehow transformed
into an AST. This AST is built up basically by the rules of the grammar of
that language. The AST is one form of data the compiler works on.
After the AST creation, the compiler usually wants to apply some other
transformation to that tree. This can easily be done by using the visitor
pattern. This means everything a compiler does is expressed in some form of
visitors. The visitor pattern is perfectly applicable here because a
language has a fixed set of language constructs, and if someone adds new
constructs, all visitors have to be adapted.
So far so good.
"How do phoenix and proto fit into that picture?" you might ask.
Phoenix is the language that we are trying to build, and proto is the
lang

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-20 Thread Thomas Heller
On Wednesday 20 October 2010 15:02:01 Eric Niebler wrote:
> On 10/14/2010 12:27 PM, Eric Niebler wrote:
> 
> 
> > - A new set of actions can be created easily by delegating
> > 
> >   to MyActions::action by default, and specializing only those
> >   rules that need custom handling.
> 
> The code I sent around actually falls short on this promise. Here's an
> updated design that corrects the problem. The key is realizing that the
> actions need to be parameterized by ... the actions. That way, they can
> be easily subclassed.
> 
> That is, we now have this:
> 
> // A collection of semantic actions, indexed by phoenix grammar rules
> struct PhoenixDefaultActions
> {
>     template<typename Rule, typename Actions>
>     struct action;
> };
> 
> struct MyActions
> {
>     // Inherit default behavior from PhoenixDefaultActions
>     template<typename Rule>
>     struct action
>       : PhoenixDefaultActions::action<Rule, MyActions>
>     {};
> 
>     // Specialize action<> for custom handling of certain
>     // grammar rules here...
> };
> 
> If you don't ever pass MyActions to PhoenixDefaultActions, then any
> specializations of MyActions::action will never be considered.

Good catch! I worked a little on trying to simplify that whole
grammar-with-rules-that-have-actions thing a bit. Forgive me, but I changed
the name to Visitor. Why? Simply because I think this is what is done here:
we visit a specific node which happened to match our rule.

Here it goes:
namespace detail
{
    template <
        typename Grammar, typename Visitor, typename IsRule = void>
    struct algorithm_case
      : Grammar
    {};

    template <
        typename Rule, typename Visitor, int RulesSize = Rule::size>
    struct algorithm_case_rule;

    template <typename Rule, typename Visitor>
    struct algorithm_case_rule<Rule, Visitor, 1>
      : proto::when<
            typename Rule::rule0
          , typename Visitor::template visit<typename Rule::rule0>
        >
    {};

    // add more ...

    template <typename Rule, typename Visitor>
    struct algorithm_case<Rule, Visitor, typename Rule::is_rule>
      : algorithm_case_rule<Rule, Visitor>
    {};

    // algorithm is what ...
    template <typename Cases, typename Visitor>
    struct algorithm
      : proto::switch_< algorithm<Cases, Visitor> >
    {
        template <typename Tag>
        struct case_
          : algorithm_case<
                typename Cases::template case_<Tag>
              , Visitor
            >
        {};
    };

    template <typename Grammar>
    struct rule
    {
        typedef void is_rule;

        static int const size = 1;
        typedef Grammar rule0;
    };

    template <
        typename Grammar0 = void
      , typename Grammar1 = void
      , typename Grammar2 = void
      , typename Dummy = void>
    struct rules;

    template <typename Grammar>
    struct rules<Grammar, void, void, void>
    {
        typedef void is_rule;

        static int const size = 1;
        typedef Grammar rule0;
    };

    // add more ...
}

Making your example:

// A collection of actions indexable by rules.
struct MyActions
{
    template<typename Rule>
    struct action;
};

// An easier way to dispatch to a tag-specific sub-grammar
template<typename Tag>
struct MyCases
  : proto::not_< proto::_ >
{};

struct MyCasesImpl
{
    template<typename Tag>
    struct case_
      : MyCases<Tag>
    {};
};

// Define an openly extensible grammar using switch_
template<typename Actions>
struct MyGrammar
  : detail::algorithm< MyCasesImpl, Actions >
{};

// Define a grammar rule for int terminals
struct IntTerminalRule
  : proto::terminal<int>
{};

// Define a grammar rule for char terminals
struct CharTerminalRule
  : proto::terminal<char>
{};

// OK, handle the terminals we allow:
template<>
struct MyCases<proto::tag::terminal>
  : proto::rules<
        IntTerminalRule
      , CharTerminalRule
    >
{};

// Now, populate the MyActions metafunction class
// with the default actions:

template<>
struct MyActions::action< IntTerminalRule >
  : DoIntAction
{};

template<>
struct MyActions::action< CharTerminalRule >
  : DoCharAction
{};
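
Hypothetical usage, assuming DoIntAction and DoCharAction are ordinary
transforms (say, proto::_value) -- not part of the machinery above, just to
show the call side:

proto::terminal<int>::type i = {42};

// switch_ dispatches on the terminal tag, the rule list selects
// IntTerminalRule, and MyActions::action<IntTerminalRule> runs:
int x = MyGrammar<MyActions>()(i); // x == 42 if DoIntAction is proto::_value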
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix3] New design proposal

2010-10-19 Thread Thomas Heller
On Wednesday 20 October 2010 05:19:17 Joel de Guzman wrote:
> On 10/20/2010 12:08 AM, Eric Niebler wrote:
> > On 10/19/2010 1:33 AM, Thomas Heller wrote:
> >> On Tue, Oct 19, 2010 at 6:21 AM, Joel de Guzman wrote:
> >>> Can we also focus on one very specific use-case that demonstrates
> >>> the motivation behind the need of such a refactoring and why the
> >>> old(er) design is not sufficient? I'd really want to sync up with
> >>> you guys.
> >> 
> >> With the old design (the one which is currently in the gsoc svn
> >> sandbox) I had problems with defining what phoenix expressions really
> >> are. We had at least two types of expressions. First were the ones we
> >> reused from proto (plus, multiplies, function and so on); second were
> >> these proto::function constructs which had a funcwrap<T> struct and
> >> an env placeholder. This env placeholder just wastes a valuable slot
> >> for potential arguments. The second reason why this design is not
> >> good is that data and behaviour are not separated. The T in funcwrap<T>
> >> defines how the phoenix expression will get evaluated.
> >> 
> >> This design solves these two problems: data and behaviour are cleanly
> >> separated. Additionally we end up with only one type of expression:
> >> an expression is a structure which has a tag and a variable list of
> >> children. You define what a valid expression is by extending the
> >> phoenix_algorithm template through specialisation for your tag. The
> >> Actions parameter is responsible for evaluating the expression. By
> >> template parametrisation of this parameter we allow users to easily
> >> define their own evaluation schemes without worrying about the
> >> validity of the phoenix expression. This is fixed by the meta grammar
> >> class.
> > 
> > What Thomas said. We realized that for Phoenix to be extensible at the
> > lowest level, we'd need to document its intermediate form: the Proto
> > tree. That way folks have the option to use Proto transforms on it.
> > (There are higher-level customization points that don't expose Proto,
> > but I'm talking about real gear-heads here.)
> > 
> > There were ugly things about the intermediate form we wanted to clean
> > up before we document it. That started the discussion. Then the
> > discussion turned to, "Can a user just change a semantic action here
> > and there without having to redefine the whole Phoenix grammar in
> > Proto, which is totally non-trivial?" I forget offhand what the use
> > case was, but it seemed a reasonable thing to want to do in general.
> > So as Thomas says, the goal is two-fold: (a) a clean-up of the
> > intermediate form ahead of its documentation, and (b) a way to easily
> > plug in user-defined semantic actions without changing the grammar.
> > 
> > I think these changes affect the way to define new Phoenix syntactic
> > constructs, so it's worth doing a before-and-after comparison of the
> > extensibility mechanisms. Thomas, can you send around such a
> > comparison? How hard is it to add a new statement, for instance?
> 
> Yes, exactly, that's what I want. Anyway, while I'd still want to see
> this, I looked at the code and I like it, except for some nits here
> and there (especially naming). More on that later.

Yep, naming was one of the major problems I had ;)

I don't have much time now. But here is a side-by-side comparison of the
old and new design:
http://www.scribd.com/doc/39713335/Design-ion

When looking at it, please keep in mind that with the new design, we can
reuse for_expr in a proto grammar. Additionally, we can reuse it in any
extendable Actions class, while in the old design everything is wrapped
behind a proto::function<funcwrap<T>, env>. Reusing it, for example
for generating debug output, is not directly supported by the old design,
while in the new design we have a unified, documented way on how to interact
with our phoenix expression AST. Additionally, keep in mind that in the old
design, everything basically was a phoenix expression. The grammar wasn't
really defining what can be used and what not, while in the new design we
explicitly say: here is the expression, this is how we want to use it.

Sorry, I am in kind of a hurry right now.
Hope that helped this far.

Thomas
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix3] New design proposal

2010-10-19 Thread Thomas Heller
On Tue, Oct 19, 2010 at 6:21 AM, Joel de Guzman
 wrote:
> On 10/19/2010 3:25 AM, Thomas Heller wrote:
>>
>> Hi all,
>>
>> Based on recent discussions [1] I want to propose a new design for the
>> intermediate structure of phoenix3.
>>
>> This email will be the attempt to both explain the design, and formalize
>> what phoenix is, and what phoenix expressions are.
>> An implementation can be found at [2]. Please feel free to comment on the
>> concepts and design decision that were made. I urge you to read through
>> [1]
>> to better understand the motivation behind the need of such a refactoring.
>
> Can we also focus on one very specific use-case that demonstrates the
> motivation behind the need of such a refactoring and why the old(er)
> design is not sufficient? I'd really want to sync up with you guys.

With the old design (the one which is currently in the gsoc svn sandbox) I
had problems with defining what phoenix expressions really are. We had at
least two types of expressions. First were the ones we reused from proto
(plus, multiplies, function and so on); second were these proto::function
constructs which had a funcwrap<T> struct and an env placeholder. This env
placeholder just wastes a valuable slot for potential arguments.
The second reason why this design is not good is that data and behaviour are
not separated. The T in funcwrap<T> defines how the phoenix expression will
get evaluated.

This design solves these two problems: data and behaviour are cleanly
separated. Additionally we end up with only one type of expression: an
expression is a structure which has a tag and a variable list of children.
You define what a valid expression is by extending the phoenix_algorithm
template through specialisation for your tag.
The Actions parameter is responsible for evaluating the expression.
By template parametrisation of this parameter we allow users to easily
define their own evaluation schemes without worrying about the validity of
the phoenix expression. This is fixed by the meta grammar class.
I can imagine a lot of use cases that benefit from this feature. Let me
list a few here:
   - Multi-stage programming: evaluate the phoenix expression to another
     language that can be compiled by some external compiler. The prime
     example I imagine for this is that someone picks that topic up and
     writes a shader DSL based on phoenix, reusing the already existing
     phoenix constructs.
   - Debugging: in order to debug a phoenix expression, we certainly do not
     want to evaluate the phoenix expression in a C++ context, but probably
     into some kind of string, giving detailed information about certain
     intrinsics of that expression.
   - Optimizers: with the help of this Actions parameter, it almost gets
     trivial to write optimization passes that work on phoenix expressions.
     I think this is worth exploring, because an optimizer working on these
     high-level expressions has way more information than, for example, the
     GIMPLE representation of GCC.

HTH,
Thomas
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] [phoenix3] New design proposal

2010-10-18 Thread Thomas Heller
On Tue, Oct 19, 2010 at 12:33 AM, Eric Niebler  wrote:
> On 10/18/2010 12:25 PM, Thomas Heller wrote:
>> Hi all,
>>
>> Based on recent discussions [1] I want to propose a new design for the
>> intermediate structure of phoenix3.
> 
>
>> I hope this explanation attempt wasn't too confusing.
>> Feel free to ask and please do comment :)
>
> Awesome. It sounds just like the solution you and I thrashed out here a
> few days ago. Have there been any changes? Non-obvious roadblocks you've
> run into?

Yes, exactly the solution we discussed. Other than renaming some
constructs, there are no changes.
No problems I have run into so far. I don't think there will be any.
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


[proto] [phoenix3] New design proposal

2010-10-18 Thread Thomas Heller
Hi all,

Based on recent discussions [1] I want to propose a new design for the 
intermediate structure of phoenix3.

This email is an attempt to both explain the design and formalize what
phoenix is and what phoenix expressions are.
An implementation can be found at [2]. Please feel free to comment on the
concepts and design decisions that were made. I urge you to read through [1]
to better understand the motivation behind the need for such a refactoring.
I am not very happy with the names of most of the classes I implemented. 
Suggestions welcome!
Please also note that this will only affect the "low level" design (as in
how to make proto work as we want ;)). The "high level" API as documented in
[3] will not be affected by these changes.

At the very heart of the design are phoenix_algorithm and meta_grammar:

template <typename Tag, typename Actions>
struct phoenix_algorithm;

template <typename Actions>
struct meta_grammar;

The responsibility of meta_grammar is to dispatch, based on the expression
tag, to phoenix_algorithm. phoenix_algorithm then specifies what expression
is valid for the given tag (this is called a rule) and binds the Action to
this rule. This is done through template specializations and matching
against proto grammars. This mechanism is also our first customization point
in the design, useful for defining new phoenix expressions (see the sketch
below).
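
The dispatch just described might look roughly like this (the shape is
guessed from the description above, not taken from [2]):

template <typename Actions>
struct meta_grammar
  : proto::switch_< meta_grammar<Actions> >
{
    // dispatch on the expression tag ...
    template <typename Tag>
    struct case_
      : phoenix_algorithm<Tag, Actions> // ... to the rule/action binding
    {};
};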

Inside the core, terminals are already being handled.
Terminals are:
   1) regular C++ objects, stored by value
   2) C++ values stored with some wrapper around them (e.g.
boost::reference_wrapper)
   3) placeholders

The first point is simply implemented with a plain proto::terminal<_> and
needs no further attention; phoenix::val is implemented with the help of
this.
The second point is actually more interesting, as it is our next
customization point: based on a hook-able is_custom_terminal trait, users
can register whatever wrapper they have. This allows transparent handling of
these wrappers inside of phoenix.
Last but not least are the placeholders. This design finally solves the
placeholder unification problem by deciding, via a newly introduced
boost::is_placeholder trait, whether a given terminal value is a
placeholder! This way, any placeholder meeting phoenix' requirements can be
used inside phoenix now! The other way round is not in the scope of
phoenix.
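
Sketched out, the two trait hooks could look like this (the
is_custom_terminal shape is my guess at the proposal; boost::is_placeholder
is the real Boost trait that bind's placeholders already use):

#include <boost/ref.hpp>
#include <boost/mpl/bool.hpp>
#include <boost/is_placeholder.hpp>

// hook for wrapped values: false by default, specialized per wrapper
template <typename T>
struct is_custom_terminal : boost::mpl::false_ {};

template <typename T>
struct is_custom_terminal< boost::reference_wrapper<T> >
  : boost::mpl::true_ {};

// hook for placeholders: any type with a non-zero boost::is_placeholder
// value can act as a phoenix placeholder
struct my_placeholder {};

namespace boost
{
    template <>
    struct is_placeholder<my_placeholder>
    {
        enum _vt { value = 1 };
    };
}
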
With these tools in our box, we are able to define a very powerful EDSL.
Please note that until now I have only explained the data part of this
design; that means everything explained above has no specific behavior
attached.
After setting up all we need with these constructs (oh, please do not forget
all the tools proto already provides), the only thing we can say is whether
or not a given expression is a valid phoenix lambda expression. By default,
everything but the terminals described above is disabled, and can be enabled
on a need-by-need basis.

The next thing worth explaining is the Action template parameter to 
phoenix_algorithm and meta_grammar. With this parameter, the evaluation of 
our phoenix expression is controlled and customizable.
As part of the core, the evaluator Actions are provided and defined for the
above-mentioned terminal rules, adding one more customization point:
unwrap_custom_terminal, which needs to be specialized on the wrapper to
handle it transparently.

With the help of these customization points it will be a pleasure to fulfill
all the promised goodies [4] and [5].

I hope this explanation attempt wasn't too confusing.
Feel free to ask and please do comment :)


[1] http://thread.gmane.org/gmane.comp.lib.boost.proto/160
[2] http://github.com/sithhell/boosties/tree/master/phoenix
[3] 
https://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/index.html
[4] http://boost-spirit.com/home/2010/07/23/phoenix3-gsoc-project/
[5] http://cplusplus-soup.com/2010/07/23/lisp-macro-capability-in-c/
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-18 Thread Thomas Heller
On Saturday 16 October 2010 04:10:23 Eric Niebler wrote:
> On Thu, Oct 14, 2010 at 10:46 PM, Thomas Heller
> 
> wrote:
> > So, one of your points of criticism was that my solution was overly
> > verbose.
> > I like to throw this ball back at you. I think both solutions are quite
> > verbose. I guess verbosity is what it takes to get a solution meeting
> > the requirements.
> 
> You're right, my solution is verbose, too. The problem is the
> open-extensibility requirement. Proto::or_ is concise. The equivalent
> proto::switch_ is verbose. There aren't many constructs in C++ that are
> openly extensible. Namespaces, overload sets and template
> specializations. (Others?) Only template specializations fit the bill,
> and specializing a template takes a lot of typing. And if you want the
> set of transforms to be also openly extensible and independent of the
> grammars, that's twice the boilerplate. Seems unavoidable.
> 
> I'm interested in finding some nice wrapper that presents a cleaner
> interface. Maybe even macros could help.

Yes, this is a hard problem to tackle.

Anyway ... I started the new prototype. I finished the core.
I added customization points for placeholders and terminals.
I have no time to explain what I have done so far, but I want to share it if
you want to have a preview:
http://github.com/sithhell/boosties/tree/master/phoenix/

Will share more this evening.
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Phoenix3 at BoostCon? (Fwd: [CfP] 5th Annual BoostCon, Aspen (CO, USA), May 15-20, 2011)

2010-10-16 Thread Thomas Heller
On Saturday 16 October 2010 20:42:46 Eric Niebler wrote:
> On 10/16/2010 7:51 AM, Hartmut Kaiser wrote:
> > Please distribute.
> > --
> > 
> > 5TH ANNUAL BOOST CONFERENCE 2011
> > Aspen CO, USA, May 15 - 20, 2011, www.boostcon.com
> > 
> > CALL FOR PARTICIPATION
> 
> 
> 
> IMO, Phoenix3 is one of the most important Boost developments over the
> past year. There should unquestionably be a presentation at BoostCon
> about it. I think I'll go, and would at the very least like to help. Is
> anybody else going, and are they interested in collaborating?
> 
> There is obviously lots to talk about. Choosing a direction would be
> tough, but I think it should focus the things that are new in v3 over
> v2. I think an end-user-centric talk would be more valuable than a talk
> about implementation details (despite how much fun it would be to give
> such a talk). So I'm thinking about a talk on AST manipulation a-la
> Scheme macros. There is evidence that there is already excitement about
> this topic.[*] Thoughts?

This would be awesome! I would be happy to help here. I think I will not be
able to come, since it is just too expensive for a student from Europe
:(

> [*] http://cplusplus-soup.com/2010/07/23/lisp-macro-capability-in-c/
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-16 Thread Thomas Heller
On Saturday 16 October 2010 03:45:45 Eric Niebler wrote:
> On Fri, Oct 15, 2010 at 1:35 AM, Thomas Heller
> 
> wrote:
> > On Wednesday 13 October 2010 07:10:15 Eric Niebler wrote:
> > > I had a really hard time grokking unpack. In Proto, function-types
> > > are used to represent function invocation and object construction.
> > > The type "X(A,B,C)" means either call "X" with three arguments, or
> > > construct "X" with three arguments. In "unpack< X(A, B, C) >" X does
> > > *not* get called with three arguments. It gets called like "X(
> > > A(_0), A(_1), A(_2), ...B, C )", where _0, _1 are the child nodes of
> > > the current expression. This will confuse anyone familiar with
> > > Proto. I'd like to replace this with something a little more
> > > transparent.
> > 
> > Ok, i put to together an alternative version. This hooks onto the
> > already in
> > place transform mechanism and works like this:
> > Whenever the first argument to a transform is proto::vararg<Fun>, it
> > expands the child expressions and applies the Fun transform before the
> > actual transform gets called, example:
> > 
> > struct test
> >   : proto::when<
> >         proto::function< proto::vararg<proto::_> >
> >       , foo(vararg<proto::_>)
> >     >
> > {};
> > 
> > this will call foo(a0, a1 ... aN) where aX are the children of the
> > function expressions. Note that you can replace proto::_ by any
> > transform you like. Thus making the call look like:
> > foo(Fun(a0), Fun(a1), Fun(a2) ...)
> 
> Ah, clever! I like it. I'd like to see this generalized a bit though. For
> instance, if the transform were instead "foo(_state, vararg<_>)" the
> intention is clear, but it doesn't sound like your implementation would
> handle this.

Well, it could be done. Given that vararg<Fun> expands to all children of
the current expression, this is possible, though it would add to compile
time. The current implementation is O(N^2); allowing vararg to be placed
anywhere in the transform call is tempting, but it would introduce O(N^3)
preprocessor iterations.
Let me add here that I am planning to write a boost.wave driver which
allows preprocessing headers (as MPL already does), expanding BOOST_PP
iteration constructs up to a certain N. I believe this kind of preprocessing
will bring down compile times of proto, fusion and phoenix. (There must be a
reason why MPL does it, right?)

> Actually, the semantics get somewhat tricky. And I fear it
> would lead folks to believe that what gets unpacked by the vararg in the
> transform corresponds exactly to the children that were matched by the
> vararg in the grammar. That's not quite the case, is it?

No, that is not the case. Let me remind you of the implementation of the 
pass_through transform for nary_expr or function. The semantics are almost 
exactly the same. The little difference you have with vararg being part of 
pass_through is that vararg is the last argument, expanding to the remaining 
children.
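
For comparison, vararg's established grammar-side use, which is standard
Proto (my example):

// matches a function expression of any arity: first child anything,
// remaining children anything
struct AnyFunction
  : proto::function< proto::_, proto::vararg<proto::_> >
{};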

> How about this ... an "unpack" placeholder that works essentially how
> your vararg does, but with any Fusion sequence (including Proto
> expressions), and the sequence has to be explicitly specified (like the
> first parameter to proto::fold).
> 
> So, instead of foo(vararg<_>), you'd have foo(unpack(_)), where _ means
> "the sequence to unpack", not "the transform to apply to each element".
> If you want to apply a transform to each element first, you could
> specify the transform as a second parameter: foo(unpack(_, Fun)). (That
> form would probably use fusion::transform_view internally.)
> 
> The implementation of this would be very tricky, I think, because unpack
> could appear in any argument position,not just the first. And you'd have
> to be careful about compile times. But a fun little problem!

This sounds interesting as well, but I don't think adding another
placeholder really helps people in that case.
 
>> Additionally, you can add whatever arguments you like to the transform.
>  
> Which transform? Example?

The foo transform. You can call it with whatever arguments you like, just 
like a regular transform.
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-15 Thread Thomas Heller
On Wednesday 13 October 2010 07:10:15 Eric Niebler wrote:
> On 10/4/2010 11:51 PM, Thomas Heller wrote:
> > Eric Niebler wrote:
> >> On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller



> >Note: I created an unpack transform which is boilerplate code for the
> >  extraction of the children of the current node. So what it
> >  does is that it calls the proto::callable passed as first
> >  template parameter and applies an optional transform to
> >  the children prior to passing them to the callable;
> >  additionally it is able to forward the data and state to the
> >  transform
> 
> I had a really hard time grokking unpack. In Proto, function-types are
> used to represent function invocation and object construction. The type
> "X(A,B,C)" means either call "X" with three arguments, or construct "X"
> with three arguments. In "unpack< X(A, B, C) >" X does *not* get called
> with three arguments. It gets called like "X( A(_0), A(_1), A(_2), ...B,
> C )", where _0, _1 are the child nodes of the current expression. This
> will confuse anyone familiar with Proto. I'd like to replace this with
> something a little more transparent.

Ok, I put together an alternative version. This hooks onto the already in
place transform mechanism and works like this:
Whenever the first argument to a transform is proto::vararg<Fun>, it expands
the child expressions and applies the Fun transform before the actual
transform gets called. Example:

struct test
  : proto::when<
        proto::function< proto::vararg<proto::_> >
      , foo(vararg<proto::_>)
    >
{};

this will call foo(a0, a1 ... aN) where aX are the children of the function
expression. Note that you can replace proto::_ by any transform you like,
thus making the call look like:
foo(Fun(a0), Fun(a1), Fun(a2) ...)
Additionally, you can add whatever arguments you like to the transform.
I think this comes closer to what the user expects, and is more transparent.

The implementation can be found at:
http://github.com/sithhell/boosties/blob/master/proto/boost/proto/transform/when_vararg.hpp
A testcase can be found at:
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/when_vararg.cpp
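
For context, foo here is meant to be an ordinary proto::callable that
receives the (already transformed) children as separate arguments; roughly
(my sketch, not the code from the links above):

struct foo : proto::callable
{
    typedef int result_type;

    // one overload per arity, as usual for a proto::callable
    template <typename A0>
    int operator()(A0 const&) const { return 1; }

    template <typename A0, typename A1>
    int operator()(A0 const&, A1 const&) const { return 2; }
};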
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-14 Thread Thomas Heller
Eric Niebler wrote:

> On 10/13/2010 11:08 PM, Thomas Heller wrote:
>> Let me try again ...
>> I tried to solve the problem that when writing a proto grammar+transform
>> you might want to dispatch your transform based on tags.
> 
> You mean expression tags, right? First time I read this, I accepted this
> statement as fact. Now I wonder.

Right, when I say tags, I refer to expression tags. Sorry for the
confusion.

> *Why* do you want to dispatch your transforms based on tags? You're
> thinking of proto::switch_, which dispatches grammars based on
> expression tags, right? But it doesn't follow that transforms are best
> dispatched on expression tags, too.

You are totally right here. Dispatching transforms *only* on expression tags 
is not enough.

> We are close to the bare
> requirements, but we're not there yet. In my understanding, the
> requirements are as I stated before (and which you agreed with):
> 
> A) Have an openly extensible grammar.
> B) Have an equally extensible set of transforms.
> C) Be able to substitute out a whole other (extensible) set of transforms.
> 
> I have an alternate solution that meets these requirements and doesn't
> dispatch to transforms based on expression tags. It dispatches to
> transforms using grammar rules. This has a number of nice properties.
> Keep reading...

I was waiting for it, as i saw the deficiencies of my design as well. I just 
couldn't wrap my head around another solution. The idea of dispatching on 
rules is great. I tend to forget that proto grammars are types as well :).

>> Additionally you would
>> like to replace that particular transform which evaluates your expression
>> by another one.
>> proto::switch_ allows you to dispatch grammar + transform based on a
>> specific tag already, I think we agree on that.
>> The major usecase i had in mind for this was phoenix.
>> First use case is, when defining some generic grammar like this:
>> 
>> struct phoenix_cases
>> {
>>   template <typename Tag>
>>   struct case_
>>     : proto::otherwise<phoenix_evaluator<Tag>(proto::_, proto::_state)>
>> /*call the phoenix evaluator, how? My proposed solution is through tag
>> dispatching! */
>>   {};
>> };
>> 
>> struct phoenix_grammar
>>   : switch_<phoenix_cases>
>> {};
>> 
>> So far so good. I hope we agree that this might be a good way to define
>> the phoenix grammar.
> 
> My solution works differently. Which one is better is still an open
> question.

So, one of your points of criticism was that my solution was overly verbose. 
I'd like to throw that ball back at you: I think both solutions are quite 
verbose. I guess verbosity is what it takes to get a solution meeting the 
requirements.

>> The way to further define what a valid phoenix grammar can
>> look like is through specialising phoenix_cases::case_ (*).
>> Ok, I think this is a good solution already.
>> 
>> But now, think about, that people might want to evaluate a phoenix
>> expression differently!
>> One example to this is that i want to generate a valid C++ string out of
>> my phoenix expression, save it for later and compile it again with my
>> native C++ compiler. I might even want to be able to evaluate a phoenix
>> expression to openCL/cuda/glsl/hlsl whatever. Another thing i might want
>> to add to the evaluation process is autoparallelization. I hope you agree
>> that these might be valid usecases!
>> 
>> With the definition of the phoenix grammar above, this is not easily
>> possible! I would have to repeat the phoenix grammar all over again
>> (maybe this is the point where i am totally off the mark).
> 
> I'm with you here.
> 
>> The next, logical (IMHO) step is to parameterize our phoenix_grammar to
>> allow an arbitrary transform to be dispatched on the tag basis (Again,
>> there might be a different solution, I can't see any though).
> 
> Dispatch to transforms on grammar rules.

Great!

>> Note: This is the step where the name "Visitor" came in, because it is
>> what it does: it visits an expression and transforms it, somehow.
>> 
>> So, the design will change to this:
>> 
>> template <template <typename> class Visitor>
>> struct phoenix_cases
>> {
>>   template <typename Tag>
>>   struct case_
>>     : proto::otherwise<Visitor<Tag>(proto::_, proto::_state)>
>> /*call the phoenix evaluator, how? My proposed solution is through tag
>> dispatching! */
>>   {};
>> };
>> 
>> template <template <typename> class Visitor>
>> struct phoenix_grammar
>>   : switch_<phoenix_cases<Visitor> >
>> {};
>> 
>> // bring our default evaluator back into the game:

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-13 Thread Thomas Heller
On Thursday 14 October 2010 08:29:00 Eric Niebler wrote:
> On 10/13/2010 11:08 PM, Thomas Heller wrote:
> > On Wednesday 13 October 2010 22:46:40 Eric Niebler wrote:
> >> On 10/13/2010 11:54 AM, Thomas Heller wrote:
> >>> On Wednesday 13 October 2010 20:15:55 Eric Niebler wrote:
> > 
> > See my other post about comments on this, I think we agree on the rest.
> > The only point where we are diverging right now seems to be that "visitor"
> > I proposed. See another attempt to justify it, and possibly explain it
> > better, below.
> 
> 
> 
> Very quickly before I turn in for the night ... this goes a long way to
> making it clearer. Thanks. I'm going to take my own crack at this
> problem and see if I end up in the same place you did. Then we can
> compare/contrast.

I think having an alternate solution is one way to go. I am very curious to 
see what you will come up with :)

> Oh, and I think you're right about proto::make_expr. It cannot make
> nullary expressions with tag types other than tag::terminal. Sorry. :-P

*Digging into how proto::literal works*


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-13 Thread Thomas Heller
On Wednesday 13 October 2010 22:46:40 Eric Niebler wrote:
> On 10/13/2010 11:54 AM, Thomas Heller wrote:
> > On Wednesday 13 October 2010 20:15:55 Eric Niebler wrote:
> >> Possibly. But now, you're explicitly constructing a terminal with a
> >> reference_wrapper as a value. Proto doesn't know how to treat
> >> reference_wrapper when evaluating Phoenix expression because you
> >> haven't told it how. You'll need to fit this into your evaluation
> >> strategy somehow. It will be a good test to see how truly extensible
> >> your proposed mechanism is.
> > 
> > Ok, i have to admit, it gets a little messy here. We probably would
> > like to specialize our generic evaluator based on the terminal tag.
> > Which takes care of: a) regular terminals (as in captured by value) b)
> > things like reference wrapper c) placeholders as you suggested them.
> > I don't have a good solution to that problem. Anyone else?
> 
> Not sure yet. This needs thought.
> 
> > The only thing i can think of right now is to introduce a special tag
> > for a reference. It would be great if we could make proto::terminals
> > with different tags, or let's not call them terminals but nullary
> > expressions.
> 
> Proto already lets you define nullary expressions with custom tags.
> Might be handy here.
> 
> 

Ok, this looks promising. I haven't been able to construct nullary 
expressions with proto::make_expr. So i assumed it was not possible :)
Reinvestigating. I think this might be a good solution to phoenix::value and 
phoenix::reference.
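
For reference, a minimal sketch of such a nullary expression with a custom 
tag, spelled directly with proto::expr instead of make_expr (the tag and 
the names here are made up for illustration):

    namespace proto = boost::proto;

    struct reference_tag {};   // a custom expression tag

    // a nullary, terminal-like expression holding a reference
    template <typename T>
    struct reference_expr
    {
        typedef proto::expr<reference_tag, proto::term<T&>, 0> type;
    };

    // int i = 0;
    // reference_expr<int>::type r = {i};  // initialized like any terminal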

> >> Simple. They become:
> >>
> >>   typedef
> >>   actor<proto::terminal<placeholder<0> >::type>
> >>   arg1_type;
> >>
> >>   arg1_type const arg1 = {};
> >> 
> >> And then your default evaluator changes to handle placeholders:
> >>
> >>   template <typename Tag>
> >>   struct generic_evaluator
> >>     : proto::or_<
> >>           proto::when<
> >>               proto::terminal<placeholder<proto::_> >
> >>             , fusion_at(proto::_value, proto::_state)
> >>           >
> >>         , proto::_default<>
> >>       >
> >>   {};
> >> 
> >> or something. If placeholders are part of the core, there's nothing
> >> stopping you from doing this. You can also handle reference_wrappers
> >> here, but that feels like a hack to me. There should be a way for
> >> end-users to customize how Phoenix handles terminal types like
> >> reference_wrapper.
> > 
> > The problem i am having with this approach to tackle placeholders is that i
> > still can't see how this would solve the grand "placeholder
> > unification" problem
> 
> There are two aspects of "placeholder unification":
> 
> 1) Using phoenix placeholders in other EDSLs.
> 2) Using placeholders from other EDSLs in Phoenix.
> 
> (1) is solved by making the phoenix domain a sub-domain of
> proto::default_domain. (2) is not yet solved, but can be trivially done
> so by changing the above evaluator to:
> 
> struct Placeholder
>   : proto::when<
>         proto::and_<
>             proto::terminal<proto::_>
>           , proto::if_<boost::is_placeholder<proto::_value>()>
>         >
>       , fusion_at(proto::_value, proto::_state)
>     >
> {};
> 
> template <typename Tag>
> struct generic_evaluator
>   : proto::or_<
>         Placeholder
>       , proto::_default<>
>     >
> {};
> 
> Note the use of some hook-able boost::is_placeholder trait.
> 
> 

Right, that makes sense!

For the rest, see the other post.
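
For what it's worth, hooking that is_placeholder trait would presumably 
look like this on the user's side (my_ph is hypothetical; the trait itself 
lives in boost/is_placeholder.hpp):

    #include <boost/is_placeholder.hpp>

    struct my_ph {};   // some other EDSL's placeholder type

    namespace boost
    {
        // opt in: report my_ph as the first placeholder
        template <>
        struct is_placeholder<my_ph>
        {
            enum { value = 1 };
        };
    }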


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-13 Thread Thomas Heller
On Thursday 14 October 2010 01:58:16 Eric Niebler wrote:
> On 10/4/2010 12:20 PM, Thomas Heller wrote:
> > On Mon, Oct 4, 2010 at 8:53 PM, joel falcou wrote:
> Joel, I don't recall this particular problem or being unable to solve it
> with Proto transforms.
> 
> > The split example was one of the motivating examples, that is
> > correct, though it suffers from the exact points Eric is criticizing. The
> > split example was possible because i added some new transforms which
> > proto currently misses, but i didn't want to shoot out all my
> > ammunition just yet :P
> > 
> > But since you ask for it:
> > 
> > http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/splitter.cpp
> > 
> > the new thing i added is transform_expr, which works like
> > fusion::transform: It creates the expression again, with transformed
> > child nodes (the child nodes get transformed according to the
> > transform you specify as template parameter
>
> If the theory is that this particular problem cannot be implemented with
> existing Proto grammars and transforms, then I submit the attached
> solution as contrary evidence. I'll admit that it doesn't have the
> property that the transforms can be substituted post-hoc into an
> existing Proto algorithm, but is that really necessary? There is no
> grammar to speak of; *every* expression is a valid split expression.

Sorry, i think i wasn't clear enough:
The splitter i wrote was _not_ a good demonstration of the visitor. As you 
demonstrated, it is much easier to follow with regular grammar + transforms!

> This solution is much, much simpler than Thomas' solution that uses a
> visitor. Heck, it took me longer to understand Thomas' code than it did
> for me to write the solution using standard Proto components.

Indeed! Nice work! Thanks for teaching us again :)

> Still not seeing the big picture with visitors,

I wrote down the evolutionary process that led me to the "visitor". Please 
see my other post and reevaluate.


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-13 Thread Thomas Heller
On Wednesday 13 October 2010 22:46:40 Eric Niebler wrote:
> On 10/13/2010 11:54 AM, Thomas Heller wrote:
> > On Wednesday 13 October 2010 20:15:55 Eric Niebler wrote:

See my other post about comments on this, I think we agree on the rest.
The only point where we are diverging right now seems to be that "visitor" I 
proposed. See another attempt to justify it, and possibly explain it better, 
below.

> >>> However, it can be easily added (this is actually where the visitor
> >>> plays out its strength :)).
> >>> In line 47 i defined phoenix_grammar, which can be specialized on
> >>> tags to define a better grammar for one specific tag.
> >> 
> >> I don't get that. There needs to be a phoenex grammar. One, and only
> >> one. I don't see why it's a template, what the tag type is for, or how
> >> one could use a tag to define a "better" phoenix grammar. Because
> >> there's only one. Either an expression is a Phoenix grammar or it
> >> isn't, right?
> > 
> > I disagree here.
> > End users might want to extend phoenix (the EDSL itself), by
> > introducing new expressions (right now by introducing a new tag). Now,
> > the user wants to specify what valid expressions look like that can
> > be formed with his new expression type. Here i currently can imagine
> > two different scenarios: First, the user just wants to extend our
> > "regular" grammar. This can be achieved by something like this:
> > 
> > template <>
> > struct phoenix_grammar<my_new_tag>
> >   : // some proto grammar which specifies how valid expressions look
> >   // like
> > {};
> > The second scenario i can imagine, that someone wants to build some
> > kind of other language reusing phoenix parts, imagine someone who is
> > so obsessed with functional programming, and doesn't like the current
> > approach and wants a pure functional programming EDSL. Thus he wants
> > to disallow certain expressions. The current visitor design makes that
> > perfectly possible! (I hope this makes things clearer)
> 
> No, I'm afraid it doesn't. Extending phoenix by introducing additional
> tags doesn't necessitate parameterizing the *whole* grammar on a tag.
> The old design handled that scenario just fine by using the openly
> extensible proto::switch_. Just one grammar, and users could hook it
> easily.
> 
> And the second scenario also doesn't suggest to me that the grammar
> needs parameterization. It suggests to me that someone who wants a
> radical customization of Phoenix grammar should make their domain a
> sub-domain of Phoenix, reuse the parts they want, and define the grammar
> of their new EDSL.

Yes, domains ... just after sending the post, subdomains came to my mind as 
well ... This might be the more desirable solution. I agree.

> >>> You can then check with (admitting this doesn't work yet, but i plan
> >>> to implement this soon):
> >>> proto::matches<some_expr, phoenix_visitor<proto::_> >
> >>> If an expression is a valid lambda. Note, that you don't really have
> >>> to care about the algorithm (or transform) and just check that the
> >>> data you are passing is valid.
> >> 
> >> That made no sense to me. Care to try again?
> > 
> > Sure. If you look at my code, there is a construct like this:
> > 
> > template <template <typename> class Visitor> struct phoenix_visitor
> > 
> > : proto::visitor<Visitor, phoenix_grammar> {};
> > 
> > phoenix_grammar now has the definitions on what a valid phoenix
> > expression is (defined by whoever through the tag dispatch). The only
> > part missing is the Visitor. But for validating visitors we don't
> > really care what transforms will be applied to our expression. Thus
> > the proto::_ placeholder for visitor.
> 
> (I'll ignore for now the fact that you can't use proto::_ as a template
> template parameter.)
> 
> This is all very convoluted. After days of looking and much discussion,
> I'm afraid I still don't get it. I think it's a bad sign that so far
> I've heard no simple, comprehensible, high-level description of the
> architecture of this design.
> 
> Diving once more into the code,

Sorry i couldn't come up with something that really makes sense to you.
I thought i might have convinced you with my last two usecases.
I am starting to think that naming it visitor in the first place is what 
troubles you most ;)

Let me try again ...
I tried to solve the problem that when writing a proto grammar+transform you 
might want to dispatch your transform based on tags. Additionally you would 

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-13 Thread Thomas Heller
On Wednesday 13 October 2010 20:15:55 Eric Niebler wrote:
> On 10/12/2010 11:06 PM, Thomas Heller wrote:
> > On Wednesday 13 October 2010 07:10:15 Eric Niebler wrote:
> >> On 10/4/2010 11:51 PM, Thomas Heller wrote:
> >>> During the last discussions it became clear that the current design
> >>> wasn't as good as it seemed to be; it suffered from some serious
> >>> limitations. The main limitation was that data and algorithm weren't
> >>> clearly separated, meaning that every phoenix expression intrinsically
> >>> carried its behavior/"how to evaluate this expression".
> >> 
> >> Correct. IIRC, in the old scheme, the tags were actually function
> >> objects that implemented the default "phoenix" evaluation behavior
> >> associated with the tag. That didn't preclude other Proto algorithms
> >> being written that ignored the behavior associated with the tags and
> >> just treated them as tags. But it was not as stylistically clean.
> > 
> > Correct. The old scheme had one tag only, proto::tag::function, the
> > "type" of the expression was the first argument to that "function",
> > which implicitly "knew" how to evaluate the expression.
> 
> Oh, that's the OLD, old scheme. I had suggested a NEW old scheme where
> the tags themselves were function objects and the dummy env argument was
> done away with, but that never got implemented. It's probably best to
> leave it that way.

Finally something we can agree on :)

> 
> 
> >> Right. But it's not clear to me from looking at your code how the
> >> evaluation of reference_wrapped terminals is accomplished, though.
> >> Indeed, evaluating "cref(2)(0)" returns a reference_wrapper<int
> >> const>, not an int. And in thinking about it, this seems to throw a
> >> bit of a wrench in your design, because to get special handling (as
> >> you would need for reference_wrapper), your scheme requires unary
> >> expressions with special tags, not plain terminals.
> > 
> > Correct, i went for reference_wrapper here, because no matter how hard
> > i tried, the object did not get stored by reference in the terminal,
> > but by value. This might be directly related to
> > phoenix_domain::as_child. Not sure.
> 
> Possibly. But now, you're explicitly constructing a terminal with a
> reference_wrapper as a value. Proto doesn't know how to treat
> reference_wrapper when evaluating Phoenix expression because you haven't
> told it how. You'll need to fit this into your evaluation strategy
> somehow. It will be a good test to see how truly extensible your
> proposed mechanism is.

Ok, i have to admit, it gets a little messy here. We probably would like to 
specialize our generic evaluator based on the terminal tag, which takes care 
of: a) regular terminals (as in captured by value), b) things like 
reference_wrapper, and c) placeholders as you suggested them.
I don't have a good solution to that problem. Anyone else?

The only thing i can think of right now is to introduce a special tag for a 
reference. It would be great if we could make proto::terminals with different 
tags, or let's not call them terminals but nullary expressions.

> >>> Having that said, just having "plain" evaluation of phoenix
> >>> expressions seemed to me to be a waste of what could become
> >>> possible with the power of proto. I want to do more with phoenix
> >>> expressions, let me remind you that phoenix is "C++ in C++" and with
> >>> that i want to be able to write some cool algorithms transforming
> >>> these proto
> >>> expressions, introspect these proto expression and actively influence
> >>> the way these phoenix expressions get evaluated/optimized/whatever.
> >>> One application of these custom evaluations that came to my mind was
> >>> constant folding, so i implemented it on top of my new prototype. The
> >>> possibilities are endless: A proper design will enable such things as
> >>> multistage programming: imagine an evaluator which does not compute
> >>> the result, but translate a phoenix expression to a string which can
> >>> be compiled by an openCL/CUDA/shader compiler. Another thing might be
> >>> auto parallelization of phoenix expression (of course, we are far
> >>> away from that, we would need a proper graph library for that).
> >>> Nevertheless, these were some thoughts I had in mind.
> >> 
> >> All good goals, but IIUC nothing about the older

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-12 Thread Thomas Heller
On Wednesday 13 October 2010 07:10:15 Eric Niebler wrote:
> On 10/4/2010 11:51 PM, Thomas Heller wrote:
> > Eric Niebler wrote:
> >> On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller
> > 
> >> wrote:
> > 
> > 
> >>>> I'll also point out that this solution is FAR more verbose than the
> >>>> original which duplicated part of the grammar. I also played with
> >>>> such visitors, but every solution I came up with suffered from this
> >>>> same verbosity problem.
> >>> 
> >>> Ok, the verbosity is a problem, agreed. I invented this because of
> >>> phoenix, actually. As a use case i wrote a small prototype with a
> >>> constant folder:
> >>> 
> >>> http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp
> >> 
> >> Neat! You're getting very good at Proto. :-)
> > 
> > Thanks :)
> > Let me comment on your request:
> > 
> > On Tuesday 05 October 2010 03:15:27 Eric Niebler wrote:
> >> I'm looking at this code now, and perhaps my IQ has dropped lately
> >> (possible), but I can't for the life of me figure out what I'm looking
> >> at. I'm missing the big picture. Can you describe the architecture of
> >> this design at a high level? Also, what the customization points are
> >> and how they are supposed to be used? I'm a bit lost. :-(
> > 
> > First, let me emphasize that I try to explain this code:
> > http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp
> > 
> > Ok, i feared this would happen, forgive me the low amount of comments
> > in the code and let me start with my considerations for this new
> > prototype.
> > 
> > During the last discussions it became clear that the current design
> > wasn't as good as it seemed to be; it suffered from some serious
> > limitations. The main limitation was that data and algorithm weren't
> > clearly separated, meaning that every phoenix expression intrinsically
> > carried its behavior/"how to evaluate this expression".
> 
> Correct. IIRC, in the old scheme, the tags were actually function
> objects that implemented the default "phoenix" evaluation behavior
> associated with the tag. That didn't preclude other Proto algorithms
> being written that ignored the behavior associated with the tags and
> just treated them as tags. But it was not as stylistically clean.

Correct. The old scheme had one tag only, proto::tag::function; the "type" 
of the expression was the first argument to that "function", which implicitly 
"knew" how to evaluate the expression.

> > This was the main motivation behind this new design, the other
> > motivation was to simplify certain other customization points.
> > One of the main requirements i set myself was that the major part of
> > phoenix3 which is already written and works, should not be subject to
> > too much change.
> 
> Right.
> 
> > Ok, first things first. After some input from Eric it became clear that
> > phoenix expressions might just be regular proto expressions, wrapped
> > around the phoenix actor carrying a custom tag.
> 
> Not sure I understand this. From the code it looks like it's the other
> way around: phoenix expressions wrap proto expressions (e.g. the actor
> wrapper).

Sorry, got mixed up in naming the things i wrote. Your phrasing is correct.

> > This tag should determine that it
> > really is a custom phoenix expression, and the phoenix evaluation
> > scheme should be able to customize the evaluation on these tags.
> 
> OK.
> 
> > Let me remind you that most phoenix expressions can be handled by
> > proto's default transform, meaning that we want to reuse that wherever
> > possible, and just tackle the phoenix parts like argument
> > placeholders, control flow statements and such.
> > Sidenote: It also became clear that phoenix::value and
> > phoenix::reference can be just mere proto terminals.
> 
> Right. But it's not clear to me from looking at your code how the
> evaluation of reference_wrapped terminals is accomplished, though.
> Indeed, evaluating "cref(2)(0)" returns a reference_wrapper,
> not and int. And in thinking about it, this seems to throw a bit of a
> wrench in your design, because to get special handling (as you would
> need for reference_wrapper), your scheme requires unary expressions with
> special tags, not plain terminals.

Correct, i went for reference_wrapper he

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-12 Thread Thomas Heller
On Tuesday 12 October 2010 22:24:25 Eric Niebler wrote:
> On 10/4/2010 10:57 PM, Thomas Heller wrote:
> > Eric Niebler wrote:
> >> On 10/4/2010 12:20 PM, Thomas Heller wrote:
> >>> the new thing i added is transform_expr, which works like
> >>> fusion::transform: It creates the expression again, with transformed
> >>> child nodes (the child nodes get transformed according to the
> >>> transform you specify as template parameter
> >> 
> >> How is that different than what the pass_through transform does?
> > 
> > It works slightly different than pass_through:
> > 
> > transform_expr calls
> > when<_, Fun>::template impl<typename result_of::child_c<Expr, N>::type,
> > State, Data>
> > while pass_through calls
> > Grammar::proto_childN::template impl<typename result_of::child_c<Expr,
> > N>::type, State, Data>
> > 
> > Truth is, I could not see how I could use pass_through to do what I
> > wanted, and still fail to see what pass_through really does. Another
> > truth is, that i initially wanted to call transform_expr pass_through
> > until I realized that it is already there.
> > Perhaps you can shed some light on why pass_through is implemented
> > that way.
> 
> Getting caught up on this discussion and realized I never responded to
> this.
> 
> It seems to me that:
> 
> transform_expr< Fun >
> 
> is equivalent to:
> 
> nary_expr< _, vararg< when<_,Fun> > >
> 
> with some extra handling for terminals. (The default transform of the
> nary_expr grammar is implemented in terms of pass_through.)
> Transform_expr applies the Fun transform to terminals also, but
> pass_through doesn't do anything to terminals. That's because, in my
> experience, transforms that work for non-terminals rarely work as-is for
> terminals. Is your experience different?

IMHO, transforms should handle every part of the expression, including 
terminals. Terminals are often neglected and not handled correctly, that is 
true, but they should be. When evaluating a proto expression you usually are 
not interested in the terminal (as in terminal<_>) itself, but in the value 
(as in _value(terminal<_>)). That is why i decided to include application to 
terminals.
It is however interesting that some pre-built transforms handle terminals 
(like _default) and others do not (like pass_through).
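
To make the comparison concrete, here is a sketch of the equivalence Eric 
stated (Fun stands for any transform; everything else is stock proto):

    // rebuilds a node, applying Fun to each immediate child; a bare
    // terminal expression is passed through unchanged
    template <typename Fun>
    struct apply_to_children
      : proto::nary_expr<proto::_, proto::vararg<proto::when<proto::_, Fun> > >
    {};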




Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-12 Thread Thomas Heller
On Tuesday 12 October 2010 22:37:06 Eric Niebler wrote:
> On 10/12/2010 1:24 PM, Eric Niebler wrote:
> > So it really seems to me that transform_expr is not necessary, but I
> > may be wrong.
> 
> I just confirmed this by trivially replacing uses of transform_expr with
> appropriate uses of nary_expr in your phoenix_test.cpp (attached).
> 
> The only difference is the type of the last expression. Since
> pass_through transform leaves terminals alone, _1 ends up stored by
> reference in the transformed expression, which is perfectly OK and saves
> copies (admittedly trivial in this case).

You are right, transform_expr can be replaced by nary_expr. transform_expr is 
a little less verbose but not as flexible, I admit.
The motivation behind transform_expr was to have something similar to fold, 
to resemble these "standard" algorithms like fold and transform, without 
noticing they were already there. So I guess it would be a nice addition to 
the docs to have a mapping from algorithms people already know to how to 
express them with proto.


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-10 Thread Thomas Heller
Eric Niebler wrote:

> On 10/8/2010 12:12 AM, Thomas Heller wrote:
>> On Thursday 07 October 2010 23:06:24 Eric Niebler wrote:
>>> On 10/4/2010 1:55 PM, Eric Niebler wrote:
>>>> The idea of being able to specify the transforms separately from the
>>>> grammar is conceptually very appealing. The grammar is the control
>>>> flow, the transform the action. Passing in the transforms to a grammar
>>>> would be like passing a function object to a standard algorithm: a
>>>> very reasonable thing to do. I don't think we've yet found the right
>>>> formulation for it, though. Visitors and tag dispatching are too
>>>> ugly/hard to use.
>>>>
>>>> I have some ideas. Let me think some.
>>>
>>> Really quickly, what I have been thinking of is something like this:
>>>
>>> template <typename Transforms>
>>> struct MyGrammar
>>>   : proto::or_<
>>> proto::when< rule1, typename Transforms::tran1 >
>>>   , proto::when< rule2, typename Transforms::tran2 >
>>>   , proto::when< rule3, typename Transforms::tran3 >
>>> >
>>> {};
>> 
>> I don't think this is far away from what i proposed.
>> Consider the following:
>> 
>> template <typename Tag>
>> struct my_grammar
>> : proto::or_<
>>  rule1
>>, rule2
>>, rule3
>> >
>> {};
>> 
>> template <typename Tag> struct my_transform;
>> 
>> // corresponding to the tag of expression of rule1
> 
> Do you mean expression tag, like proto::tag::plus, or some other more
> abstract tag?

Yes, the expression tag.

>> template <> struct my_transform<tag1>
>> : // transform
>> {};
>> 
>> // corresponding to the tag of expression of rule2
>> template <> struct my_transform<tag2>
>> : // transform
>> {};
>> 
>> // corresponding to the tag of expression of rule3
>> template <> struct my_transform<tag3>
>> : // transform
>> {};
>> 
>> typedef proto::visitor<my_transform, my_grammar>
>> algorithm_with_specific_transforms;
>> 
>> In my approach, both the transform and the grammar can be exchanged at
>> will.
> 
> I don't know what this can possibly mean. Grammars and transforms are
> not substitutable for each other in *any* context.

Ok, sorry. I expressed it wrong. What you described further is exactly what 
i meant here
 
>> What i am trying to say is, both the transforms and the control flow (aka
>> the grammar) intrinsically depend on the tag of the expressions, because
>> the tag is what makes different proto expressions distinguishable.
> 
> This is where I disagree. There are many cases where the top-level tag
> is insufficient to distinguish between two expressions. That's why Proto
> has grammars.

That is correct. But let's take for example binary expressions: they all 
have two children (arity of two), so the only way to distinguish one 
binary expression from another is by its tag. Ok, there still is the 
domain, but i am assuming only one domain here.

> Proto::switch_ dispatches on tags, but I consider switch_
> to be primarily an optimization technique (although it does have that
> nice open-extensibility feature that we're using for Phoenix).
> 
>> This inherent characteristic of a proto expression is what drove Joel
>> Falcou (i am just guessing here) and me (I know that for certain) to this
>> tag-based dispatching of transforms and grammars.
> 
> Understood. OK, the problem you're trying to solve is:
> 
> A) Have an openly extensible grammar.
> B) Have an equally extensible set of transforms.
> C) Be able to substitute out a whole other (extensible) set of transforms.
> 
> Is that correct?

Yes that is correct. To avoid further misunderstandings: I am not arguing to 
replace proto grammars or transforms; as you said above, you need more than 
just the tag to describe valid expressions and transform the expression 
trees. I am proposing something that can coexist and allows the 3 
things you listed above.


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-08 Thread Thomas Heller
On Thursday 07 October 2010 23:06:24 Eric Niebler wrote:
> On 10/4/2010 1:55 PM, Eric Niebler wrote:
> > The idea of being able to specify the transforms separately from the
> > grammar is conceptually very appealing. The grammar is the control
> > flow, the transform the action. Passing in the transforms to a grammar
> > would be like passing a function object to a standard algorithm: a
> > very reasonable thing to do. I don't think we've yet found the right
> > formulation for it, though. Visitors and tag dispatching are too
> > ugly/hard to use.
> > 
> > I have some ideas. Let me think some.
> 
> Really quickly, what I have been thinking of is something like this:
> 
> template <typename Transforms>
> struct MyGrammar
>   : proto::or_<
>         proto::when< rule1, typename Transforms::tran1 >
>       , proto::when< rule2, typename Transforms::tran2 >
>       , proto::when< rule3, typename Transforms::tran3 >
>     >
> {};

I don't think this is far away from what i proposed.
Consider the following:

template <typename Tag>
struct my_grammar
: proto::or_<
     rule1
   , rule2
   , rule3
  >
{};

template <typename Tag> struct my_transform;

// corresponding to the tag of expression of rule1
template <> struct my_transform<tag1>
: // transform
{};

// corresponding to the tag of expression of rule2
template <> struct my_transform<tag2>
: // transform
{};

// corresponding to the tag of expression of rule3
template <> struct my_transform<tag3>
: // transform
{};

typedef proto::visitor<my_transform, my_grammar>
algorithm_with_specific_transforms;

In my approach, both the transform and the grammar can be exchanged at will.

What i am trying to say is, both the transforms and the control flow (aka the 
grammar) intrinsically depend on the tag of the expressions, because the tag 
is what makes different proto expressions distinguishable.
This inherent characteristic of a proto expression is what drove Joel Falcou 
(i am just guessing here) and me (I know that for certain) to this tag-based 
dispatching of transforms and grammars.

> 
> That is, you parameterize the grammar on the transforms, just the way
> you parameterize a std algorithm by passing it a function object. Each
> grammar (I'm thinking of starting to call Proto grammars+transforms
> "Proto algorithms", because really that's what they are) must document
> the concept that must be satisfied by its Transforms template parameter
> (what nested typedefs must be present).
> 
> This is extremely simple and terse. It gives a simple way to extend
> behaviors (by deriving from an existing Transforms model and hiding some
> typedefs with your own).
> 
> I know this is not general enough to meet the needs of Phoenix, and
> possibly not general enough for NT2, but I just thought I'd share the
> direction of my thinking on this problem.
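
To illustrate the quoted idea with a sketch (all of these names are 
assumed, following the MyGrammar example above):

    struct DefaultTransforms
    {
        typedef proto::_value tran1;
        typedef proto::_      tran2;
        typedef proto::_      tran3;
    };

    // customize by derivation: hide one typedef, inherit the rest
    struct MyTransforms : DefaultTransforms
    {
        typedef proto::_left tran1;
    };

    typedef MyGrammar<MyTransforms> my_algorithm;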


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Thomas Heller
Eric Niebler wrote:

> On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller
> wrote:

>>
>> >
>> > I'll also point out that this solution is FAR more verbose than the
>> > original which duplicated part of the grammar. I also played with such
>> > visitors, but every solution I came up with suffered from this same
>> > verbosity problem.
>>
>> Ok, the verbosity is a problem, agreed. I invented this because of
>> phoenix, actually. As a use case i wrote a small prototype with a
>> constant folder:
>>
>> http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp
>>
>>
> Neat! You're getting very good at Proto. :-)

Thanks :)
Let me comment on your request:

On Tuesday 05 October 2010 03:15:27 Eric Niebler wrote:
> I'm looking at this code now, and perhaps my IQ has dropped lately
> (possible), but I can't for the life of me figure out what I'm looking
> at. I'm missing the big picture. Can you describe the architecture of
> this design at a high level? Also, what the customization points are and
> how they are supposed to be used? I'm a bit lost. :-(

First, let me emphasize that I try to explain this code:
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp

Ok, i feared this would happen, forgive me the low amount of comments in the 
code and let me start with my considerations for this new prototype.

During the last discussions it became clear that the current design wasn't 
as good as it seemed to be; it suffered from some serious limitations. The 
main limitation was that data and algorithm weren't clearly separated, 
meaning that every phoenix expression intrinsically carried its 
behavior/"how to evaluate this expression".
This was the main motivation behind this new design, the other motivation 
was to simplify certain other customization points.
One of the main requirements i set myself was that the major part of 
phoenix3 which is already written and works, should not be subject to too 
much change.

Ok, first things first. After some input from Eric it became clear that 
phoenix expressions might just be regular proto expressions, wrapped around 
the phoenix actor carrying a custom tag. This tag should determine that it 
really is a custom phoenix expression, and the phoenix evaluation scheme 
should be able to customize the evaluation on these tags.
Let me remind you that most phoenix expressions can be handled by proto's 
default transform, meaning that we want to reuse that wherever possible, and 
just tackle the phoenix parts like argument placeholders, control flow 
statements and such.
Sidenote: It also became clear that phoenix::value and phoenix::reference 
can be just mere proto terminals.

Having that said, just having "plain" evaluation of phoenix expressions 
seemed to me to be a waste of what could become possible with the power of 
proto. I want to do more with phoenix expressions; let me remind you that 
phoenix is "C++ in C++" and with that i want to be able to write some cool 
algorithms transforming these proto expressions, introspect these proto 
expression and actively influence the way these phoenix expressions get 
evaluated/optimized/whatever. One application of these custom evaluations 
that came to my mind was constant folding, so i implemented it on top of my 
new prototype. The possibilities are endless: A proper design will enable 
such things as multistage programming: imagine an evaluator which does not 
compute the result, but translate a phoenix expression to a string which can 
be compiled by an openCL/CUDA/shader compiler. Another thing might be auto 
parallelization of phoenix expression (of course, we are far away from that, 
we would need a proper graph library for that). Nevertheless, these were 
some thoughts I had in mind.

This is the very big picture.

Let me continue to explain the customization points I have in this design:

First things first, it is very easy to add new expression types by 
specifying:
   1) The new tag of the expression.
   2) The way to create this new expression, and thus build up the 
  expression template tree.
   3) How to hook onto the evaluation mechanism.
   4) Other evaluators which influence just your newly created tag-based 
  expression, or all the other already existing tags.

Let me guide you through this process in detail by explaining what has been 
done for the placeholder "extension" to proto (I reference the line numbers 
of my prototype).

1) define the tag tag::argument: line 307
2) specify how to create this expression: lines 309 to 315
  First, define a valid proto expression (line 309) through phoenix_expr, 
  which had stuff like proto::plus and such as archetypes. The thing it 
  does is, it creates a valid prot

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Thomas Heller
Eric Niebler wrote:

> On 10/4/2010 12:20 PM, Thomas Heller wrote:
>> On Mon, Oct 4, 2010 at 8:53 PM, joel falcou
>>  wrote:
>>> On 04/10/10 20:45, Eric Niebler wrote:
>>>>
>>>> I'm not opposed to such a thing being in Proto, but I (personally)
>>>> don't feel a strong need. I'd be more willing if I saw a more strongly
>>>> motivating example. I believe Joel Falcou invented something similar.
>>>> Joel, what was your use scenario?
>>>>
>>>
>>> NT2 ;)
>>>
>>> More specifically, all our transforms are built the same way:
>>> visit the tree, dispatch on visitor type + tag, and act accordingly.
>>> It was needed for us cause the grammar could NOT have been written by
>>> hand, as we support 200+ functions on nt2 terminals. All our code is
>>> something like "for each node, do Foo" with variable Foo depending on
>>> the pass, and duplicating the grammar was a no-no.
>>>
>>> We ended up with something like this, except without switch_ (which I
>>> like btw), so we can easily add new transforms on the AST from the
>>> external viewpoint of a user who didn't have to know much proto. As I
>>> only had to define one grammar (the visitor) and only specialisations
>>> of the visitor for some tags, it compiled fast and that was what we
>>> wanted.
>>>
>>> Thomas, why not show the split example? It's far better than this
>>> one, and I remember Eric and I weren't able to write it using
>>> grammar/transform back in the day.
>> 
>> The split example was one of the motivating examples, that is correct,
>> though it suffers from the exact points Eric is criticizing.
>> The split example was possible because i added some new transforms
>> which proto currently misses, but i didn't want to shoot out all my
>> ammunition just yet :P
>> But since you ask for it:
>> http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/splitter.cpp
> 
> Can you describe in words or add some comments? It's not immediately
> obvious what this code does.

Sure, please reload that site and follow my line-by-line description.

line 57 to 68:
We want to determine whether our current expression contains some split 
expression. That is what the fold does.
line 69 to 97:
Fold returned true, which means we have to do two things, and join the 
results of 1) and 2):
1) replace the split(...) with a placeholder, and place it into our 
   resulting vector
   line 70 to 81:
   Transform the children of our expression: if it is a split, replace 
   it with placeholder, if not, just leave it be.
2) insert the "..." (which is the subexpression we want to split from 
   our main expression) into our resulting vector
   line 82 to 96:
   traverse the children again, if we encounter a split expression, 
   insert the children of this expression into our fusion vector, if not 
   return the state (which is our fusion vector)
   Note 1: The traversal of our tree happens here, we pass the result of 
   the recursive call to split_grammar to either join or push_back (line 
   90 and 91)
   Note 2: This is a Bottom-Up traversal.
line 98:
The children of our current node have no split expressions; just leave 
them be and pack them into a fusion vector. The reason for that is that if 
we call split_grammar with an expression containing no split at all, 
and call fusion::as_vector on the result, it will unpack the children of 
the expression.

I hope that makes the code clearer. The hardest part of this exercise was 
not to write down the grammar and the transforms, but to actually change the 
children of an arbitrary expression:
 
>> the new thing i added is transform_expr, which works like
>> fusion::transform: It creates the expression again, with transformed
>> child nodes (the child nodes get transformed according to the transform
>> you specify as template parameter
> 
> How is that different than what the pass_through transform does?

It works slightly different than pass_through:

transform_expr calls
when<_, Fun>::template impl<typename result_of::child_c<Expr, N>::type,
State, Data>
while pass_through calls
Grammar::proto_childN::template impl<typename result_of::child_c<Expr, N>::type, State, Data>

Truth is, I could not see how I could use pass_through to do what I wanted, 
and still fail to see what pass_through really does. Another truth is, that 
i initially wanted to call transform_expr pass_through until I realized that 
it is already there.
Perhaps you can shed some light on why pass_through is implemented that 
way.


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Thomas Heller
On Mon, Oct 4, 2010 at 8:45 PM, Eric Niebler  wrote:
> On 10/4/2010 6:49 AM, Thomas Heller wrote:
>> Hi,
>>
>> I spent some time on thinking how one could make the traversal of a proto
>> expression tree easier. I now want to share my findings with you all, and
>> propose a new addition to proto.
>>
>> My first idea was to decouple grammars from transforms, to follow the idea of
>> separation of data and algorithm.
>
> Data and algorithm are already separate in Proto. "Data" is the
> expression to traverse. "Algorithm" is the transform, driven by
> pattern-matching in the grammar.

True, but you can only attach one grammar to one transform. Thus for
every transformation you want to make, you need (in theory) to replicate
your grammar; see further down that this is not the case in real life.

>> Currently, a transform is only applyable
>> when a certain expression is matched, this is good and wanted, though
>> sometimes you get yourself into a situation that requires you to reformulate
>> your grammar, and just exchange the transform part.
>
> True. But in my experience, the grammar part is rarely unchanged.

Yep, unchanged, and therefore you don't want to write it again.

>> Let me give you an example of a comma separated list of proto expression you
>> want to treat like an associative sequence:
>>
>>     opt1 = 1, opt2 = 2, opt3 = 3
>>
>> A proto grammar matching this expression would look something like this:
>>
>
> 
>
>>
>> This code works and everybody is happy, right?
>> Ok, what will happen if we want to calculate the number of options we
>> provided in our expression?
>> The answer is that we most likely need to repeat almost everything from
>> pack. except the transform part:
>>
>>    struct size :
>>        or_<
>>            when<
>>                 comma<size, spec>
>>               , mpl::plus<size(_left), size(_right)>()
>>            >
>>          , when<
>>                spec
>>              , mpl::int_<1>()
>>            >
>>        >
>>     {};
>
> This trivial example doesn't make your point, because the "grammar" that
> gets repeated ("comma" and "spec") is such a tiny fraction
> of the algorithm.

Right, this example does not show the full potential.

>> Now think of it if you are having a very complicated grammar, c&p the whole
>> grammar, or even just parts of it no fun.
>
> This is true, but it can be alleviated in most (all?) cases by building
> grammars in stages, giving names to parts for reusability. For instance,
> if "comma" were actually some very complicated grammar, you
> would do this:
>
>   struct packpart:
>     : comma
>   {};
>
> And then reuse packpart in both algorithms.

Right, that also works!



>
> I'm not opposed to such a thing being in Proto, but I (personally) don't
> feel a strong need. I'd be more willing if I saw a more strongly
> motivating example. I believe Joel Falcou invented something similar.
> Joel, what was your use scenario?
>
> I've also never been wild for proto::switch_, which I think is very
> syntax-heavy. It's needed as an alternative to proto::or_ to bring down
> the instantiation count, but it would be nice if visitor has a usage
> mode that didn't require creating a bunch of (partial) template
> specializations.

Ok, point taken. The partial specializations are also something i don't
like. I thought about doing the same as you did with contexts (tag as a
parameter to operator() calls), but decided against it, cause you would
end up with an enormous set of operator() overloads. With this solution
you end up with an enormous set of classes, but this solution is
extendable (even from the outside), both from the grammar part and the
transform part, meaning you can add new tags without changing the
intrinsics of your other grammars/transforms.

>
> I'll also point out that this solution is FAR more verbose than the
> original which duplicated part of the grammar. I also played with such
> visitors, but every solution I came up with suffered from this same
> verbosity problem.

Ok, the verbosity is a problem, agreed. I invented this because of
phoenix, actually. As a use case i wrote a small prototype with a
constant folder:
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp

Note that the phoenix grammar could end up growing very complex! But
users should still be able to add their own tags for their own
expressions. I decided to propose the visitor because Joel Falcou
showed interest in it (Because he invented something similar for NT2,
remember one of

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Thomas Heller
On Mon, Oct 4, 2010 at 8:53 PM, joel falcou  wrote:
> On 04/10/10 20:45, Eric Niebler wrote:
>>
>> I'm not opposed to such a thing being in Proto, but I (personally) don't
>> feel a strong need. I'd be more willing if I saw a more strongly
>> motivating example. I believe Joel Falcou invented something similar.
>> Joel, what was your use scenario?
>>
>
> NT2 ;)
>
> More specifically, all our transforms are built the same way:
> visit the tree, dispatch on visitor type + tag, and act accordingly.
> It was needed for us cause the grammar could NOT have been written by hand,
> as we support 200+ functions on nt2 terminals. All our code is something
> like "for each node, do Foo" with variable Foo depending on the pass, and
> duplicating the grammar was a no-no.
>
> We ended up with something like this, except without switch_ (which I like
> btw), so we can easily add new transforms on the AST from the external
> viewpoint of a user who didn't have to know much proto. As I only had to
> define one grammar (the visitor) and only specialisations of the visitor
> for some tags, it compiled fast and that was what we wanted.
>
> Thomas, why not show the split example? It's far better than this one,
> and I remember Eric and I weren't able to write it using grammar/transform
> back in the day.

The split example was one of the motivating examples, that is correct,
though it suffers from the exact points Eric is criticizing.
The split example was possible because i added some new transforms
which proto currently misses, but i didn't want to shoot out all my
ammunition just yet :P
But since you ask for it:
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/splitter.cpp

the new thing i added is transform_expr, which works like fusion::transform:
It creates the expression again, with transformed child nodes (the
child nodes get transformed according to the transform you specify as
template parameter).


[proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-04 Thread Thomas Heller
Hi,

I spent some time on thinking how one could make the traversal of a proto 
expression tree easier. I now want to share my findings with you all, and 
propose a new addition to proto.

My first idea was to decouple grammars from transforms, to follow the idea of 
separation of data and algorithm. Currently, a transform is only applicable 
when a certain expression is matched. This is good and wanted, though 
sometimes you get yourself into a situation that requires you to reformulate 
your grammar and just exchange the transform part.
Let me give you an example of a comma separated list of proto expressions you 
want to treat like an associative sequence:

opt1 = 1, opt2 = 2, opt3 = 3

A proto grammar matching this expression would look something like this:

   using namespace proto;

   struct not_found {};

   template <typename T>
   struct opt {};

   struct opt1_; struct opt2_; struct opt3_;

   struct term : terminal<opt<_> > {};
   struct spec : when<assign<term, terminal<_> >, _value(_right)> {};

   struct pack :
       or_<
           when<
               comma<pack, spec>
             , if_<
                   matches<_left(_right), _state>()
                 , spec(_right)
                 , pack(_left)
               >
           >
         , when<
               spec
             , if_<
                   matches<_left, _state>()
                 , spec
                 , not_found()
               >
           >
       >
   {};

   proto::terminal<opt<opt1_> >::type const opt1 = {{}};
   proto::terminal<opt<opt2_> >::type const opt2 = {{}};
   proto::terminal<opt<opt3_> >::type const opt3 = {{}};

   template <typename Expr, typename Opt>
   typename boost::result_of<pack(Expr const&, Opt const&)>::type
   extract(Expr const& expr, Opt const& opt)
   {
       return pack()(expr, opt);
   }
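
Usage would then be something like this (a sketch; the opt tags are the 
ones reconstructed above, and the result is the expected one, not a 
tested one):

   // look up the value bound to opt2 in the option pack
   int v = extract((opt1 = 1, opt2 = 2, opt3 = 3), opt2);  // v should be 2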

This code works and everybody is happy, right?
Ok, what will happen if we want to calculate the number of options we 
provided in our expression?
The answer is that we most likely need to repeat almost everything from 
pack, except the transform part:

   struct size :
       or_<
           when<
               comma<size, spec>
             , mpl::plus<size(_left), size(_right)>()
           >
         , when<
               spec
             , mpl::int_<1>()
           >
       >
   {};

Now think of what happens if you have a very complicated grammar: copying and 
pasting the whole grammar, or even just parts of it, is no fun.

My solution to this problem is proto::visitor (see attached):

the signature looks like the following:

template <
    template <typename> class Visitor
  , template <typename> class Grammar
>
struct visitor;

So, visitor is implemented in terms of proto::switch_. What it does is the 
following: every tag that is encountered gets dispatched to the Grammar 
template specified by the user; if that grammar matches the expression, the 
transform Visitor is called.
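
Its skeleton is presumably something along these lines (my guess at the 
attached implementation, not verbatim):

    template <
        template <typename> class Visitor
      , template <typename> class Grammar
    >
    struct visitor
      : proto::switch_<visitor<Visitor, Grammar> >
    {
        // switch_ asks for case_<Tag> at each visited node: match the
        // user's grammar for that tag, then run the user's transform
        template <typename Tag>
        struct case_
          : proto::when<Grammar<Tag>, Visitor<Tag> >
        {};
    };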

Let me demonstrate this with our little example above:

   template <typename Tag>
   struct option_grammar // provide a default, don't match anything
     : proto::not_<proto::_>
   {};

   template <template <typename> class Visitor>
   struct option_visitor
     : proto::visitor<Visitor, option_grammar>
   {};

   template <>
   struct option_grammar<proto::tag::terminal>
     : proto::terminal<proto::_>
   {};

   template <>
   struct option_grammar<proto::tag::assign>
     : proto::assign<term, proto::terminal<proto::_> >
   {};

   template <>
   struct option_grammar<proto::tag::comma>
     : proto::comma<proto::_, proto::_>
   {};

With the definitions of spec and term from the above, this is the grammar 
which matches our expressions. The definition of our two proto transforms now 
becomes the following:

   template <typename Tag> struct option_eval : proto::_ {};

   typedef option_visitor<option_eval> pack;

   template <>
   struct option_eval<proto::tag::terminal>
     : proto::_value
   {};

   template <>
   struct option_eval<proto::tag::assign>
     : if_<
           matches<_left, _state>()
         , pack(_right)
         , not_found()
       >
   {};

   template <>
   struct option_eval<proto::tag::comma>
     : if_<
           matches<_left(_right), _state>()
         , pack(_right(_right))
         , pack(_left)
       >
   {};

How the size calculation would look is left as an exercise ;) (one possible 
answer follows below)
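
For completeness, a sketch of one possible answer, under the same tag 
assumptions as above (proto::make is used so the nested transforms get 
evaluated):

   template <typename Tag> struct option_size : proto::_ {};

   typedef option_visitor<option_size> size;

   template <>
   struct option_size<proto::tag::assign>
     : proto::make<mpl::int_<1> >
   {};

   template <>
   struct option_size<proto::tag::comma>
     : proto::make<mpl::plus<size(proto::_left), size(proto::_right)> >
   {};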

I hope the motivation and the need for what i called proto::visitor are 
clear now :)

I hear you people scream: what about compile times? When i define my DSL and 
different transforms on the proto trees, i most of the time only need a tiny 
subset of my actual grammar; won't compile times be even higher with this 
approach?
The short answer is no. It seems compile time of proto transforms is not 
really dependent on the grammar's complexity but on the complexity of the 
expressions (I did some basic tests for this, if you don't believe me, i can 
deliver it ;)).

The next thing i wanted to share is about traversals. While working with 
this visitor idea i searched for a good and easy way to specify traversals 
of proto trees. My search was soon over, because calling transforms 
recursively is what makes proto traverse the tree. I will try to formulate 
some traversal orders in one of my next mails.

Greetings, Thomas

Re: [proto] [Proto]: How to use complex grammars in a domain/extension

2010-09-22 Thread Thomas Heller
On Wednesday 22 September 2010 15:40:37 Eric Niebler wrote:
> Right, there's potentially a problem in Proto here. If RHS and LHS are
> in different domains, a common super-domain is deduced first, and then
> Proto checks to see if the LHS and RHS conform to that common domain.
> That doesn't seem right. Can you open a ticket?

Done. With this fix proposed:

Enable the binary operator iff:
1) Both LHS and RHS expressions have the same super domain (that is probably 
whenever domain deduction was successful)
2) LHS matches the grammar for the domain of LHS (not the super domain)
3) RHS matches the grammar for the domain of RHS (not the super domain)
4) The resulting expression matches the grammar of the super domain.


Re: [proto] [Proto]: How to use complex grammars in a domain/extension

2010-09-22 Thread Thomas Heller


Eric Niebler wrote:

> (proto lists members, see this thread for the rest of the discussion,
> which has been happening on -users:
> http://lists.boost.org/boost-users/2010/09/62747.php)
> 
> On 9/21/2010 9:19 AM, Roland Bock wrote:
>> On 09/21/2010 11:55 AM, Thomas Heller wrote:
>>> 
>>> Solved the mystery. Here is the code; explanation comes afterward:
>>>
>>> 
>>> So, everything works as expected!
>>>
>> Thomas,
>> 
>> wow, this was driving me crazy, thanks for solving and explaining in all
>> the detail! :-)
>> 
>> I am still not sure if this isn't a conceptual problem, though:
>> 
>> The "equation" grammar is perfectly OK. But without or-ing it with the
>> "addition" grammar, the extension does not allow ANY valid expression to
>> be created. I wonder if that is a bug or a feature?
> 
> It's a feature. Imagine this grammar:
> 
> struct IntsOnlyPlease
>   : proto::or_<
>         proto::terminal<int>
>       , proto::nary_expr<proto::_, proto::vararg<IntsOnlyPlease> >
>     >
> {};
> 
> And an integer terminal "i" in a domain that obeys this grammar. Now,
> what should this do:
> 
> i + "hello!";
> 
> You want it to fail because if it doesn't, you would create an
> expression that doesn't conform to the domain's grammar. Right? Proto
> accomplishes this by enforcing that all operands must conform to the
> domain's grammar *and* the resulting expression must also conform.
> Otherwise, the operator is disabled.
> 
> Anything else would be fundamentally broken.

Let me add something:

The more I thought about the problem, the more I thought that this would 
be a perfect use case for sub-domains! So I tried to hack something 
together:

#include <boost/proto/proto.hpp>
#include <iostream>

using namespace boost;

typedef proto::terminal<int>::type terminal;

struct equation;

struct addition:
    proto::or_
    <
        proto::terminal<int>,
        proto::plus<addition, addition>
    >
{};

struct equation:
    proto::or_
    <
        proto::equal_to<addition, addition>
    >
{};

template <typename Expr>
struct extension;

struct my_domain:
    proto::domain
    <
        proto::pod_generator<extension>,
        proto::or_<equation, addition>,
        proto::default_domain
    >
{};

template <typename Expr>
struct lhs_extension;

struct my_lhs_domain:
    proto::domain
    <
        proto::pod_generator<lhs_extension>,
        addition,
        my_domain
    >
{};

template <typename Expr>
struct rhs_extension;

struct my_rhs_domain:
    proto::domain
    <
        proto::pod_generator<rhs_extension>,
        addition,
        my_domain
    >
{};


template <typename Expr>
struct extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , extension<Expr>
      , my_domain
    )

    void test() const
    {}
};

template <typename Expr>
struct lhs_extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , lhs_extension<Expr>
      , my_lhs_domain
    )
};

template <typename Expr>
struct rhs_extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , rhs_extension<Expr>
      , my_rhs_domain
    )
};

template <typename Expr>
void matches(Expr const& expr)
{
    expr.test();

    std::cout << std::boolalpha
        << proto::matches<Expr, equation>::value << "\n";
}

int main()
{
    lhs_extension<terminal> const i = {};
    rhs_extension<terminal> const j = {};

    /*matches(i);  // false
    matches(j);  // false
    matches(i + i);  // false
    matches(j + j);  // false*/
    //matches(i + j);  // compile error
    //matches(j + i);  // compile error
    matches(i == j); // true
    matches(i == j + j); // true
    matches(i + i == j); // true
    matches(i + i == j + j); // true
}

This seems to be exactly what Roland wanted to achieve in the first place.
However, it looks like this design just overcomplicates stuff because we 
have to specify the "addition" in our base domain anyway ...

Initially I was under the impression that this wasn't needed, but it seems 
that proto cannot deduce something like:

lhs_expression OP rhs_expression results in expression

So the question is: is it reasonable to add features like that?
It seems valuable to me.

If I omit the addition in the my_domain grammar, I could have defined the 
operator== myself (a sketch of what that could look like follows below). 
However, I think that somehow defeats the purpose of proto, doesn't it?
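
For illustration only, a hedged sketch of mine (not from the original mail) 
of such a hand-written operator==, building the equal_to node directly in 
my_domain via proto::make_expr:

// hand-rolled operator==, bypassing proto's deduced operator overloads;
// needs <boost/ref.hpp> in addition to the proto headers
template <typename L, typename R>
typename proto::result_of::make_expr<
    proto::tag::equal_to
  , my_domain
  , lhs_extension<L> const &
  , rhs_extension<R> const &
>::type const
operator==(lhs_extension<L> const &lhs, rhs_extension<R> const &rhs)
{
    // hold both children by reference and wrap the result with
    // my_domain's generator
    return proto::make_expr<proto::tag::equal_to, my_domain>(
        boost::ref(lhs), boost::ref(rhs));
}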



Re: [proto] [Proto]: How to use complex grammars in a domain/extension

2010-09-22 Thread Thomas Heller



Eric Niebler wrote:

> On 9/21/2010 9:51 AM, Eric Niebler wrote:
>> (proto lists members, see this thread for the rest of the discussion,
>> which has been happening on -users:
>> http://lists.boost.org/boost-users/2010/09/62747.php)
>> 
>> On 9/21/2010 9:19 AM, Roland Bock wrote:
>>> On 09/21/2010 11:55 AM, Thomas Heller wrote:
>>>> 
>>>> Solved the mystery. Here is the code; explanation comes afterward:
>>>>
>>>> 
>>>> So, everything works as expected!
>>>>
>>> Thomas,
>>>
>>> wow, this was driving me crazy, thanks for solving and explaining in all
>>> the detail! :-)
>>>
>>> I am still not sure if this isn't a conceptual problem, though:
>>>
>>> The "equation" grammar is perfectly OK. But without or-ing it with the
>>> "addition" grammar, the extension does not allow ANY valid expression to
>>> be created. I wonder if that is a bug or a feature?
>> 
>> It's a feature. Imagine this grammar:
>> 
>> struct IntsOnlyPlease
>>   : proto::or_<
>>         proto::terminal<int>
>>       , proto::nary_expr<proto::_, proto::vararg<IntsOnlyPlease> >
>>     >
>> {};
>> 
>> And an integer terminal "i" in a domain that obeys this grammar. Now,
>> what should this do:
>> 
>> i + "hello!";
>> 
>> You want it to fail because if it doesn't, you would create an
>> expression that doesn't conform to the domain's grammar. Right? Proto
>> accomplishes this by enforcing that all operands must conform to the
>> domain's grammar *and* the resulting expression must also conform.
>> Otherwise, the operator is disabled.
>> 
>> Anything else would be fundamentally broken.
> 
> This explanation is incomplete. Naturally, this operator+ would be
> disabled anyway because the resulting expression doesn't conform to the
> grammar regardless of whether the LHS and RHS conform. It's a question
> of *when* the operator gets disabled. For a full explanation, see this
> bug report:
> 
> https://svn.boost.org/trac/boost/ticket/2407
> 
> The answer is simple and logically consistent: make sure *every* valid
> expression in your domain (including lone terminals) is accounted for by
> your grammar.





Re: [proto] cpp-next.com article on proto published

2010-08-31 Thread Thomas Heller
On Tue, Aug 31, 2010 at 5:40 AM, Eric Niebler  wrote:
> It's also been posted to reddit.com. Vote it up if you like it!
>
> http://www.reddit.com/r/programming/comments/d7hbm/expressive_how_to_create_parsers_using_a_c_domain/
>
> This version has been largely rewritten. I'm saving much of the juicy
> Phoenix3 bits for later articles.

Hopefully we will have finished the refactoring and introspection stuff by then.


[proto] Difference between BOOST_PROTO_ASSERT_MATCHES(expr, grammar) and BOOST_MPL_ASSERT((proto::matches<expr, grammar>))

2010-08-18 Thread Thomas Heller
Hi all,

I just noticed that Eric wrote BOOST_MPL_ASSERT((proto::matches<Expr, Grammar>)) 
in one of his latest posts here on the list (see the thread about 
boost::parameters).

So I wonder: where is the fundamental difference between 
BOOST_PROTO_ASSERT_MATCHES and the BOOST_MPL_ASSERT expression?
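
To put the two side by side, a small sketch of mine 
(BOOST_PROTO_ASSERT_MATCHES lives in <boost/proto/debug.hpp>):

#include <boost/proto/proto.hpp>
#include <boost/proto/debug.hpp>
#include <boost/mpl/assert.hpp>

namespace proto = boost::proto;

struct G : proto::terminal<int> {};

int main()
{
    proto::terminal<int>::type i = {42};

    // takes an expression *object*
    BOOST_PROTO_ASSERT_MATCHES(i, G);

    // takes the expression *type*; note the double parentheses
    BOOST_MPL_ASSERT((proto::matches<proto::terminal<int>::type, G>));
}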

Cheers,

Thomas


Re: [proto] Funky exercie: subset of Boost::parameters using proto

2010-08-15 Thread Thomas Heller
On Sunday 15 August 2010 18:16:07 Mathieu - wrote:
> On 15 August 2010 17:46, Tim Moore  wrote:
> > Nice.  I've been meaning to start using Boost.Parameter in one of my
> > projects but I definitely like this syntax better.  I'll probably start
> > using this soon (like this week).
> > 
> > Please post if you make updates.
> > 
> > Cheers,
> > 
> > Tim
> 
> Note that actually it doesn't handle passing parameters by ref ;), and
> that not all of Boost.Parameter's features are implemented!

You can't expect everything from an afternoon of work ;)
Anyway, its purpose is just to provide named parameters for function 
arguments (yeah, ref is missing; if there is more interest, we will see what 
can be done).

> However, this is really great and fast w/r to compile times!

Haven't checked. But the only limit would be the size of fusion::map, which 
is 10 by default and should be enough!
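
And if the default ever becomes a problem, Fusion's limit can be raised; a 
sketch using the FUSION_MAX_MAP_SIZE configuration macro:

// must be defined before the first Fusion header is included
#define FUSION_MAX_MAP_SIZE 20
#include <boost/fusion/container/map.hpp>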


Re: [proto] Funky exercie: subset of Boost::parameters using proto

2010-08-15 Thread Thomas Heller
On Sunday 15 August 2010 16:51:03 joel falcou wrote:
> So, Thomas and I felt bored or some such this afternoon. Incidentally, I
> needed to use Boost::Parameters in NT² but found the compile times to be
> somewhat slow. So in a flash of defiance, we set about reimplementing a
> subset of parameters using proto. The syntax is rather different but the
> effect is rather nice.
> 
> The code is here:
> 
> http://gist.github.com/525562
> 
> Comments welcome

Just added a static assert for missing parameters when there was no default given:

http://gist.github.com/525569
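
In spirit, the added check boils down to something like this (a hypothetical 
sketch with made-up names, not the gist's actual code):

// fail loudly at compile time when a parameter without a default
// was not supplied; ParameterMap and Key are stand-ins
BOOST_MPL_ASSERT_MSG(
    (fusion::result_of::has_key<ParameterMap, Key>::value)
  , MISSING_REQUIRED_PARAMETER
  , (Key)
);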


Re: [proto] So I heard proto make AST ...

2010-08-11 Thread Thomas Heller
On Wednesday 11 August 2010 00:45:44 Eric Niebler wrote:
> On 8/10/2010 3:52 PM, Thomas Heller wrote:
> > On Tue, Aug 10, 2010 at 8:56 PM, Eric Niebler wrote:
> >> Good. Now if you are saying that Proto's existing transforms are
> >> too low-level and that things like pre- and post-order traversals
> >> should be first class Proto citizens ... no argument. Patch? :-)
> >
> > I think I am a little bit responsible for that whole discussion, as I
> > mentioned on IRC that proto transforms are hard for me.
>
> The #boost IRC channel? Perhaps I should spend time there.

You definitely should :)

> > So, why are they so hard? I am currently learning the principles of
> > compiler construction (lecture at university). We learn all those
> > fancy algorithms to optimize the hell out of our code. But these
> > algorithms all rely on bottom-up, top-down and what-not traversals of
> > your AST. proto transforms work a little bit differently from these
> > tree traversals, BUT they are very similar and probably, as you said,
> > a little more low level. Just my 2 cents ;)
>
> And a good 2 cents. I never actually took a compiler construction class.
> Oops! But as I showed earlier in this thread, pre-order traversal can be
> expressed in terms of fold. Post-order is much the same. But if I'm
> going to go around claiming Proto is a compiler-construction toolkit, it
> should have the algorithms that people would expect, not the ones I just
> made up. :-P

Yeah, I already did some traversals. But you have to think every time: Did I 
get it right? Did I miss some rule in my DSEL grammar?
IMHO, the important difference between a proto transform and a proto 
expression traversal is that the transform has to replicate your grammar, 
every time. A traversal simply does not care whether your grammar is right 
or not: you just want to go over all elements of your tree. The validity of 
your DSEL should have been checked one step before (think of a multi-pass 
compiler).
Joel Falcou showed a technique which, to some extent, is able to deal with 
the no-repetition part.

Let me give you an example of a traversal which simply doesn't care whether 
the given expression is in the language or not (I will present some 
transforms I have written for phoenix3).

First, in phoenix, we want to know: what is the arity of our expression? Or, 
sometimes a little weaker: do we have a nullary or an n-ary expression?

This is easily done with a proto transform, I agree. For the arity 
calculation we have this transform (notice something? We have a transform 
where a simple traversal would be sufficient):

struct arity
  : proto::or_<
        // a plain terminal is nullary
        proto::when<proto::terminal<proto::_>, mpl::int_<0>()>
        // the placeholder case: an argument's arity is its index + 1
      , proto::when<
            proto::function<
                proto::terminal<funcwrap<argument> >
              , proto::terminal<env>
              , proto::_
            >
          , mpl::next<proto::_value(proto::_child_c<2>)>()
        >
        // otherwise: the maximum of the children's arities
      , proto::otherwise<
            proto::fold<
                proto::_
              , mpl::int_<0>()
              , mpl::max<arity, proto::_state>()
            >
        >
    >
{};
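
A brief usage note (my sketch, with Expr standing in for a concrete phoenix 
expression type): a transform is a polymorphic function object, so the 
computed arity is simply its result type:

typedef boost::result_of<arity(Expr)>::type expr_arity; // an mpl::int_<N>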

Yet, very elegant as it is (imho), it is quite complex, and it is not easily 
spotted what this code is doing.

With some kind of post-order traversal and the appropriate visitor this 
might look simpler. I have to admit, though, I have no idea what such a 
traversal could look like.

Eric, in one of your posts you mentioned the potential of phoenix3. I think 
this potential can be greatly increased by having some kind of traversal 
helper functions, simply because I think people are more familiar with the 
concepts of traversals and visitors (or, FWIW, iterators) than with proto 
transforms. On the other hand, proto transforms already are a powerful tool, 
perhaps a little too powerful for some simple tasks. Perhaps we have to 
educate people towards a mind-shift in the direction of proto transforms.

> And if you think Proto's transforms are hard now, be glad you weren't
> using Proto v2 in 2006.



Re: [proto] for your review: intro to a series of blog posts about proto and phoenix v3

2010-08-11 Thread Thomas Heller
On Tuesday 10 August 2010 17:21:53 Eric Niebler wrote:
> On 8/10/2010 11:14 AM, Robert Jones wrote:
> > Well, as a complete novice to code of this sophistication I
> > understood that piece perfectly, as far as it goes. Naturally, as the
> > opening piece of a series it raises far more questions than it
> > answers.
>
> That's great feedback, thank you.

To follow up, I like it too!

> > It also scares me somewhat. This stuff could mark an absolute
> > explosion of complexity in the code your average jobbing programmer
> > is expected to get to grips with, and in my experience the technology
> > is already slipping from the grasp of most of us! When you get this
> > stuff wrong, what do the error messages look like? Boost.Bind &
> > Boost.Lambda errors are already enough to send most of us running for
> > the hills,
>
> A great point! (I've held back a whole rant about how long template
> error messages are library bugs and should be filed as such. That's a
> whole other blog post.) I sort of address this when I say that a good
> dsel toolkit would force dsel authors to rigorously define their dsels,
> leading to "better usage experiences". That's pretty vague, though. I
> could be more explicit. But certainly the intention here is that proto
> makes it easier for dsel authors to give their users more succinct
> error messages.

I think this will greatly change when we have static_assert support on the 
majority of compilers.

> > and tool support is somewhat lacking as far as I know,
> > being pretty much limited to STLFilt.
> >
> > Maybe I'm just too long in the tooth for this!
> >
> > Still, great piece, and I look forward to subsequent installments.
>
> Thanks,



Re: [proto] So I heard proto make AST ...

2010-08-10 Thread Thomas Heller
On Tue, Aug 10, 2010 at 8:56 PM, Eric Niebler  wrote:
> On 8/10/2010 2:52 PM, joel.fal...@lri.fr wrote:
>> Eric Niebler wrote:
>>> A pre-order traversal, pushing each visited node into an mpl vector? How
>>> about:
> 
>> I'm on a tiny mobile, but my idea was to have such algo as proto
>> transforms & grammar
>
> Good. Now if you are saying that Proto's existing transforms are too
> low-level and that things like pre- and post-order traversals should be
> first class Proto citizens ... no argument. Patch? :-)

I think I am a little bit responsible for that whole discussion, as I
mentioned on IRC that proto transforms are hard for me.
So, why are they so hard? I am currently learning the principles of
compiler construction (lecture at university). We learn all those
fancy algorithms to optimize the hell out of our code.
But these algorithms all rely on bottom-up, top-down and what-not
traversals of your AST.
proto transforms work a little bit differently from these tree
traversals, BUT they are very similar and probably, as you said, a
little more low level.
Just my 2 cents ;)


Re: [proto] for your review: intro to a series of blog posts about proto and phoenix v3

2010-08-10 Thread Thomas Heller
On Tue, Aug 10, 2010 at 6:56 PM, Eric Niebler  wrote:
> On 8/10/2010 12:03 PM, Thomas Heller wrote:
>> On Tue, Aug 10, 2010 at 5:21 PM, Eric Niebler  wrote:
>>> On 8/10/2010 11:14 AM, Robert Jones wrote:
>>>> When you get this
>>>> stuff wrong, what do the error messages look like? Boost.Bind &
>>>> Boost.Lambda errors are already enough to send most of us running for
>>>> the hills,
>>>
>>> (I've held back a whole rant about how long template
>>> error messages are library bugs and should be filed as such. That's a
>>> whole other blog post.)
>>
>> I think we see a great improvement with static_assert in C++0x!
>
> Undoubtedly true, but I can't confirm firsthand. I don't do any C++0x
> programming.
>
>> And we are potentially able to reduce error messages if SFINAE is
>> applied more often, with the disadvantage of losing information on
>> what failed.
>
> I disagree about SFINAE. I think it leads to horrible error messages
> like, "No function overload matched. Here are the signatures of the (5,
> 20, 100+) functions that failed to match (and I'm not going to tell you
> why)". I've had better luck with tag dispatching, with a catch-all
> handler that asserts with a "if-you-get-here-it-means-this" message.

You got me a little wrong here, I don't like the "no matching function
call bla" error message either. I just wanted to say that you have
the *possibility* to shorten your error messages.
I like the idea of tag dispatching; it never came to my mind to use it
for generating error messages ;)

Anyway, I think we will have a long way to go to get good error
messages in highly templated code.

> But all that will go in my rant. ;-)

Looking forward to it.


Re: [proto] for your review: intro to a series of blog posts about proto and phoenix v3

2010-08-10 Thread Thomas Heller
On Tue, Aug 10, 2010 at 5:21 PM, Eric Niebler  wrote:
> On 8/10/2010 11:14 AM, Robert Jones wrote:
>> Well, as a complete novice to code of this sophistication I
>> understood that piece perfectly, as far as it goes. Naturally, as the
>> opening piece of a series it raises far more questions than it
>> answers.
>
> That's great feedback, thank you.

Let me follow up. I like it too!

>> It also scares me somewhat. This stuff could mark an absolute
>> explosion of complexity in the code your average jobbing programmer
>> is expected to get to grips with, and in my experience the technology
>> is already slipping from the grasp of most of us! When you get this
>> stuff wrong, what do the error messages look like? Boost.Bind &
>> Boost.Lambda errors are already enough to send most of us running for
>> the hills,
>
> A great point! (I've held back a whole rant about how long template
> error messages are library bugs and should be filed as such. That's a
> whole other blog post.) I sort of address this when I say that a good
> dsel toolkit would force dsel authors to rigorously define their dsels,
> leading to "better usage experiences". That's pretty vague, though. I
> could be more explicit. But certainly the intention here is that proto
> makes it easier for dsel authors to give their users more succinct error
> messages.

I think we see a great improvement with static_assert in C++0x!
And we are potentially able to reduce error messages if SFINAE is
applied more often, with the disadvantage of losing information on
what failed.
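
For example (a sketch of mine; Expr and MyGrammar are stand-ins), the check 
can be stated up front:

// one readable line instead of pages of instantiation context
static_assert(proto::matches<Expr, MyGrammar>::value,
              "expression does not match MyGrammar");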

>> and tool support is somewhat lacking as far as I know,
>> being pretty much limited to STLFilt.
>>
>> Maybe I'm just too long in the tooth for this!
>>
>> Still, great piece, and I look forward to subsequent installments.
>
> Thanks,


Re: [proto] So I heard proto make AST ...

2010-07-27 Thread Thomas Heller
On Tuesday 27 July 2010 15:04:30 Alp Mestanogullari wrote:
> On Tue, Jul 27, 2010 at 3:01 PM, joel falcou  wrote:
> > Do people think such stuff (maybe in proto::tree:: or smthg?) would be
> > useful additions?
> 
> Definitely. We're dealing with a compile-time AST, but this is still
> an AST and we often have to apply transformations to ASTs. Thus,
> having higher order metafunctions in Proto just asking us for the
> transform to apply on each node or somesuch, would be useful
> additions!

Especially when thinking about phoenix3, people might find it easier to think 
of tree traversals instead of proto transforms, grammars and such.
That is at least the case for me.

