Re: [proto] So I heard proto make AST ...

2010-08-11 Thread Thomas Heller
On Wednesday 11 August 2010 00:45:44 Eric Niebler wrote:
 On 8/10/2010 3:52 PM, Thomas Heller wrote:
  On Tue, Aug 10, 2010 at 8:56 PM, Eric Niebler wrote:
   Good. Now if you are saying that Proto's existing transforms are
   too low-level and that things like pre- and post-order traversals
   should be first class Proto citizens ... no argument. Patch? :-)
  
  I think I am a little bit responsible for that whole discussion, as I
  mentioned on IRC that proto transforms are hard for me.
 
 The #boost IRC channel? Perhaps I should spend time there.

You definitely should :)


  So, why are they so hard? I am currently learning the principles of
  compiler construction (lecture at university). We learn all those fancy
  algorithms to optimize the hell out of our code. But these algorithms
  all rely on bottom-up, top-down and what-not traversals of your AST.
  Proto transforms work a little bit differently from these tree
  traversals. BUT they are very similar, and probably, as you said, a
  little more low level. Just my 2 cents ;)
 
 And a good 2 cents. I never actually took a compiler construction class.
 Oops! But as I showed earlier in this thread, pre-order traversal can be
 expressed in terms of fold. Post-order is much the same. But if I'm
 going to go around claiming Proto is a compiler-construction toolkit, it
 should have the algorithms that people would expect, not the ones I just
 made up. :-P

Yeah, I already did some traversals. But you have to think every time:
Did I make it right? Did I miss some rule in my DSEL grammar?
IMHO, the important difference between a proto transform and a proto
expression traversal is that the transform has to replicate your grammar,
every time. A traversal simply does not care whether your grammar is
right or not: you just want to go over all elements of your tree. The
validity of your DSEL should have been checked one step before (think of
a multi-pass compiler).
Joel Falcou showed a technique which, to some extent, is able to deal
with the no-repetition part.

Let me give you an example of a traversal which simply doesn't care
whether the given expression is in the language or not (I will present
some transforms I have written for phoenix3).

First, in phoenix, we want to know: What is the arity of our expression?
Or, sometimes a little weaker: do we have a nullary or an n-ary
expression?

This is easily done with a proto transform, I agree.
For arity calculation we have this transform (notice something? We have a
transform where a simple traversal would be sufficient):

struct arity
  : proto::or_<
        proto::when<proto::terminal<proto::_>, mpl::int_<0>()>
      , proto::when<
            proto::function<
                proto::terminal<funcwrap<argument> >
              , proto::terminal<env>
              , proto::_
            >
          , mpl::next<proto::_value(proto::_child_c<2>)>()
        >
      , proto::otherwise<
            proto::fold<
                proto::_
              , mpl::int_<0>()
              , mpl::max<arity, proto::_state>()
            >
        >
    >
{};

Yet, while very elegant (imho), it is quite complex, and what this code
is doing is not easily spotted.

With some kind of post-order traversal and an appropriate visitor this
might look simpler. I have to admit, though, I have no idea what such a
traversal could look like.

Eric, in one of your posts you mentioned the potential of phoenix3. I
think this potential can be increased considerably by having some kind of
traversal helper functions, simply because I think people are more
familiar with the concepts of traversals and visitors (or, FWIW,
iterators) than with proto transforms. On the other hand, proto
transforms already are a powerful tool, perhaps a little too powerful for
some simple tasks. Perhaps we have to educate people for a mind-shift in
the direction of proto transforms.


 And if you think Proto's transforms are hard now, be glad you weren't
 using Proto v2 in 2006. *shudder*

___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Funky exercise: subset of Boost.Parameter using proto

2010-08-15 Thread Thomas Heller
On Sunday 15 August 2010 18:16:07 Mathieu - wrote:
 On 15 August 2010 17:46, Tim Moore t...@montecito-software.com wrote:
  Nice.  I've been meaning to start using Boost.Parameter in one of my
  projects but I definitely like this syntax better.  I'll probably start
  using this soon (like this week).
  
  Please post if you make updates.
  
  Cheers,
  
  Tim
 
 Note that it doesn't actually handle passing parameters by ref ;). And
 not all of Boost.Parameter's features are implemented!

You can't expect everything from an afternoon of work ;)
Anyway, its purpose is just to have named parameters for function
arguments (yeah, ref is missing; if there is more interest, we will see
what can be done).

 However, this is really great and fast w/r to compile times!

Haven't checked. But the only limit would be the size of fusion::map,
which is 10 by default and should be enough!


Re: [proto] [Proto]: How to use complex grammars in a domain/extension

2010-09-22 Thread Thomas Heller



Eric Niebler wrote:

 On 9/21/2010 9:51 AM, Eric Niebler wrote:
 (proto lists members, see this thread for the rest of the discussion,
 which has been happening on -users:
 http://lists.boost.org/boost-users/2010/09/62747.php)
 
 On 9/21/2010 9:19 AM, Roland Bock wrote:
 On 09/21/2010 11:55 AM, Thomas Heller wrote:
 snip
 Solved the mystery. Here is the code; explanation comes afterward:

 snip
 So, everything works as expected!

 Thomas,

 wow, this was driving me crazy, thanks for solving and explaining in all
 the detail! :-)

 I am still not sure if this isn't a conceptual problem, though:

 The equation grammar is perfectly OK. But without or-ing it with the
 addition grammar, the extension does not allow ANY valid expression to
 be created. I wonder if that is a bug or a feature?
 
 It's a feature. Imagine this grammar:
 
  struct IntsOnlyPlease
    : proto::or_<
          proto::terminal<int>
        , proto::nary_expr<proto::_, proto::vararg<IntsOnlyPlease> >
      >
  {};
 
  And an integer terminal i in a domain that obeys this grammar. Now,
  what should this do:
  
  i + "hello!";
 
 You want it to fail because if it doesn't, you would create an
 expression that doesn't conform to the domain's grammar. Right? Proto
 accomplishes this by enforcing that all operands must conform to the
 domain's grammar *and* the resulting expression must also conform.
 Otherwise, the operator is disabled.
 
 Anything else would be fundamentally broken.
 
 This explanation is incomplete. Naturally, this operator+ would be
 disabled anyway because the resulting expression doesn't conform to the
 grammar regardless of whether the LHS and RHS conform. It's a question
 of *when* the operator gets disabled. For a full explanation, see this
 bug report:
 
 https://svn.boost.org/trac/boost/ticket/2407
 
 The answer is simple and logically consistent: make sure *every* valid
 expression in your domain (including lone terminals) is accounted for by
 your grammar.

Let me add something:

The more I thought about the problem, the more I thought that this would
be a perfect use case for sub-domains! So I tried to hack something
together:

#include <iostream>
#include <boost/proto/proto.hpp>

using namespace boost;

typedef proto::terminal<int>::type const terminal;

struct equation;

struct addition
  : proto::or_<
        proto::terminal<proto::_>
      , proto::plus<addition, addition>
    >
{};

struct equation
  : proto::or_<
        proto::equal_to<addition, addition>
    >
{};

template <class Expr>
struct extension;

struct my_domain
  : proto::domain<
        proto::pod_generator<extension>
      , proto::or_<equation, addition>
      , proto::default_domain
    >
{};

template <class Expr>
struct lhs_extension;

struct my_lhs_domain
  : proto::domain<
        proto::pod_generator<lhs_extension>
      , addition
      , my_domain
    >
{};

template <class Expr>
struct rhs_extension;

struct my_rhs_domain
  : proto::domain<
        proto::pod_generator<rhs_extension>
      , addition
      , my_domain
    >
{};


template <class Expr>
struct extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , extension<Expr>
      , my_domain
    )

    void test() const
    {}
};

template <class Expr>
struct lhs_extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , lhs_extension<Expr>
      , my_lhs_domain
    )
};

template <class Expr>
struct rhs_extension
{
    BOOST_PROTO_BASIC_EXTENDS(
        Expr
      , rhs_extension<Expr>
      , my_rhs_domain
    )
};

template <typename Grammar, typename Expr>
void matches(Expr const &expr)
{
    expr.test();

    std::cout << std::boolalpha
        << proto::matches<Expr, Grammar>::value << "\n";
}

int main()
{
    lhs_extension<terminal> const i = {};
    rhs_extension<terminal> const j = {};

    /*matches<equation>(i);            // false
    matches<equation>(j);              // false
    matches<equation>(i + i);          // false
    matches<equation>(j + j);          // false*/
    //matches<equation>(i + j);        // compile error
    //matches<equation>(j + i);        // compile error
    matches<equation>(i == j);         // true
    matches<equation>(i == j + j);     // true
    matches<equation>(i + i == j);     // true
    matches<equation>(i + i == j + j); // true
}

This seems to be exactly what Roland wanted to achieve in the first place.
However, it looks like this design just overcomplicates things, because
we have to specify the addition in our base domain anyway ...

Initially I was under the impression that this wasn't needed, but it
seems that proto cannot deduce something like:

lhs_expression OP rhs_expression results in expression

So the question is: is it reasonable to add features like that?
It seems valuable to me.




Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-05 Thread Thomas Heller
Eric Niebler wrote:

 On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller
 thom.heller-gM/ye1e23mwn+bqq9rb...@public.gmane.orgwrote:
snip

 
  I'll also point out that this solution is FAR more verbose than the
  original, which duplicated part of the grammar. I also played with such
  visitors, but every solution I came up with suffered from this same
  verbosity problem.

 Ok, the verbosity is a problem, agreed. I invented this because of
 phoenix, actually. As a use case I wrote a small prototype with a
 constant folder:

 
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp


 Neat! You're getting very good at Proto. :-)

Thanks :)
Let me comment on your request:

On Tuesday 05 October 2010 03:15:27 Eric Niebler wrote:
 I'm looking at this code now, and perhaps my IQ has dropped lately
 (possible), but I can't for the life of me figure out what I'm looking
 at. I'm missing the big picture. Can you describe the architecture of
 this design at a high level? Also, what the customization points are and
 how they are supposed to be used? I'm a bit lost. :-(

First, let me emphasize that I will try to explain this code:
http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp

Ok, I feared this would happen. Forgive me the low amount of comments in
the code, and let me start with my considerations for this new prototype.

During the last discussions it became clear that the current design
wasn't as good as it seemed to be; it suffered from some serious
limitations. The main limitation was that data and algorithm weren't
clearly separated, meaning that every phoenix expression intrinsically
carried its behavior, i.e. how to evaluate this expression.
This was the main motivation behind this new design; the other motivation
was to simplify certain other customization points.
One of the main requirements I set myself was that the major part of
phoenix3 which is already written and works should not be subject to too
much change.

Ok, first things first. After some input from Eric it became clear that
phoenix expressions might just be regular proto expressions, wrapped in
the phoenix actor and carrying a custom tag. This tag should mark it as
really being a custom phoenix expression, and the phoenix evaluation
scheme should be able to customize the evaluation based on these tags.
Let me remind you that most phoenix expressions can be handled by proto's
default transform, meaning that we want to reuse that wherever possible
and just tackle the phoenix-specific parts like argument placeholders,
control flow statements and such.
Sidenote: It also became clear that phoenix::value and phoenix::reference
can be mere proto terminals.

That said, just having plain evaluation of phoenix expressions seemed to
me a waste of what could become possible with the power of proto. I want
to do more with phoenix expressions. Let me remind you that phoenix is
"C++ in C++", and with that I want to be able to write some cool
algorithms transforming these proto expressions, introspect these proto
expressions, and actively influence the way these phoenix expressions get
evaluated/optimized/whatever. One application of these custom evaluations
that came to my mind was constant folding, so I implemented it on top of
my new prototype. The possibilities are endless: a proper design will
enable such things as multistage programming: imagine an evaluator which
does not compute the result, but translates a phoenix expression to a
string which can be compiled by an OpenCL/CUDA/shader compiler. Another
thing might be auto-parallelization of phoenix expressions (of course, we
are far away from that; we would need a proper graph library for that).
Nevertheless, these were some thoughts I had in mind.

This is the very big picture.

Let me continue to explain the customization points I have in this design:

First things first: it is very easy to add new expression types by
specifying:
   1) The new tag of the expression.
   2) How to create this new expression, and thus build up the
  expression template tree.
   3) How to hook onto the evaluation mechanism.
   4) Other evaluators which influence either your newly created
  tag-based expression or all the other already existing tags.

Let me guide you through this process in detail by explaining what has
been done for the placeholder extension to proto (I reference the line
numbers of my prototype).

1) Define the tag tag::argument: line 307
2) Specify how to create this expression: lines 309 to 315
   First, define a valid proto expression (line 309) through
   phoenix_expr, which had things like proto::plus and such as
   archetypes. What it does: it creates a valid proto grammar and
   transform which can be reused in proto grammars and transforms, just
   like proto::plus.
   Second, we create some constant expressions which are to be used as
   placeholders.
3) Hook onto

Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-08 Thread Thomas Heller
On Thursday 07 October 2010 23:06:24 Eric Niebler wrote:
 On 10/4/2010 1:55 PM, Eric Niebler wrote:
  The idea of being able to specify the transforms separately from the
  grammar is conceptually very appealing. The grammar is the control
  flow, the transform the action. Passing in the transforms to a grammar
  would be like passing a function object to a standard algorithm: a
  very reasonable thing to do. I don't think we've yet found the right
  formulation for it, though. Visitors and tag dispatching are too
  ugly/hard to use.
  
  I have some ideas. Let me think some.
 
 Really quickly, what I have been thinking of is something like this:
 
 template <class Transforms>
 struct MyGrammar
   : proto::or_<
         proto::when<rule1, typename Transforms::tran1>
       , proto::when<rule2, typename Transforms::tran2>
       , proto::when<rule3, typename Transforms::tran3>
     >
 {};

I don't think this is far away from what I proposed.
Consider the following:

template <typename>
struct my_grammar
  : proto::or_<
        rule1
      , rule2
      , rule3
    >
{};

template <typename> struct my_transform;

// corresponding to the tag of the expression of rule1
template <> struct my_transform<tag1>
  : // transform
{};

// corresponding to the tag of the expression of rule2
template <> struct my_transform<tag2>
  : // transform
{};

// corresponding to the tag of the expression of rule3
template <> struct my_transform<tag3>
  : // transform
{};

typedef proto::visitor<my_transform, my_grammar>
    algorithm_with_specific_transforms;

In my approach, both the transform and the grammar can be exchanged at
will.

What I am trying to say is: both the transforms and the control flow (aka
the grammar) intrinsically depend on the tag of the expressions, because
the tag is what makes different proto expressions distinguishable.
This inherent characteristic of a proto expression is what drove Joel
Falcou (I am just guessing here) and me (I know that for certain) to this
tag-based dispatching of transforms and grammars.

 
 That is, you parameterize the grammar on the transforms, just the way
 you parameterize a std algorithm by passing it a function object. Each
 grammar (I'm thinking of starting to call Proto grammars+transforms
 Proto algorithms, because really that's what they are) must document
 the concept that must be satisfied by its Transforms template parameter
 (what nested typedefs must be present).
 
 This is extremely simple and terse. It gives a simple way to extend
 behaviors (by deriving from an existing Transforms model and hiding some
 typedefs with your own).
 
 I know this is not general enough to meet the needs of Phoenix, and
 possibly not general enough for NT2, but I just thought I'd share the
 direction of my thinking on this problem.


Re: [proto] Thoughts on traversing proto expressions and reusing grammar

2010-10-20 Thread Thomas Heller
On Wednesday 20 October 2010 15:02:01 Eric Niebler wrote:
 On 10/14/2010 12:27 PM, Eric Niebler wrote:
 snip
 
  - A new set of actions can be created easily by delegating
  
to MyActions::action by default, and specializing only those
rules that need custom handling.
 
 The code I sent around actually falls short on this promise. Here's an
 updated design that corrects the problem. The key is realizing that the
 actions need to be parameterized by ... the actions. That way, they can
 be easily subclassed.
 
 That is, we now have this:
 
 // A collection of semantic actions, indexed by phoenix grammar rules
 struct PhoenixDefaultActions
 {
     template <typename Rule, typename Actions = PhoenixDefaultActions>
     struct action;
 };
 
 struct MyActions
 {
     // Inherit default behavior from PhoenixDefaultActions
     template <typename Rule, typename Actions = MyActions>
     struct action
       : PhoenixDefaultActions::action<Rule, Actions>
     {};
 
     // Specialize action for custom handling of certain
     // grammar rules here...
 };
 
 If you don't ever pass MyActions to PhoenixDefaultActions, then any
 specializations of MyActions::action will never be considered.

Good catch! I worked a little on trying to simplify that whole
grammar-with-rules thing a bit. Forgive me, but I changed the name to
Visitor. Why? Simply because I think this is what is done here: we visit
a specific node which happened to match our rule.

Here it goes:
namespace detail
{
    template <
        typename Grammar, typename Visitor, typename IsRule = void>
    struct algorithm_case
      : Grammar
    {};

    template <
        typename Rule, typename Visitor, int RulesSize = Rule::size>
    struct algorithm_case_rule;

    template <typename Rule, typename Visitor>
    struct algorithm_case_rule<Rule, Visitor, 1>
      : proto::when<
            typename Rule::rule0
          , typename Visitor::template action<typename Rule::rule0>
        >
    {};

    // add more ...

    template <typename Grammar, typename Visitor>
    struct algorithm_case<Grammar, Visitor, typename Grammar::is_rule>
      : algorithm_case_rule<Grammar, Visitor>
    {};

    // algorithm is what ...
    template <typename Cases, typename Visitor>
    struct algorithm
      : proto::switch_<algorithm<Cases, Visitor> >
    {
        template <typename Tag>
        struct case_
          : algorithm_case<
                typename Cases::template case_<Tag>
              , Visitor
            >
        {};
    };

    template <typename Grammar>
    struct rule
    {
        typedef void is_rule;

        static int const size = 1;
        typedef Grammar rule0;
    };

    template <
        typename Grammar0 = void
      , typename Grammar1 = void
      , typename Grammar2 = void
      , typename Dummy = void>
    struct rules;

    template <typename Grammar>
    struct rules<Grammar>
    {
        typedef void is_rule;

        static int const size = 1;
        typedef Grammar rule0;
    };

    // add more ...
}

Making your example:

// A collection of actions indexable by rules.
struct MyActions
{
    template <typename Rule>
    struct action;
};

// An easier way to dispatch to a tag-specific sub-grammar
template <typename Tag, typename Actions>
struct MyCases
  : proto::not_<proto::_>
{};

template <typename Actions>
struct MyCasesImpl
{
    template <typename Tag>
    struct case_
      : MyCases<Tag, Actions>
    {};
};

// Define an openly extensible grammar using switch_
template <typename Actions = MyActions>
struct MyGrammar
  : detail::algorithm<MyCasesImpl<Actions>, Actions>
{};

// Define a grammar rule for int terminals
struct IntTerminalRule
  : proto::terminal<int>
{};

// Define a grammar rule for char terminals
struct CharTerminalRule
  : proto::terminal<char>
{};

// OK, handle the terminals we allow:
template <typename Actions>
struct MyCases<proto::tag::terminal, Actions>
  : detail::rules<
        IntTerminalRule
      , CharTerminalRule
    >
{};

// Now, populate the MyActions metafunction class
// with the default actions:

template <>
struct MyActions::action<IntTerminalRule>
  : DoIntAction
{};

template <>
struct MyActions::action<CharTerminalRule>
  : DoCharAction
{};


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
 On 10/22/2010 12:33 AM, Thomas Heller wrote:
  On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
  On 10/21/2010 7:09 PM, Joel de Guzman wrote:
  Check out the doc I sent (Annex A). It's really, to my mind,
  generic languages -- abstraction of rules and templated grammars
  through metanotions and hyper-rules.
  
  Parameterized rules. Yes, I can understand that much. My
  understanding stops when I try to imagine how to build a parser
  that recognizes a grammar with parameterized rules.
  
  And I can't understand how expression templates relate to parsing.
 
 It doesn't in any practical sense, really. No parsing ever happens in
 Proto. The C++ compiler parses expressions for us and builds the tree.
 Proto grammars are patterns that match trees. (It is in this sense
 they're closer to schemata, not grammars that drive parsers.)
 
 They're called grammars in Proto not because they drive the parsing
 but because they describe the valid syntax for your embedded language.

Ok, this formulation makes it much clearer :)

  I have this strong feeling that that's the intent of Thomas and
  your recent designs. Essentially, making the phoenix language a
  metanotion in itself that can be extended post-hoc through
  generic means.
  
  I don't think that's what Thomas and I are doing. vW-grammars
  change the descriptive power of grammars. But we don't need more
  descriptive grammars. Thomas and I aren't changing the grammar of
  Phoenix at all. We're just plugging in different actions. The
  grammar is unchanged.
  
  Exactly.
  Though, I think this is the hard part to wrap the head around. We
  have a grammar, and this very same grammar is used to describe
  visitation.
 
 It's for the same reason that grammars are useful for validating
 expressions that they are also useful for driving tree traversals:
 pattern matching. There's no law that the /same/ grammar be used for
 validation and evaluation. In fact, that's often not the case.

True.
However, it seems convenient to me to reuse the grammar you wrote for
validating your language for the traversal of an expression matching that
grammar.
This is what we tried with this rule-based dispatching to semantic
actions.
I am currently thinking in another direction, that is, separating
traversal and grammar again, very much like proto contexts, but with this
rule dispatching and describing it with proto transforms ... the idea is
slowly materializing in my head ...


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 11:29:07 Joel de Guzman wrote:
 On 10/22/10 4:17 PM, Thomas Heller wrote:
  On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
  On 10/22/2010 12:33 AM, Thomas Heller wrote:
  On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
  On 10/21/2010 7:09 PM, Joel de Guzman wrote:
  Check out the doc I sent (Annex A). It's really, to my mind,
  generic languages -- abstraction of rules and templated grammars
  through metanotions and hyper-rules.
  
  Parameterized rules. Yes, I can understand that much. My
  understanding stops when I try to imagine how to build a parser
  that recognizes a grammar with parameterized rules.
  
  And I can't understand how expression templates relate to parsing.
  
  It doesn't in any practical sense, really. No parsing ever happens in
  Proto. The C++ compiler parses expressions for us and builds the tree.
  Proto grammars are patterns that match trees. (It is in this sense
  they're closer to schemata, not grammars that drive parsers.)
  
  They're called grammars in Proto not because they drive the parsing
  but because they describe the valid syntax for your embedded language.
  
  Ok, this formulation makes it much clearer :)
 
 It's just the metaphor! And what I am saying is that you will get into
 confusion land if you mix metaphors from different domains. Proto uses
 the parsing domain and it makes sense (*). It may (and I say may) be
 possible to extend that metaphor and in the end it may be possible
 to incorporate that into proto instead of phoenix (if it is indeed
 conceptually understandable and reusable) --an opportunity that may
 be missed if you shut the door and dismiss the idea prematurely.
 
 It is OK to switch metaphors and have a clean cut. But again,
 my point is: use only one metaphor. Don't mix and match ad-hoc.
 
 (* regardless if it doesn't do any parsing at all!)

Makes sense. Letting the idea of two-level grammars sink in ... I still
have problems adapting it to the parameterized semantic actions solution
we developed.

  I have this strong feeling that that's the intent of Thomas and
  your recent designs. Essentially, making the phoenix language a
  metanotion in itself that can be extended post-hoc through
  generic means.
  
  I don't think that's what Thomas and I are doing. vW-grammars
  change the descriptive power of grammars. But we don't need more
  descriptive grammars. Thomas and I aren't changing the grammar of
  Phoenix at all. We're just plugging in different actions. The
  grammar is unchanged.
  
  Exactly.
  Though, I think this is the hard part to wrap the head around. We
  have a grammar, and this very same grammar is used to describe
  visitation.
  
  It's for the same reason that grammars are useful for validating
  expressions that they are also useful for driving tree traversals:
  pattern matching. There's no law that the /same/ grammar be used for
  validation and evaluation. In fact, that's often not the case.
  
  True.
  However it seems convenient to me reusing the grammar you wrote for
  validating your language for the traversal of an expression matching
  that grammar.
  This is what we tried with this rule based dispatching to Semantic
  Actions. I am currently thinking in another direction, that is
  separating traversal and grammar again, very much like proto contexts,
  but with this rule dispatching and describing it with proto transforms
  ... the idea is slowly materializing in my head ...
 
 Again I should warn against mixing metaphors. IMO, that is the basic
 problem why it is so deceptively unclear. There's no clear model
 that conceptualizes all this, and thus no way to reason out on
 an abstract level. Not good.

Agree.


Re: [proto] Visitor Design Pattern

2010-10-22 Thread Thomas Heller
On Friday 22 October 2010 11:29:07 Joel de Guzman wrote:
 On 10/22/10 4:17 PM, Thomas Heller wrote:
  On Friday 22 October 2010 09:58:25 Eric Niebler wrote:
  On 10/22/2010 12:33 AM, Thomas Heller wrote:
  On Friday 22 October 2010 09:15:47 Eric Niebler wrote:
  On 10/21/2010 7:09 PM, Joel de Guzman wrote:
  Check out the doc I sent (Annex A). It's really, to my mind,
  generic languages -- abstraction of rules and templated grammars
  through metanotions and hyper-rules.
  
  Parameterized rules. Yes, I can understand that much. My
  understanding stops when I try to imagine how to build a parser
  that recognizes a grammar with parameterized rules.
  
  And I can't understand how expression templates relate to parsing.
  
  It doesn't in any practical sense, really. No parsing ever happens in
  Proto. The C++ compiler parses expressions for us and builds the tree.
  Proto grammars are patterns that match trees. (It is in this sense
  they're closer to schemata, not grammars that drive parsers.)
  
  They're called grammars in Proto not because they drive the parsing
  but because they describe the valid syntax for your embedded language.
  
  Ok, this formulation makes it much clearer :)
 
  It's just the metaphor! And what I am saying is that you will get into
 confusion land if you mix metaphors from different domains. Proto uses
 the parsing domain and it makes sense (*). It may (and I say may) be
 possible to extend that metaphor and in the end it may be possible
 to incorporate that into proto instead of phoenix (if it is indeed
 conceptually understandable and reusable) --an opportunity that may
 be missed if you shut the door and dismiss the idea prematurely.
 
 It is OK to switch metaphors and have a clean cut. But again,
 my point is: use only one metaphor. Don't mix and match ad-hoc.
 
 (* regardless if it doesn't do any parsing at all!)
 
  I have this strong feeling that that's the intent of Thomas and
  your recent designs. Essentially, making the phoenix language a
  metanotion in itself that can be extended post-hoc through
  generic means.
  
  I don't think that's what Thomas and I are doing. vW-grammars
  change the descriptive power of grammars. But we don't need more
  descriptive grammars. Thomas and I aren't changing the grammar of
  Phoenix at all. We're just plugging in different actions. The
  grammar is unchanged.
  
  Exactly.
  Though, I think this is the hard part to wrap the head around. We
  have a grammar, and this very same grammar is used to describe
  visitation.
  
  It's for the same reason that grammars are useful for validating
  expressions that they are also useful for driving tree traversals:
  pattern matching. There's no law that the /same/ grammar be used for
  validation and evaluation. In fact, that's often not the case.
  
  True.
  However it seems convenient to me reusing the grammar you wrote for
  validating your language for the traversal of an expression matching
  that grammar.
  This is what we tried with this rule based dispatching to Semantic
  Actions. I am currently thinking in another direction, that is
  separating traversal and grammar again, very much like proto contexts,
  but with this rule dispatching and describing it with proto transforms
  ... the idea is slowly materializing in my head ...
 
 Again I should warn against mixing metaphors. IMO, that is the basic
 problem why it is so deceptively unclear. There's no clear model
 that conceptualizes all this, and thus no way to reason out on
 an abstract level. Not good.

Alright, I think mixing metaphors is indeed a very bad idea.
IMHO, it is best to stay in the "grammar with semantic actions" domain,
as it always(?) has been.
I racked my brain today and developed a solution which stays within these
very same proto semantics and reuses keywords (more on that) already in
proto.
Attached, you will find the implementation of this very idea.

So, semantic actions are kind of simple right now. They work by having
   proto::when<some_grammar_rule, some_transform>
They ultimately bind this specific transform to that specific grammar rule.

The solution attached follows the same principles. However, grammars and 
transforms can now be decoupled. Transforms are looked up by the rules that 
define the grammar. In order to transform an expression by this new form of 
semantic actions, a special type of transform has to be used, which I call 
traverse. This transform is parametrized by the Grammar, which holds the 
rules, and the Actions, which hold the actions.
The action lookup is done by the following rules (the code might differ from 
the description; this is considered a bug in the code):
   1) Peel off grammar constructs like or_, and_ and switch_.
   2) Look into the innermost layer of the grammar and see if the supplied 
actions implement an action for that rule. If this rule also matches the 
current expression, that action is returned.
   3) If no action was found, just return the expression itself.

Re: [proto] Visitor Design Pattern

2010-10-23 Thread Thomas Heller
On Saturday 23 October 2010 07:13:56 e...@boostpro.com wrote:
 Actually, I may have been hasty. I've had a hard time porting my
 mini-Phoenix to proto::algorithm. Thomas, can you try, either with your
 version or mine? proto::switch_ doesn't play nicely with it.

Yes, you are right ... there is a pitfall to that ...
Having the cases look like:

struct cases::case_<Tag> : some_rule {};

There is no way the grammar can pick up some_rule to index the action.
Possible workarounds I can think of:

struct cases::case_<Tag> : proto::or_<some_rule> {};

or something like this in the actions:

struct actions::when<some_rule>::proto_grammar : do_it {};

I will get to adapting the mini phoenix today ...

 \e
 
 Sent via tiny mobile device
 
 -Original Message-
 From: Joel de Guzman j...@boost-consulting.com
 Sender: proto-boun...@lists.boost.org
 Date: Sat, 23 Oct 2010 09:29:27
 To: proto@lists.boost.org
 Reply-To: Discussions about Boost.Proto and DSEL design
   proto@lists.boost.org
 Subject: Re: [proto] Visitor Design Pattern
 
 On 10/23/2010 5:36 AM, Eric Niebler wrote:
  On 10/22/2010 10:45 AM, Eric Niebler wrote:
  On 10/22/2010 10:01 AM, Thomas Heller wrote:
   I think this is the simplification of client proto code we were searching
   for. It probably needs some minor polishing, though.
  
  snip
  
  Hi Thomas, this looks promising. I'm digging into this now.
  
  This is so wonderful, I can't /not/ put this in Proto. I just made a
  small change to proto::matches to VASTLY simplify the implementation
  (sync up). I also made a few changes:
  
  - The action CRTP base is no more. You can't legally specialize a
  member template in a derived class anyway.
  
  - Proto::or_, proto::switch_ and proto::if_ are the only branching
  grammar constructs and need special handling to find the sub-rule that
  matched. All the nasty logic for that is now mixed in with the
  implementation of proto::matches (where the information was already
  available but not exposed).
  
  - You have the option (but not the obligation) to select a rule's
  default transform when defining the primary when template in your
  actions class. (You can also defer to another action class's when
  template for inheriting actions, but Thomas' code already had that
  capability.)
  
  - I changed the name from _traverse to algorithm to reflect its
  role as a generic way to build algorithms by binding actions to
  control flow as specified by grammar rules. I also want to avoid any
  possible future conflict with Dan Marsden's Traverse library, which I
  hope to reuse in Proto. That said ... the name algorithm sucks and
  I'd like to do better. Naming is hard.
  
  - The algorithm class is also a grammar (in addition to a transform)
  that matches its Grammar template parameter. When you pass an
  expression that does not match the grammar, it is now a precondition
  violation. That is consistent with the rest of Proto.
  
  That's it. It's simple, elegant, powerful, and orthogonal to and fits
  in well with the rest of Proto. I think we have a winner. Good job!
 
 Sweet! It's so deliciously good!
 
  I too don't quite like algorithm. How about just simply action?
  
 std::cout << action<char_terminal, my_actions>()(a) << "\n"; // printing char
  
  or maybe on:
  
 std::cout << on<char_terminal, my_actions>()(a) << "\n"; // printing char
 
 Regards,
___
proto mailing list
proto@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/proto


Re: [proto] Visitor Design Pattern

2010-10-24 Thread Thomas Heller
On Sun, Oct 24, 2010 at 9:59 AM, Joel de Guzman
j...@boost-consulting.com wrote:
 On 10/24/2010 1:16 PM, Eric Niebler wrote:

 Now, what to call the traversal/algorithm/action/on thingy. None of those
 feel right. Maybe if I describe in words what it does, someone can come
 up with a good name. Given a Proto grammar that has been built with
 named rules, and a set of actions that can be indexed with those rules,
 it creates a Proto algorithm. The traversal is fixed, the actions can
 float. It's called <insert good name here>.

 Actor

  std::cout << actor<char_terminal, my_actions>()(a) << "\n"; // printing char

 :-)

Cool, then we'd need some new name for phoenix, hmn?
I like:
 apply
 actor
 algorithm
 traverse


Re: [proto] : Proto transform with state

2010-11-18 Thread Thomas Heller
Eric Niebler e...@... writes:
 
 On 11/17/2010 2:18 PM, joel falcou wrote:
  On 17/11/10 19:46, Eric Niebler wrote:
  See the attached code. I wish I had a better answer. It sure would be
  nice to generalize this for other times when new state needs to bubble
  up and back down.
  
  Just chiming in. We had the exact same problem in quaff, where we needed to
  carry a process ID over the transform of a parallel statement. If it can
  make you worry less, Eric: we ended up with the exact same workaround.
 
 There's another issue. Look here:
 
   // don't evaluate T at runtime, but default-construct
   // an object of T's result type.
   template<typename T>
   struct type_of
     : proto::make<proto::call<T> >
   {};
 
   struct RenumberFun
     : proto::fold<
           _
         , make_pair(fusion::vector0(), proto::_state)
         , make_pair(
               push_back(
                   first(proto::_state)
                 , first(Renumber(_, second(proto::_state)))  //--1
               )
             , type_of<second(Renumber(_, second(proto::_state)))>  //---2
           )
       >
   {};
 
 Notice that the Renumber algorithm needs to be invoked twice with the
 same arguments. In this case, we can avoid the runtime overhead of the
 second invocation by just using the type information, but that's not
 always going to be the case. There doesn't seem to be a way around it,
 either.
 
 I think Proto transforms need a let statement for storing intermediate
 results. Maybe something like this:
 
   struct RenumberFun
     : proto::fold<
           _
         , make_pair(fusion::vector0(), proto::_state)
         , let<
               _a( Renumber(_, second(proto::_state)) )
             , make_pair(
                   push_back(
                       first(proto::_state)
                     , first(_a)
                   )
                 , type_of<second(_a)>
               )
           >
       >
   {};
 
 I haven't a clue how this would be implemented.
 
 It's fun to think about this stuff, but I wish it actually paid the bills.

Ok ... I implemented let!

Here goes the renumbering example:
http://codepad.org/K0TZamPb

The change is in line 296 rendering RenumberFun to:
struct RenumberFun
  : proto::fold<
        _
      , make_pair(fusion::vector<>(), proto::_state)
      , let<
            _a(Renumber(_, second(proto::_state)))
          , make_pair(
                push_back(
                    first(proto::_state)
                  , first(_a)
                )
              , type_of<second(_a)>
            )
        >
    >
{};

The implementation of let actually was quite easy ... here is how it works:

let<Locals, Transform> is a transform taking definitions of local variables 
and the transform these locals will get applied to.
A local definition is of the form: LocalName(LocalTransform)
If the specified Transform has LocalName embedded, it will get replaced by 
LocalTransform.
I also implemented the definition of more than one local ... this is done by 
reusing proto::and_:

let<proto::and_<LocalName0(LocalTransform0), ..., LocalNameN(LocalTransformN)>, 
    Transform>

The replacement is done from the end to the beginning, making it possible to 
refer in LocalTransformN to LocalName(N-1); this gets replaced automatically!

Hope that helps!

Thomas





Re: [proto] : Proto transform with state

2010-12-06 Thread Thomas Heller
Eric Niebler wrote:

 On 11/18/2010 3:31 PM, Eric Niebler wrote:
 On 11/18/2010 1:45 PM, Thomas Heller wrote:
 Eric Niebler e...@... writes:
 It's REALLY hard. The let context needs to be bundled with the Expr,
 State, or Data parameters somehow, but in a way that's transparent. I
 don't actually know if it's possible.

 Very hard ... yeah. I am thinking that we can maybe save these variables
 in the transform?
 
 I'm thinking we just stuff it into the Data parameter. We have a
 let_scope template that is effectively a pair containing:
 
 1. The user's original Data, and
 2. A Fusion map from local variables (_a) to values.
 
 The let transform evaluates the bindings and stores the result in the
 let_scope's Fusion map alongside the user's Data. We pass the let_scope
 as the new Data parameter. _a is itself a transform that looks up the
 value in Data's Fusion map. The proto::_data transform is changed to be
 aware of let_scope and return only the original user's Data. This can
 work. We also need to be sure not to break the new
 proto::external_transform.
 
 The problems with this approach as I see it:
 
 1. It's not completely transparent. Custom primitive transforms will see
 that the Data parameter has been monkeyed with.
 
 2. Local variables like _a are not lexically scoped. They are, in fact,
 dynamically scoped. That is, you can access _a outside of a let
 clause, as long as you've been called from within a let clause.
 
 Might be worth it. But as there's no pressing need, I'm content to let
 this simmer. Maybe we can think of something better.
 
 I played with the let transform idea over the weekend. It *may* be
 possible to accomplish without the two problems I described above. See
 the attached let transform (needs latest Proto trunk). I'm also
 attaching the Renumber example, reworked to use let.
 
 This code is NOT ready for prime time. I'm not convinced it behaves
 sensibly in all cases. I'm only posting it as a curiosity. You're insane
 if you use this in production code. Etc, etc.

Without having looked at it too much ... this looks a lot like the 
environment in phoenix. Maybe this helps in cleaning it out a bit.


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

 On 12/6/2010 4:50 PM, Thomas Heller wrote:
 Eric Niebler wrote:
 I played with the let transform idea over the weekend. It *may* be
 possible to accomplish without the two problems I described above. See
 the attached let transform (needs latest Proto trunk). I'm also
 attaching the Renumber example, reworked to use let.
 snip
 
 Without having looked at it too much ... this looks a lot like the
 environment in phoenix. Maybe this helps in cleaning it out a bit.
 
 I tend to doubt it would help clean up the implementation of Phoenix
 environments. These features exist on different meta-levels: one
 (proto::let) is a feature for compiler-construction (Proto), the other
 (phoenix::let) is a language feature (Phoenix). The have roughly the
 same purpose within their purview, but as their purviews are separated
 by one great, big Meta, it's not clear that they have anything to do
 with each other.

*D'oh*, misunderstanding here. I didn't mean to clean up the phoenix scope 
expressions with the help of proto::let. I was thinking that maybe proto::let 
can borrow something from phoenix scopes on a conceptual level. 


Re: [proto] : Proto transform with state

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

 On 12/7/2010 3:13 PM, Thomas Heller wrote:
 Eric Niebler wrote:
 Now they do: T()(e,s,d). Inside T::impl, D had better be the type of d.
 Nowhere does the _data transform appear in this code, so changing _data
 to be smart about environments and scopes won't save you if you've
 monkeyed with the data parameter.
 
 Very true. Something like proto::transform_env_impl could help. Introduce
 a new type of primitive transform which is aware of this environment. The
 usual transform_impl can still be used.
 By calling T()(e,s,d) you just create a 2-tuple. The first parameter is
 the state, second data.
 Just thinking out loud here ...
 
 So transform_impl strips the data parameter of the let scope stuff, and
 the local variables like _a don't use transform_impl and see the scope
 with the locals?
 
  Well, it's not that simple. Consider:
  
    make< int(_a) >
 
 If make::impl uses transform_impl, it strips the let scope before _a
 gets to see the local variables. If this is to work, most of Proto's
 transforms must be special and pass the let scope through. That means
 proto::call, proto::make, proto::lazy and proto::when, at least. But ...
 it just might work. Good thinking!
 
 time passes...
 
 No, wait a minute. Look again:
 
  struct T : transform<T> {
  
    template<class E, class S, class D>
    struct impl : transform_impl<E, S, D> {
  
        // do something with D
  
    };
  
  };
 
  T::impl gets passed D *before* transform_impl has a chance to fix it up.
 Any existing transform that actually uses D and assumes it's what they
 passed in is going to be totally hosed. In order to make existing
 transforms let-friendly, they would all need to be changed. That's no
 good.
 
 Bummer, I was excited for a minute there. :-/

Yes ... this would basically mean a complete rewrite ... no good.
-- Proto V5, you might save the thought for the C++0x rewrite :)


Re: [proto] grammars, domains and subdomains

2010-12-07 Thread Thomas Heller
Eric Niebler wrote:

 On 12/7/2010 2:37 PM, Thomas Heller wrote:
 Hi,
 
 I have been trying to extend a domain by subdomaining it. The sole
 purpose of this subdomain was to allow another type of terminal
 expression.
 
 Please see the attached code, which is a very simplified version of what
 I was trying to do.
 snip
 
 So, How to handle that correctly?
 
 Yup, that's a problem. I don't have an answer for you at the moment,
 sorry.

This is a real bummer ... I need it for the phoenix local variables :(
I might be able to work something out ... The idea is the following:

template <typename Expr>
struct grammar_of
{
    typedef typename proto::domain_of<Expr>::type domain_type;
    typedef typename domain_type::proto_grammar type;
};

struct grammar
  : proto::or_<
        proto::plus<
            grammar_of<_child_c<0> >()
          , grammar_of<_child_c<1> >()
        >
    >
{};


This looks just insane though ... But looks like what I want ... need to 
test this properly ...


Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

 On 12/7/2010 2:37 PM, Thomas Heller wrote:
 Hi,
 
 I have been trying to extend a domain by subdomaining it. The sole
 purpose of this subdomain was to allow another type of terminal
 expression.
 
 Please see the attached code, which is a very simplified version of what
 I was trying to do.
 snip
 
 So, How to handle that correctly?
 
 Yup, that's a problem. I don't have an answer for you at the moment,
 sorry.

I think I solved the problem. The testcase for this solution is attached.
Let me restate what I wanted to accomplish:

Given is a domain, which serves as a common superdomain for certain different
subdomains.
The grammar associated with that domain will serve as the base grammar for
that EDSL.
There might be use cases where certain proto expressions should be allowed
in a sub-domain, additionally allowing that expression to be mixed with
the already defined grammar rules of the super-domain.
So, in order to achieve that, the grammar of our common super-domain needs
to be parametrized on a Grammar type. This allows us to reuse that
grammar and extend it with other rules.
The implementation looks like this:

struct super_grammar
  : proto::switch_<super_grammar>
{
    template <typename Tag, typename Grammar = super_grammar>
    struct case_;
};

With the different case_ specializations on Tag we can define what valid
expressions are, with respect to the Grammar type. This defaults to
super_grammar.

To extend that grammar by allowing additional expressions, but without
changing what expressions super_grammar matches, we can do the following:

struct sub_grammar
  : proto::switch_<sub_grammar>
{
    template <typename Tag, typename Grammar = sub_grammar>
    struct case_ : super_grammar::case_<Tag, sub_grammar>
    {};
};

So far so good. With this technique, every expression which was valid in
super_grammar is now valid in sub_grammar, with the addition of the
extensions.

This might be what people refer to as type-2 grammars.

Now, super_grammar belongs to super_domain and sub_grammar to sub_domain,
which is a sub domain of super_domain.
At the end of the day, I want to mix expressions from super_domain with
expressions from sub_domain.
The default operator overloads are not suitable for this, because the deduced
domain of super_domain and sub_domain is super_domain.
This makes expressions of the form t1 OP t2 invalid (where t1 is from
super_domain and t2 from sub_domain), because t1 OP t2 is not a valid
expression in super_domain. However, it is in sub_domain.
In this case we do not want the deduced domain, but the most specialized
domain, or "strongest" domain as I tagged it.

I hope that makes sense.

Regards,

Thomas
#include <boost/proto/proto.hpp>

namespace proto = boost::proto;
namespace mpl = boost::mpl;

typedef proto::terminal<int> int_terminal;
typedef proto::terminal<double> double_terminal;


template <typename Expr>
struct actor1;

struct grammar1;

// define our base domain
struct domain1
  : proto::domain<
        proto::pod_generator<actor1>
      , grammar1
      , proto::default_domain
    >
{};

// define the grammar for that domain
struct grammar1
  : proto::switch_<grammar1>
{
    // The actual grammar is parametrized on a Grammar parameter as well.
    // This allows us to effectively reuse that grammar in subdomains.
    template <typename Tag, typename Grammar = grammar1>
    struct case_
      : proto::or_<
            proto::plus<Grammar, Grammar>
          , int_terminal
        >
    {};
};

// boring expression wrapper
template <typename Expr>
struct actor1
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor1<Expr>, domain1)
};



template <typename Expr>
struct actor2;

struct grammar2;

// our domain2 ... this is a subdomain of domain1
struct domain2
  : proto::domain<
        proto::pod_generator<actor2>
      , grammar2
      , domain1
    >
{};

// the grammar2
struct grammar2
  : proto::switch_<grammar2>
{
    // again parametrized on a Grammar, defaulting to grammar2.
    // This is not really needed here, but allows to reuse that grammar as well.
    template <typename Tag, typename Grammar = grammar2>
    struct case_
      : proto::or_<
            // here we go ... reuse grammar1::case_ with our new grammar
            grammar1::case_<Tag, Grammar>
          , double_terminal
        >
    {};
};

// boring expression wrapper
template <typename Expr>
struct actor2
{
    BOOST_PROTO_BASIC_EXTENDS(Expr, actor2<Expr>, domain2)
};


actor1<int_terminal::type> t1 = {{42}};
actor2<double_terminal::type> t2 = {{3.14}};



// specialize is_extension trait

Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

 On 12/8/2010 5:30 AM, Thomas Heller wrote:
 Eric Niebler wrote:
 On 12/7/2010 2:37 PM, Thomas Heller wrote:
 So, How to handle that correctly?

 Yup, that's a problem. I don't have an answer for you at the moment,
 sorry.
 
 I think i solved the problem. The testcase for this solution is attached.
 Let me restate what I wanted to accomplish:
 snip
 
 Thomas,
 
 A million thanks for following through. The holidays and my day job are
 taking their toll, and I just don't have the time to dig into this right
 now. It's on my radar, though. I'm glad you have a work-around, but it
 really shouldn't require such Herculean efforts to do this. There are 2
 bugs in Proto:
 
 1) Proto operator overloads are too fragile in the presence of
 subdomains. Your line (5) should just work. It seems like a problem that
 Proto is conflating grammars with subdomain relationships the way it is,
 but I really need to sit down and think it through.
 
 One possible solution is an implicit modification of the grammar used to
 check expressions when the children are in different domains. For
 instance, in the expression A+B, the grammar used to check the
 expression currently is: common_domain<A, B>::type::proto_grammar.
 Instead, it should be:
 
    or_<
        typename common_domain<A, B>::type::proto_grammar
      , if_<
            is_subdomain_of<
                typename common_domain<A, B>::type
              , domain_of<_>
            >()
        >
    >


 
 That is, any expression in a subdomain of the common domain is by
 definition a valid expression in the common domain. is_subdomain_of
 doesn't exist yet, but it's trivial to implement. However ...
 
  2) Using domain_of<_> in proto::if_ doesn't work because grammar
 checking is currently done after stripping expressions of all their
 domain-specific wrappers. That loses information about what domain an
 expression is in. Fixing this requires some intensive surgery on how
 Proto does pattern matching, but I foresee no inherent obstacles.
 
 It'd be a big help if you could file these two bugs.

I will try to present a patch. I urgently need this feature to become 
officially supported, to use it for phoenix3 (a scope opened by let or 
lambda should be its own sub-domain in order to allow local variables; 
these shall not be allowed in a regular expression, and they should be 
combinable with other expressions).
Looking at the bug tracker, I find two bugs which are directly related to 
this:
https://svn.boost.org/trac/boost/ticket/4675
and
https://svn.boost.org/trac/boost/ticket/4668


Re: [proto] grammars, domains and subdomains

2010-12-08 Thread Thomas Heller
Eric Niebler wrote:

 On 12/8/2010 4:44 PM, Thomas Heller wrote:
 I will try to present a patch. I urgently need this feature to be
 become officially supported to use it for phoenix3 (a scope opened by let
 or lambda should be it's own sub domains in order to allow local
 variables, theses shall not be allowed in a regular expression, and
 they should be combinable with other expressions).

Basically, yes.
The other point is to detect if an expression is 
nullary. With local variables we have a problem: they are not nullary, but 
they need a proper environment in order to be evaluated. (Evaluation also 
means result type calculation; the problem here is operator()(), as it gets 
instantiated every time, and I can't calculate a result type for something 
like _1 or _a.) I only do the evaluation if the expression:
  1) Matches the grammar (this is without extension)
  2) Is nullary (no placeholders in the expression)

So, the sub-domain with sub-grammar part is the easiest and cleanest 
solution; I actually looked it up in your old proto prototype.

 
 IIUC, this is to do proper error checking and detect scoping problems
 early. But if you don't have this feature, you can still make properly
 scoped expressions work, right? Can you move ahead with a looser grammar
 and tighten it up later?

I don't really know how to do it otherwise with the current design.
There is really only this part of the puzzle missing. If it is done, we have 
a working and clean Phoenix V3.
For the time being, I can live with the workaround I did.
However, I will focus my efforts of the next days on working out a patch for 
this to work properly.


Re: [proto] grammars, domains and subdomains

2010-12-09 Thread Thomas Heller
Eric Niebler wrote:

 On 12/8/2010 5:30 PM, Thomas Heller wrote:
  I don't really know how to do it otherwise with the current design.
 There is really only this part of the puzzle missing. If it is done, we
 have a working and clean Phoenix V3.
 For the time being, I can live with the workaround I did.
 However, I will focus my efforts of the next days on working out a patch
 for this to work properly.
 
 I made a simple fix to proto::matches on trunk that should get you
 moving. The overloads can still be smarter about sub-domains, but for
 now you can easily work around that by explicitly allowing expressions
 in sub-domains in your super-domain's grammar. See the attached solution
 to your original problem.

I am afraid that this does not really solve the problem.

I think there is a misunderstanding in how sub-domaining works.
The solution you propose is that a sub domain extends its super domain in 
the way that the expressions in the sub domain also become valid in the 
super domain. I guess this is the way it should work.
However, the solution I am looking for is different.
The sub-domain I tried to define should also extend its super domain, BUT 
expressions valid in this sub-domain should not be valid in the super 
domain, only in the sub-domain itself.

I think proto should support both forms of operation. The first one can be 
easily (more or less) achieved by simply changing proto::matches in the way 
you demonstrated earlier, I think. I am not sure how to do the other stuff 
properly, though.


Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Thomas Heller


Eric Niebler wrote:

 On 12/8/2010 5:30 PM, Thomas Heller wrote:
  I don't really know how to do it otherwise with the current design.
 There is really only this part of the puzzle missing. If it is done, we
 have a working and clean Phoenix V3.
 For the time being, I can live with the workaround I did.
 However, I will focus my efforts of the next days on working out a patch
 for this to work properly.
 
 I made a simple fix to proto::matches on trunk that should get you
 moving. The overloads can still be smarter about sub-domains, but for
 now you can easily work around that by explicitly allowing expressions
 in sub-domains in your super-domain's grammar. See the attached solution
 to your original problem.

It solves the problem in that it succeeds to compile.
However, there are three problems with that solution:
  1) t2 is matched by grammar1
  2) I have to add the plus rule in grammar2 (this could be solved with the 
 grammar parametrisation from my earlier post)
  3) The expression in a subdomain is matched in grammar1 on the pure fact 
 that it is a subdomain of domain1; it should be matched against the 
 subdomain's grammar as well.

Right now, I am questioning the whole deduce_domain part and the
selection of the resulting domain in proto's operator overloads.
Here is what I think should happen (without loss of generality, I
restrict myself to binary expressions):

If the two operands are in the same domain, the situation is clear:
The operands need to match the grammar belonging to the domain, and the
result has to as well.

If one of the operands is in a different domain, the situation gets
complicated. IMHO, the domain of the resulting expression should be
selected differently.
Given is a domain (domain1) which has a certain grammar (grammar1) and a
sub-domain (domain2) with another grammar (grammar2).
When combining two expressions from these domains with a binary op, the
resulting expression should be in domain2.
Why? Because when writing grammar1, there is no way to account for the
expressions which should be valid in grammar2. With the current
deduce_domain, this is always bound to fail. Additionally, it makes no
sense conceptually that an expression containing t1 and t2 be in domain1.
When the domains are not compatible (meaning they have no domain/sub-domain
relationship), the resulting domain should be common_domain.

These considerations are based on the assumption that an expression in a
sub-domain should not be matched by the grammar of the super domain.
This makes sense, given the context of the local variables in phoenix.
Remember, local variables shall only be valid when embedded in a let or
lambda expression.
Maybe the sub-domain idea is not suited at all for that task.

OK ... thinking along ... The stuff which is already in place, and your
suggested fix, make sense when seeing sub-domains really as extensions of
the super domain, and of its grammar ...

Thoughts?





Re: [proto] grammars, domains and subdomains

2010-12-10 Thread Thomas Heller
Eric Niebler wrote:

 On 12/10/2010 3:23 AM, Thomas Heller wrote:
snip
 However, the solution I am looking for is different.
 The sub-domain i tried to define should also extend its super domain, BUT
 expressions valid in this sub-domain should not be valid in the super
 domain, only in the sub-domain itself.
 
 Because you don't want phoenix::_a to be a valid Phoenix expression
 outside of a phoenix::let, right?

Correct.
 
 I think proto should support both forms of operation. The first one can
 be easily (more or less) achieved by simply changing proto::matches in
 the way you demonstrated earlier, I think. I am not sure, how to do the
 other stuff properly though.
 
 OK, let's back up. Let's assume for the moment that I don't have time to
 do intensive surgery on Proto sub-domains for Phoenix (true). How can we
 get you what you need?

Good question :)
I can maybe help here once the port is finished once and for all. Let's 
prioritize. Too bad your day job does not involve proto :(

 My understanding of your needs: you want a way to define the Phoenix
 grammar such that (a) it's extensible, and (b) it guarantees that local
 variables are properly scoped. You have been using proto::switch_ for
 (a) and sub-domains for (b), but sub-domains don't get you all the way
 there. Have I summarized correctly?

Absolutely!
 
 My recommendation at this point is to give up on (b). Don't enforce
 scoping in the grammar at this point. You can do the scope checking
 later, in the evaluator of local variables. If a local is not in scope
 by then, you'll get a horrific error. You can improve the error later
 once we decide what the proper solution looks like.

I tried that and failed miserably ... There might be another workaround; I 
already have something in mind I missed earlier.

 If I have mischaracterized what you are trying to do, please clarify.

Nope, I guess this is the way it should be. Please disregard my other mail 
in this thread ;)


[proto] fix for gcc-4.2 ICE

2010-12-23 Thread Thomas Heller
Hi,

I recently ran into an ICE when compiling phoenix3 with gcc-4.2. It seems this 
particular version cannot handle proto::detail::poly_function_traits properly.
The problem is the default Switch template parameter ... by replacing 
std::size_t Switch = sizeof(test_poly_function<Fun>(0,0)) with typename Switch 
= mpl::size_t<sizeof(test_poly_function<Fun>(0,0))> the ICE is gone (the 
specialisations are adapted as well).
The patch is attached.

Best regards,
Thomas
Index: boost/proto/detail/poly_function.hpp
===================================================================
--- boost/proto/detail/poly_function.hpp	(revision 67425)
+++ boost/proto/detail/poly_function.hpp	(working copy)
@@ -15,6 +15,7 @@
 #include <boost/ref.hpp>
 #include <boost/mpl/bool.hpp>
 #include <boost/mpl/void.hpp>
+#include <boost/mpl/size_t.hpp>
 #include <boost/mpl/eval_if.hpp>
 #include <boost/preprocessor/cat.hpp>
 #include <boost/preprocessor/facilities/intercept.hpp>
@@ -185,7 +186,7 @@
 template<typename T> unknown_function_t test_poly_function(T *, ...);
 
 
-template<typename Fun, typename Sig, std::size_t Switch = sizeof(test_poly_function<Fun>(0,0))>
+template<typename Fun, typename Sig, typename Switch = mpl::size_t<sizeof(test_poly_function<Fun>(0,0))> >
 struct poly_function_traits
 {
     typedef typename Fun::template result<Sig>::type result_type;
@@ -194,7 +195,7 @@
 
 
 template<typename Fun, typename Sig>
-struct poly_function_traits<Fun, Sig, sizeof(mono_function_t)>
+struct poly_function_traits<Fun, Sig, mpl::size_t<sizeof(mono_function_t)> >
 {
     typedef typename Fun::result_type result_type;
     typedef Fun function_type;
@@ -265,7 +266,7 @@
 
 
 template<typename PolyFun BOOST_PP_ENUM_TRAILING_PARAMS(N, typename A)>
-struct poly_function_traits<PolyFun, PolyFun(BOOST_PP_ENUM_PARAMS(N, A)), sizeof(poly_function_t)>
+struct poly_function_traits<PolyFun, PolyFun(BOOST_PP_ENUM_PARAMS(N, A)), mpl::size_t<sizeof(poly_function_t)> >
 {
     typedef typename PolyFun::template impl<BOOST_PP_ENUM_PARAMS(N, const A)> function_type;
     typedef typename function_type::result_type result_type;


[proto] phoenix 3 refactoring complete.

2010-12-23 Thread Thomas Heller
Hi,

I just wanted you to know that phoenix3 is in a working state once again.
I refactored everything with the changes we discussed ...
All tests from boost.bind are passing!
Placeholder unification is in place!
Now ... up to documentation writing and checking for BLL compatibility ...

Regards,
Thomas


Re: [proto] looking for an advise

2010-12-27 Thread Thomas Heller
Eric Niebler wrote:

 On 12/27/2010 5:26 AM, Joel Falcou wrote:
 On 27/12/10 11:02, Maxim Yanchenko wrote:
 Hi Eric and other gurus,

 Sorry in advance for a long post.

 I'm making a mini-language for message processing in our system.
 It's currently implemented in terms of overloaded functions with
 enable_if<matches<Grammar> > dispatching, but now I see that

 Don't. This increases compile time and provides unclear errors. Accept
 any kind of expression and use matches in a
 static_assert with a clear error ID.
 
 (a) I'm reimplementing Phoenix which is not on Proto yet in Boost
 1.45.0 (that's
 how I found this mailing list). It would be great to reuse what
 Phoenix has;

 Isn't it in trunk already, Thomas?
 
 No, it's still in the sandbox. Maxim, you can find Phoenix3 in svn at:
 
 https://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3
 
 According to Thomas, the docs are still a work in progress, but the code
 is stable.
 
snip

 Better see what Thomas has up his sleeves in Phoenix.
 
 Right. Maxim, you are totally correct that you are reimplementing much
 of Phoenix3, and that the extensibility of Phoenix3 is designed with
 precisely these use cases in mind. The only thing missing are docs on
 the customization points you'll need to hook.
 
 Thomas, this is a GREAT opportunity to put the extensibility of Phoenix3
 to test. Can you jump in here and comment?

You are right! We designed phoenix3 for exactly this use case!
Unfortunately, all that is undocumented at the moment, but what you 
describe should be doable quite easily.
Right now, the only advice I can give is to have a look at all the 
different modules that are implemented.
I will reply the minute the docs are finished ... in the meantime I would 
be happy to help finding a solution to your problem.


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Thomas Heller
Eric Niebler wrote:

 On 12/28/2010 5:39 AM, Thomas Heller wrote:
 I just saw that you added functional::at.
 I was wondering about the rationale of your decision to make it a
 non-template.
 My gut feeling would have been to have proto::functional::at<N>(seq)
 and not proto::functional::at(seq, N).
 
 Think of the case of Phoenix placeholders, where in the index is a
 parameter:
 
   when< terminal<placeholder_>, _at(_state, _value) >

vs:

when<terminal<placeholder_>, _at_value(_state)>

 For the times when the index is not a parameter, you can easily do:
 
   _at(_state, mpl::int_<N>())

vs:

_at<mpl::int_<N> >(_state)

just wondering ... the second version looks more natural and consistent


Re: [proto] expanding Proto's library of callables

2010-12-28 Thread Thomas Heller
Eric Niebler wrote:

 On 12/28/2010 11:43 AM, Thomas Heller wrote:
 Eric Niebler wrote:
 
 On 12/28/2010 5:39 AM, Thomas Heller wrote:
 I just saw that you added functional::at.
 I was wondering about the rationale of your decision to make it a
 non-template.
 My gut feeling would have been to have proto::functional::at<N>(seq)
 and not proto::functional::at(seq, N).

 Think of the case of Phoenix placeholders, where in the index is a
 parameter:

   when< terminal<placeholder_>, _at(_state, _value) >
 
 vs:
 
 when<terminal<placeholder_>, _at_value(_state)>
 
 Have you tried that? Callable transforms don't work that way. It would
 have to be:
 
  lazy<at_value(_state)>
 
 Blech.

Right ... I keep forgetting about that ...

 For the times when the index is not a parameter, you can easily do:

   _at(_state, mpl::int_<N>())
 
 vs:
 
 _at<mpl::int_<N> >(_state)
 
 just wondering ... the second version looks more natural and consistent
 
 Still think so?


Nope. Let's have it your way :)

One other thing though ...

struct at
{
    BOOST_PROTO_CALLABLE()

    template<typename Sig>
    struct result;

    template<typename This, typename Seq, typename N>
    struct result<This(Seq, N)>
      : fusion::result_of::at<
            typename boost::remove_reference<Seq>::type
          , typename boost::remove_const<
                typename boost::remove_reference<N>::type>::type
        >
    {};

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq, N>::type
    operator ()(Seq &seq, N const & 
        BOOST_PROTO_DISABLE_IF_IS_CONST(Seq)) const
    {
        return fusion::at<N>(seq);
    }

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq const, N>::type
    operator ()(Seq const &seq, N const &) const
    {
        return fusion::at<N>(seq);
    }
};

VS:

struct at
{
    BOOST_PROTO_CALLABLE()

    template<typename Sig>
    struct result;

    template<typename This, typename Seq, typename N>
    struct result<This(Seq, N)>
      : result<This(Seq const &, N)>
    {};

    template<typename This, typename Seq, typename N>
    struct result<This(Seq &, N)>
      : fusion::result_of::at<
            Seq
          , typename proto::detail::uncvref<N>::type
        >
    {};

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq, N>::type
    operator ()(Seq &seq, N const & 
        BOOST_PROTO_DISABLE_IF_IS_CONST(Seq)) const
    {
        return fusion::at<N>(seq);
    }

    template<typename Seq, typename N>
    typename fusion::result_of::at<Seq const, N>::type
    operator ()(Seq const &seq, N const &) const
    {
        return fusion::at<N>(seq);
    }
};

I think the second version instantiates fewer templates than the first one.


Re: [proto] My own lambda for MSM / wish list

2011-03-14 Thread Thomas Heller
On Monday, March 14, 2011 01:39:41 PM Christophe Henry wrote:
 Hi Thomas,
 
   Row< source_state, event, target_state, action, guard >
 
  I suggest you look into how Spirit deals with semantic actions; it 
reminds me of exactly this:
 Ok I will, thanks.

Keep in mind, spirit uses Phoenix V2, but the idea is the same ...

  Phoenix actors are POD, they are default constructible. Do you have a
 testcase where this is not the case?
 
 Unfortunately no more, it was quite some time ago, maybe I did it with
 V2. I got something which was not usable by BOOST_TYPEOF.
 I'll try it again.
 
   std::set<int, BOOST_MSM_LPP_LAMBDA_EXPR(_1 < _2)> s2;
 
 As Eric noted, decltype is the answer. I can't see why this should not
 be possible within phoenix.
 
 Wait, I'm confused, it IS possible, or it SHOULD be possible? Can I
 pass any valid phoenix expression? Not just a simple operator
 expression, but an if/else, while, or an expression made of several
 statements?
 I want an arbitrary phoenix expression.
 That I need a decltype is clear and what I'm already doing. But an
 expression containing references, pointers, or even an int would be of
 no use for a decltype.

Yes, this is a problem. I don't know how this can be solved without actually 
passing the expression.

  std::transform(vec.begin(),vec.end(),vec.begin(), ++_1 + foo_(_1) );
 
 To answer your question, phoenix actors return their last statement. In
 your case the result of ++_1 + foo_(_1) is returned.
 
 Hmmm now I'm lost. Just after you write that if_/else_ returns nothing.

See below.

  std::transform(vec.begin(),vec.end(),vec.begin(),
  if_(_1)[ ++_1,return_(++_1)].else_ [return_(++_1)] );
 
 Ok, if_(...)[...] and if(...)[...].else_[...] are statements in phoenix,
 meaning they return nothing (aka void). The answer to your question is
 to simply use phoenix::if_else which is in the operator module because
 it emulates the ?: operator.
 
 Ok but this is just a partial answer, right? What about while / do or
 whatever I might need?
 I would like a clear statement: If I have an expression composed of
 any number of statements of any type and any order inside the full
 extent of phoenix (say if_().else_[], while_(), ++_1,fun_()), what is
 the return value?

 From what you write I understand the last statement except in some
 cases. Which ones?

Calling phoenix expressions from the statement module returns void.
Calling phoenix expressions from any other module returns whatever ... 
depending on the C++ sense.
You might ask for example:
What does _1 + _2 return?

The answer is: It returns a callable expression. When the expression gets 
called with two arguments, that call will return the result of the addition, 
or however operator+ is overloaded for the types passed ...

In Phoenix you also have the overloaded operator, (comma) which emulates 
sequences.
Now there is no return statement in phoenix. A call to a phoenix expression 
will return whatever the last expression (now talking in the C++ sense) 
was, where a phoenix expression can be composed and is maybe 
arbitrarily complex.
This can be void or something totally different.

There are only some cases where the return value is fixed to a certain type.
In particular everything from the Statement module was defined to return 
void to match the C++ semantics.

 As Eric already noted, Phoenix captures everything by value. There is no
 special syntax needed to capture other variables in scope (if you want
 to capture them by value, meaning that they will get copied into the
 phoenix expression). Your solution is similar to what is there in
 phoenix already: Local variables and the let/lambda expressions. Your
 examples will have your exact better syntax:
 
 lambda[ _a = q ] [ _1 + _a /* firstcaptured */]
 
 lambda[ _a = ref(q) ] [ _a++ /* first captured */ ]
 
 lambda[ _a = cref(q) ] [ _1 + _a /* first captured */ ]
 
 Ah perfect! This is what I'm looking for.
 
  Ok, these ones I didn't implement them, but I can dream.
 
 I am dreaming with you! :)
 
 Let's not dream much longer then ;-)
 We have here people who can do it.
 
 By looking at these examples i can not see what's there that is not
 provided by Phoenix already.
 
 Then it's perfect and what I want, but I'll need to give it a closer
 look. Mostly I want:
 - return values
 - the ability to decltype anything. For MSM it's about the same as the
 std::set case.
 
 If you allow a short criticism, I'd say that phoenix is great and
 maybe even offers all I want, but the doc is doing it a great
 disservice. I got so frustrated I started my own implementation and
 seeing my lack of time, it means something ;-)

Maybe you should have asked before you started re-implementing everything ;)

 What I think we need is:
 - more different examples showing what it can do, not the trivial
 examples it currently took over from V2

Phoenix comes with a whole bunch of testcases, which can be seen as examples 
as well.

 - much much more about internals so that 

Re: [proto] My own lambda for MSM / wish list

2011-03-16 Thread Thomas Heller
On Monday, March 14, 2011 10:12:09 PM Christophe Henry wrote:
 Calling phoenix expressions from the statement module return void.
 Calling phoenix expressions from any other modules return whatever ...
 depending on the C++-Sense.
 
 It's ok, I can live with it, though I'll need to find a way around
 because I do need this return stuff.
 
  If you allow a short criticism, I'd say that phoenix is great and
  maybe even offers all I want, but the doc is doing it a great
  disservice. I got so frustrated I started my own implementation and
  seeing my lack of time, it means something ;-)
 
 Maybe you should have asked before you started re-implementing
 everything ;)
 
 And maybe I did ;-)
 As a matter of fact, I did, twice. First at the BoostCon09 I talked
 about it with JdG, but my question was not well formulated, so I'm not
 sure he got what I wanted to do.
 Second time at BoostCon10, I mentioned it during my talk and Eric's
 view was that what I wanted to do was not possible with the state of
 Phoenix at that time (though maybe I again failed to explain my
 point).

Well, because it wasn't possible at the time, I assume ;)
Times have changed since then ... 

 Then I had a look at the doc and still didn't find my answers.
 
 So, I decided to invest some time (not too long) to present using a
 basic implementation what I was looking for.
 At least with this example, it's easier to discuss, I can ask much
 more targeted questions and I had a lot of fun in the process :)
 
 Phoenix comes with a whole bunch of testcases, which can be seen as
 examples as well.
 
 I had a look at that too, but I looked at the statement testcases,
 which still didn't answer my question about return types. True, I
 didn't think of looking into operator.

Sure ... the things about return types can't be found in the examples ... To 
quote the docs:

Unlike lazy functions and lazy operators, lazy statements always return 
void.
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/modules/statement.html



 I tried to document the internals. What do you think is missing?
 The questions you had could have all been solved by looking at the
 documentation, there wasn't any need to know the internals.
 
 What I wanted to know:
 - can I pass a phoenix expression to a decltype / BOOST_TYPEOF and
 default-construct it?

In general, no, because of various terminals that need to be initialized.
You can however limit the phoenix grammar to only those expressions that can 
be default constructed.
Why not copy construct them?

 - what is the return value of a phoenix expression?

boost::result_of<Expr(Arg0, Arg1, ..., ArgN)>::type

This is a little hidden in the docs ... but its all here:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actor.html

Follow the link to Polymorphic Function Object.

 - how do I add stuff I want (return for ex.)

https://svn.boost.org/svn/boost/trunk/doc/html/phoenix/examples/adding_an_expression.html

 Do I really find this in the doc?
 With the ref/cref inside the lambda, it's also not shown, but I admit
 I could have thought about it by myself.
 
 Ok, I admit, the whole capture by value semantics probably isn't
 discussed at full length. Of course there is always room for
 improvement! Do you have anything specific?
 
 Ah, value semantics isn't my main interest anyway. Where I really
 started to doubt was reading the internals section, then I came to
 this:
 
 // Transform plus to minus
 template <>
 struct invert_actions::when<phoenix::rule::plus>
   : proto::call<
         proto::functional::make_expr<proto::tag::minus>(
             phoenix::evaluator(proto::_left, phoenix::_context)
           , phoenix::evaluator(proto::_right, phoenix::_context)
         )
     >
 {};
 
 I understand a bit of proto but this mix of proto:: and phoenix:: is
 just killing me. It just doesn't fit into my standard design layered
 model. Either I work at the proto layer, or at the phoenix layer, not
 at both. Having to understand both is simply increasing the cost of
 playing with phoenix.

Well, the phoenix internals are all implemented using proto ... if you dive 
deep into the actions you need both, stuff from proto and stuff from 
phoenix. This part of the library is just proto using stuff that was defined 
within phoenix ... 
You need to know what the evaluator and what actions are:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actor.html#phoenix.inside.actor.evaluation
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/inside/actions.html
 
 Now, I have the feeling you think I'm here for a round of phoenix
 bashing. It can't be less true. I'm a big admirer of Joel's work and I
 think phoenix is a really cool library. My problem is simple, I want
 to combine ET with decltype and bring it into new places (MSM, MPL,
 proto for example), and I need to answer 2 questions:
 - can phoenix fulfill my needs? (it seems it does, great!)
 - do I arrive 

Re: [proto] Using Phoenix inside eUML: mixing grammars

2011-03-22 Thread Thomas Heller
On Wed, Mar 16, 2011 at 9:56 PM, Christophe Henry
christophe.j.he...@googlemail.com wrote:
 Hi,

Sorry for the late reply ...

 I have my eUML grammar, defing, for example a transition as:

 SourceState+ Event [some_guard] == TargetState

 I want to write for some_guard an expression of a phoenix grammar. The
 relevant part of the expression is:
 Event [some_guard]

 Where the corresponding part of my grammar is:

 struct BuildEventPlusGuard
     : proto::when<
             proto::subscript<proto::terminal<event_tag>, BuildGuards>,
             TempRow<none,proto::_left,none,none,BuildGuards(proto::_right)>()
         >
 {};

 BuildGuards is, at the moment, a proto::switch_ grammar, which I want
 to replace with something matching a phoenix grammar and returning me
 a phoenix::actorExpr, which I will then save into TempRow.
 I suppose I could typeof/decltype the phoenix expression but it
 doesn't seem like the best solution, I'd prefer to delay this.
 Is there something like a phoenix grammar which I can call to check if
 the expression is valid, and if yes, which will return me the correct
 phoenix::actor?

There is boost::phoenix::meta_grammar which can be used to check for
valid phoenix expressions. It is really just the grammar; you can't
use it as a transform.
In your case you can reuse the meta grammar in a great way to restrict
certain constructs:

struct my_custom_phoenix_grammar
  : proto::switch_<my_custom_phoenix_grammar>
{
    template <typename Tag>
    struct case_ : meta_grammar::case_<Tag> {};
};

The above, by default allows everything that's in meta_grammar with
the ability to override some of meta_grammar's rules.

If you want to use it as a transform you need the evaluator with an
appropriate action that does the desired transform... here is an
example:

 struct BuildEventPlusGuard
     : proto::when<
             proto::subscript<proto::terminal<event_tag>,
                 phoenix::meta_grammar>,
             TempRow<none,proto::_left,none,none,
                 phoenix::evaluator(proto::_right, some_cool_action())>()
         >
 {};

Now, some_cool_action can do the transform that BuildGuards was doing.

 Second question. Now it's becoming more interesting. And not easy to
 explain :(

 For eUML, a guard can be defined as g1  g2, where g1 and g2 are
 functors, taking 4 arguments. For example (to make short):
 struct g1_
 {
     template <class FSM, class EVT, class SourceState, class TargetState>
     void operator()(EVT const &, FSM &, SourceState &, TargetState &)
     {
         ...
     }
 };
 g1_ g1;

 The fact that there are 4 arguments is the condition to make this work
 without placeholders.

 I 'm pretty sure that, while this clearly should be a function for
 phoenix, I would not like the syntax:
 g1(arg1,arg2,arg3,arg4) && g2(arg1,arg2,arg3,arg4).

 Is it possible to define g1 and g2 as custom terminals, but still get
 them treated like functions?

Yes!

Here is an example:

template <typename G>
struct msm_guard
{};

template <typename Expr>
struct msm_guard_actor;

template <typename G>
struct msm_guard_expression
  : phoenix::terminal<msm_guard<G>, msm_guard_actor>
{};

template <typename Expr>
struct msm_guard_actor
{
    typedef actor<Expr> base_type;
    base_type expr;
    msm_guard_actor(base_type const & base) : expr(base) {}

    // define the operator() overloads here to allow something
    // that michael suggested. The result should be full blown phoenix
    // expressions (BOOST_PHOENIX_DEFINE_EXPRESSION)
};

namespace boost { namespace phoenix {
    namespace result_of
    {
        template <typename G>
        struct is_nullary<custom_terminal<msm_guard<G> > > : mpl::false_ {};
    }

    template <typename G>
    struct is_custom_terminal<msm_guard<G> > : mpl::true_ {};

    template <typename G>
    struct custom_terminal<msm_guard<G> >
      : proto::call<
            G(
                proto::functional::at(_env, mpl::int_<1>())
              , proto::functional::at(_env, mpl::int_<2>())
              , proto::functional::at(_env, mpl::int_<3>())
              , proto::functional::at(_env, mpl::int_<4>())
            )
        >
    {};
}}

I hope the above example helps you and clarifies how to customize
phoenix V3 even further.
Note on that terminal thing: It's not in trunk yet ... I did it on
another computer, because the spirit port needed it ... I don't have
access to that computer right now ... I will commit it later today.

 (To solve the problem, my current implementation generates me a
 functor of type And_<g1,g2> which has an operator() with 4
 arguments).

 Thanks,
 Christophe


Re: [proto] Manipulating an expression tree

2011-04-08 Thread Thomas Heller
On Fri, Apr 8, 2011 at 2:03 PM, Karsten Ahnert
karsten.ahn...@ambrosys.de wrote:

 Why not just write a transform that calculates one derivative and call
 it N times to get the Nth derivative?

 Yes, that may be easy if you need two or three higher derivatives. For my
 application I need 10 to 20 or even more. I guess that currently no
 compiler can handle such large trees. For example, the simple product rule
 will result in 2^N terms.

Point taken. The expressions might get very long.
However, you could do an algebraic simplification transform (for
example constant propagation)
after every differentiation step, thus reducing the number of terms.

 But in the case of the product rule, one can use Leibnitz rule: If
 f(x)=g1(x) g2(x), then
 the N-th derivative of f(x) is sum_{k=0}^N binomial(N , k ) g1^k(x)
 g2^(N-k)(x). (g1^k is the k-th derivative of g1). This is exactly the
 point where I need intermediate values, to store previously calculated
 values of the derivatives of g1 and g2.

 Nevertheless, thank you for your example. I am a beginner with proto such
 that every example is highly illuminating.


Re: [proto] Using Phoenix inside eUML: mixing grammars

2011-05-01 Thread Thomas Heller
On Monday, April 25, 2011 06:39:14 PM Christophe Henry wrote:
 Hi Thomas,
 
 Sorry to come back to the subject so late, I didn't manage before :(
 
  If you want to use it as a transform you need the evaluator with an
  appropriate action that does the desired transform... here is an
  example:
 
  struct BuildEventPlusGuard
   : proto::when<
       proto::subscript<proto::terminal<event_tag>,
           phoenix::meta_grammar>,
       TempRow<none,proto::_left,none,none,
           phoenix::evaluator(proto::_right, some_cool_action())>()
     >
  {};
 
  Now, some_cool_action can do the transform that BuildGuards was doing.
 
 Hmmm, I get a compiler error, which was expected (would be too easy
 otherwise ;- ) ), but the error is surprising. The error is that
 phoenix::evaluator seems to believe some_cool_action should be a
 random access fusion sequence (expects an environment?).

You are right ... sloppy on my side ... evaluator expects a context, which is 
a tuple containing the environment and the actions: http://goo.gl/24fU9

 Anyway, I am hoping not to write any cool transform but simply save
 the type of the phoenix expression so that I can re-create an actor
 later. If I need to rewrite differently what BuildGuards was doing, I
 gain little. I would like phoenix to do the grammar parsing and
 building of actor.

It does ... just pass on proto::_right and it should be good:

struct BuildEventPlusGuard
  : proto::when<
        proto::subscript<
            proto::terminal<event_tag>
          , phoenix::meta_grammar  // match the phoenix actor
        >
      , TempRow<
            none
          , proto::_left
          , none
          , none
          , proto::_right // Pass the type along, which is a phoenix actor.
        >(proto::_right)  // Pass the object along, which is the actor (1)
    >
{};

(1): Here you can work around the thing with the possibly uninitialized stuff. 
Just copy the phoenix actor (should be cheap, if not optimized away completely).

 Thanks,
 Christophe


Re: [proto] [phoenix] not playing nice with other libs

2011-05-04 Thread Thomas Heller
On Thu, May 5, 2011 at 5:47 AM, Eric Niebler e...@boostpro.com wrote:
 On 5/5/2011 2:27 AM, Bart Janssens wrote:
 On Wed, May 4, 2011 at 7:55 PM, Eric Niebler 
 eric-xT6NqnoQrPdWk0Htik3J/w...@public.gmane.org wrote:
 Bart, how high can N go in your EDSL? Is it really arbitrarily large?

 I didn't hit any limit in the real application (most complicated case
 is at 9) and just did a test that worked up to 30. Compilation (debug
 mode) took about 2-3 minutes at that point, with some swapping, so I
 didn't push it any further.

 I've attached the header defining the grouping grammar, there should
 be no dependencies on the rest of our code.

 We're talking about picking a sane and useful default for
 BOOST_PROTO_MAX_ARITY. You seem to be saying that 10 would cover most
 practical uses of your EDSL. Is that right?

I think having BOOST_PROTO_MAX_ARITY set to 10 is a good default!


Re: [proto] using phoenix in MSM

2011-05-09 Thread Thomas Heller
On Monday, May 09, 2011 11:18:50 PM Christophe Henry wrote:
 Hi,
 
 Thanks to Eric's latest changes (which removed conflicts between MSM
 and phoenix), and Thomas' help, I added the possibility to use phoenix
 as an action inside eUML.
 There is still quite a bit to be done to be able to do with phoenix
 the same stuff as with standard eUML, but for the moment I can:
 - define a guard or action with a phoenix actor
 - provide a guard with eUML and an action with phoenix and vice-versa
 - use my own placeholders instead of arg1.. (_event, _fsm, ...)
 
 I have all in a branch
 (https://svn.boost.org/svn/boost/branches/msm/msm_phoenix) with a
 simple example.

Small remark:
Global objects seem to be going out of fashion: they have a negative impact on 
compile times and binary size.
Instead of using phoenix::function at global (namespace) scope, you could use
the newly introduced macros to define equivalent free functions.
For example, instead of this:

boost::phoenix::function<process_play_impl> process_play;

you could just have:
   
BOOST_PHOENIX_ADAPT_CALLABLE(process_play, process_play_impl, 1)

For documentation see:
https://svn.boost.org/svn/boost/trunk/libs/phoenix/doc/html/phoenix/modules/function/adapting_functions.html

 It's not a huge amount but usable. As I only allow phoenix with a
 define (for security), I now have 2 possibilities:
 - keep working in the branch and release with 1.48
 - release with 1.47. There is no risk of breaking anything as we have
 the #define so it's safe.
 
 I'm very tempted to release with 1.47 so that Michael or whoever is
 interested would have a chance to try it out and tell me what he'd
 like to see there first, without having to get a branch. Plus it would
 be one more use case of phoenix.
 Preconditions:
 - phoenix is really out at 1.47

It is merged to release. So there is nothing that holds it back now ;)

 - I have a small worry with expression::argument?. The doc indicates
 it starts at index 0, but when I try, it starts at 1. Is the doc not
 up-to-date, the code, or I'm plain wrong?

The docs are wrong, thanks for pointing it out. Consider it fixed.

 Is there some interest to see MSM support phoenix, even only partly, in 1.47?


[proto] proto::expr vs. proto::basic_expr

2011-05-15 Thread Thomas Heller
Hi,

Today I experimented a little bit with phoenix and proto.
My goal was to decrease the compile time of phoenix. When I started the 
development of phoenix, Eric advised me to use proto::basic_expr to reduce 
compile times.
Which makes sense given the argument that on instantiating the expression 
node, basic_expr has a lot fewer functions etc., thus the compiler needs to 
instantiate less. So much for the theory.
In practice, sadly, this is not the case. Today I made sure that phoenix uses 
basic_expr exclusively (did not commit the changes).

The result of this adventure was that compile times stayed the same. I was a 
little bit disappointed by this result.

Does anybody have an explanation for this?

Regards,
Thomas


Re: [proto] _unpack transform

2012-07-11 Thread Thomas Heller

On 07/10/2012 11:18 PM, Eric Niebler wrote:

I just committed to the proto-11 codebase a new transform called
_unpack. You use it like this:

   _unpack<f0(Tfx, f1(_)...)>

Where Tfx represents any transform (primitive or otherwise), f0 is any
callable or object type, and f1(_) is an object or callable transform.
The ... denotes pseudo-pack expansion (although it's really a C-style
vararg ellipsis). The semantics are to replace f1(_)... with
f1(_child0), f1(_child1), etc.

With this, the _default transform is trivially implemented like this:

struct _default
  : proto::or_<
        proto::when<proto::terminal<_>, proto::_value>
      , proto::otherwise<
            proto::_unpack<eval(proto::tag_of_(), _default(_)...)>
        >
    >
{};

...where eval is:

struct eval
{
    template<typename E0, typename E1>
    auto operator()(proto::tag::plus, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) + static_cast<E1 &&>(e1)
    )

    template<typename E0, typename E1>
    auto operator()(proto::tag::multiplies, E0 && e0, E1 && e1) const
    BOOST_PROTO_AUTO_RETURN(
        static_cast<E0 &&>(e0) * static_cast<E1 &&>(e1)
    )

    // Other overloads...
};

The _unpack transform is pretty general, allowing a lot of variation
within the pack expansion pattern. There can be any number of Tfx
transforms, and the wildcard can be arbitrarily nested. So these are all ok:

   // just call f0 with all the children
   _unpack<f0(_...)>

   // some more transforms first
   _unpack<f0(Tfx0, Tfx1, Tfx2, f1(_)...)>

   // and nest the wildcard deeply, too
   _unpack<f0(Tfx0, Tfx1, Tfx2, f1(f2(f3(_)))...)>

I'm still playing around with it, but it seems quite powerful. Thoughts?
Would there be interest in having this for Proto-current? Should I
rename it to _expand, since I'm modelling C++11 pack expansion?

I think _expand would be the proper name. Funny enough, I proposed it 
some time ago for proto-current, even had an implementation for it, and 
the NT2 guys are using that exact implementation ;)

Maybe with some extensions.
So yes, Proto-current would benefit from such a transform.