Jose Alberto Fernandez wrote:
> [SNIP]
> > Look in runtime/defaults/velocity.properties for (most of) the
> > defaults. And it didn't make much of a diff anyway, as parsing is so
> > cheap compared to the processing.
> >
>
> First, without knowing what "template.loader.1.cache" ACTUALLY DOES, it
> is kind of difficult to guess things.
I am not sure why we are even discussing this anymore...
Read the code. What can I say, other than what would you GUESS the
'cache' property does for a template.loader?
I don't actually *know for certain* either, as I haven't read the code.
Jason et al did the work on the current loader stuff, and when I saw it,
I just assumed what it did, as he is good and generally linear. And
testing confirmed it. Wasn't really interested in the details.
> First of all, I was creating just one Template
> instance and calling merge() over and over in a cycle. So I do not see how
> this caching will do anything.
The template loader cache won't do anything for you, but you are
implicitly caching the template, as merge() doesn't parse the template.
I altered the test program that I used to produce the numbers to
actually call Runtime.getTemplate() so the cache property was applicable
to the tests. Of course, the work of the merge() was huge compared to
the parse, so it wasn't a major factor, as the numbers show for this
test.
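For reference, the setting under discussion lives in that properties file. A sketch of what I mean (the key is the one named in this thread; treat the exact value syntax as an assumption):

```properties
# hypothetical excerpt from runtime/defaults/velocity.properties --
# enables caching of the parsed template AST for the first loader, so a
# repeated Runtime.getTemplate() skips the re-parse
template.loader.1.cache = true
```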
> Second I was using the same setting while
> using the MacroContext, while using the same global Context and while using
> a clone of the global Context. In all cases I was using the default.
That's fine. It shouldn't make a diff there anyway. The global
introspection cache accumulates info either way, and a context will
accumulate info if it visits the same nodes over and over (which a big
iterative VM-based template might certainly do...). The MacroContext
issue was something else I detailed in my first post.
>[SNIP]
>
> You mean that if a recursive call requires 10 levels of recursion, due to
> the counter value being 10, then we will keep the 10 levels of rewriting
> around? Will this cached AST tree be reused when I call with 9 in the
> counter during a new merge? Will it be extended when I call with 11?
Yes. Bingo. Exactly. Right on!
>
> > > call proc(arr[aVeryComplicatedFunctionReturning2asResult()])
> [SNIP]
> > We aren't trying to write a programming language here. #macros() are
> > not #subroutines(). We are trying to write a template engine, with a
> > simple template language to help non-programmer people create dynamic
> > output. The original intent I think is web pages, but it's good for
> > other things, as you have definitely proven.
> >
>
> True, but my point was, do we really NEED #macros as they are for
> Velocity, either for Web or something else, or are #subroutines just as
> good for the users?
No. They aren't just as good because they remove functionality.
#macro() covers both pass-by-name (explicitly) and pass-by-value
(implicitly), where #subroutines are pass-by-value only.
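A minimal sketch of the difference, with a made-up macro (assuming the current #macro() semantics as described in this thread): the arg is substituted by name, so each use of $p in the body re-evaluates the caller's expression.

```velocity
## hypothetical example: $p is replaced by name with the caller's expression
#macro( twice $p )
$p $p
#end

## each of the two uses of $p re-evaluates $foo.bar() -- pass-by-name.
## a #subroutine() would evaluate $foo.bar() once and pass only the value.
#twice( $foo.bar() )
```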
I believe that we need the pass-by-name #macro() implementation for all
sorts of things, most common will be passing in a 'tool' - something
that is active, changes state, etc. And I also believe, and have proven
so far, that there is no real performance downside to warrant dropping
the functionality.
> As I have repeated before, I am new here so I do not know
> all the months and months of philosophical discussion on what kind
> of facilities are really needed for Velocity. So, I propose and I ask and
> I hope to get some discussion going.
That's cool. I wasn't here for the original discussion either. But I
guess I came into it with the same needs and philosophy as the project
founders, so it has worked out well so far.
> > > [snip]
> > >
> > > > I have had a fix in mind for a while, and will hopefully get it
> > > > done this weekend. If you have a fix, please let me know so I
> > > > don't waste any [more] time. But I will hint that a totally
> > > > different approach must be taken.
> > > >
> > >
> > > The main problem is with the current grammar, which bails out of
> > > the directives if the velocimacro is not found. If this was not the
> > > case, it would be easy to solve. But as with other stuff, I was not
> > > sure why the code is bailing out in the first place, so I decided
> > > not to touch it.
> > > Can you please explain this feature? If it is just for the error
> > > reporting, I think that can be solved by the velocimacro itself.
> >
> > No, that is done to stop the parser from parsing what is really text
> > into a VM or PD.
>
> Well, it starts with a # and it was not escaped, so why should it be
> treated as text, at this point?
Because if it's not a reference-like thing, a VM or a PD, it *is* text
('schmoo', in the vernacular). The parser needs to know in order to
process what follows: building a tree if it's a block directive, parsing
the arg list, whatever. Otherwise, you have to choose at runtime, and
unwind to the literal if it's not a VM. That's a possible solution,
but I hope for something that requires less code, less complexity, keeps
parser things in the parser, etc. If I think hard enough, I can probably
come up with pathological cases that won't be unwindable because the
tree structure is wrong.
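In other words (the directive name here is made up, just to illustrate the choice the parser faces):

```velocity
## '#notadirective' matches no pseudo-directive and no velocimacro, so
## the parser must decide whether this whole line is schmoo (plain text)
## to be emitted as-is, or a directive whose args need a parse tree
#notadirective( $foo ) some trailing text
```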
> > There are two approaches I can think of offhand that
> > one can take, the first is to let the parser go, and fix it all at
> > runtime, deciding then if you are a PD or VM or not, and rendering
> > appropriately.
>
> Isn't this what I was just saying in the paragraph above?
It could be. Who can be sure.
> > The problem then seems to be that escape logic has to propagate out
> > of the parser into the nodes, which I think should be avoided. The
> > other solution is to simply examine the template at parse time, and
> > parse it twice. This is of course intellectually offensive, but gets
> > things done, and if you use that magical caching thing...
> >
> And this was what I did, up to a point. Macros are parsed twice. To redo
> the whole template, you will need to somehow let the grammar know
> whether you are on the first or second pass, so that it does not
> re-register the macros and complain if the options are not right.
Up to a point, indeed. :)
I think it's solvable.
> [SNIP]
> > >
> > > OK, you caught me here, what the heck is "nodal
> > > introspection caching"?
> >
> > The stuff that I worked on to get your performance problem to go away
> > after I introduced it by moving introspection stuff to runtime rather
> > than init() time to make us thread- and context-safe. Not only did
> > the commit messages get posted to the list of course, but I think I
> > mailed you personally asking you to test things out to make sure all
> > was cool. You did that, if I remember, and reported that things were
> > pretty much back to normal.
> >
>
> Well, since both my tests were done over the same code line you
> committed with your introspection caching in there, I do not see why
> this would be the reason for the differences.
Because you didn't notice that Context extends InternalContext, or if
you did, you didn't realize what was happening. Since you wrapped the
Context, all 'learning' about nodes was lost every time because your
MacroContext was thrown away after each use in the VM render(). When I
implemented the icacheGet() and icachePut() in MacroContext to simply
call those functions on the wrapped Context, all was well. The caching
info was kept from call to call.
> I looked at the introspection code, but I did not see any reason why it
> would act differently with MacroContext than with passing the same
> Context. Can you explain or point me to the code?
Above.
> > > This code seems to have all kinds of things buried inside for no
> > > one to see.
> > > Is there a document or something that explains all the caching
> > > techniques and other things to take into consideration when
> > > collaborating on the project? I have not seen anything on the web
> > > pages, but I may have been looking in the wrong place.
> >
> > What? It's for all to see. You get the source with the distro, you
> > can get it with anon CVS, and you can even browse the code via the
> > web. Well, we could make Velocity one big huge method, which would
> > solve that whole burying issue, but would make maintenance,
> > comprehension, and collaboration challenging. I don't want to just
> > say 'read the code', but we don't have those docs yet. It's still a
> > work in progress, as you know, so documenting the internals might not
> > be the best use of scarce resources at this moment.
> >
>
> Well I have been reading the code, and since I have been sending patches
> I guess you can realize I have the CVS tree.
:) I guessed. That, or you are the most brilliant disassembler of
Java bytecode, as you even get the comments right...
> But at least for me it is not clear how some parts of the code are
> supposed to work. And I certainly think that some pointers would be
> helpful.
So just ask. Write the list. Write me. Write others. Call me. Visit
me. (I can't tell you to call or visit others...) I think you have to
admit that I have kept a very open private line with you, working with
you on the performance issues you discovered, as well as the
template-local VM stuff.
> I am not expecting detailed documentation, but if there have been some
> really important decisions on the architecture, it would be nice to
> know.
What we do is each start with 12 beers, preferably a nice hoppy ale.
Then...
> > > >
> > > .....
> > > >
> > > > So you can see, we lose something valuable, with little gain
> > > > performance
> > > > wise.
> > > >
> > >
> > > Well, this was exactly what I was trying to accomplish. One of the
> > > problems with pass-by-name (a.k.a. pass-by-ref in this discussion)
> > > is that it is very expensive globally, for the rare cases in which
> > > it would be useful.
> >
> > But quite frankly, I don't think the expense has been demonstrated.
> > And I think that this isn't some sort of contrived example where it
> > would be useful in 'rare cases'. Suppose you had a little object that
> > generated alternating colors in a table? Wouldn't losing pass-by-name
> > completely remove the usefulness of context tools? (I am pre-coffee
> > again...)
> >
>
> Given any feature, we can always find a use for it :-).
Well, someone found a use, made a tool and contributed it to Velocity.
VelocityFormatter.java, class VelocityAlternator. Great tool. Cool to
pass into a VM.... enough said, eh?
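For instance, a sketch of the kind of use I mean, assuming a VelocityAlternator-style tool placed in the context by the controller (the method name and macro here are illustrative, not the actual tool's API):

```velocity
## $alt is assumed to be a VelocityAlternator-like stateful tool in the
## context; because the arg is passed by name, each invocation of the
## macro re-invokes the tool, alternating the row color
#macro( row $color $text )
<tr bgcolor="$color"><td>$text</td></tr>
#end

#foreach( $item in $items )
#row( $alt.alternate() $item )
#end
```

With pass-by-value, $alt.alternate() would be evaluated once before the loop body ran, and the alternation would be lost.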
> Usually we have to ask, how modular or clear or maintainable is code
> that relies on such a feature.
> Even web pages have bugs, you know.
> I am not sure I have an answer. I could say that a more maintainable
> macro will be one that expresses the fact that it is causing a side
> effect instead of relying on what is passed as parameters, but that is
> a subjective game I do not want to start at this point.
I question if it really is a side effect. Maybe it's because I am used to
it. I think that it isn't a side effect if one is explicit in use.
Yes, there actually is a harmful side effect, something unexpected and
invisible, but I think we can fix it. Shooting the patient isn't a
solution, in my book...
> > > First, I do not think it is too obvious for naive users to
> > > understand that when you have something like:
> > >
> > > #macro(a $p)#b($p.mb) $p#end
> > > #macro(b $p)#c($p.mc) $p#end
> > > #macro(c $p)#d($p.md) $p#end
> > > #macro(d $p)$p #end
> > > #a($var.ma)
> > >
> > > Assume they are ordered the other way around, due to forward
> > > references if you want. I do not think it is clear to users that
> > > this will execute as:
> > >
> > > $var.ma.mb.mc.md $var.ma.mb.mc $var.ma.mb $var.ma
> >
> > Yes, there are clearly issues with VMs. I recognize that. You see
> > the same thing in recursion.
> >
> > > without any caching of the intermediate common expressions. With
> > > pass-by-value, caching is implicit, a reduction from 10 to 4. I
> > > think that we should be able to agree that this re-evaluation of
> > > expressions is a major performance hit.
> >
> > I don't want to 'agree'. I want to be 'shown'. I tried for a long,
> > long time yesterday to show myself that there is a 'major'
> > performance hit, and I couldn't. No one else has either, I think. We
> > all just look at the code, harrumph, and say, "No, that can't be
> > good."
> >
> > Please, prove me wrong. I want to be wrong here.
> >
>
> Geir, for you to be proved wrong, what you need is methods that take
> computation time, that is, they do more than just return a pointer.
What? No. The issue I was referring to is the patch-and-parse versus
stack/MacroContext issue. To prove me wrong, I think it must be shown
that the patch-and-parse method of VM implementation is wasteful and
slow in normal production runtime setups. I am willing to take the
small hit in a 'development' environment where there is no caching.
That's the issue here, not pass-by-name vs pass-by-value. That was
something that came up in analysis of the patch.
The other issue, of avoiding re-evaluation is completely solvable within
the VTL.
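e.g., taking the chained-macro example from earlier in this thread, the template author can hoist the common sub-expression with #set so it is evaluated once (a sketch; the intermediate variable name is mine):

```velocity
## evaluate $p.mb once, cache it in $pb, then pass the cached reference
## down -- avoiding the repeated re-evaluation of the chained expression
## that pass-by-name would otherwise cause
#macro( a $p )
#set( $pb = $p.mb )
#b( $pb ) $p
#end
```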
I think it's fair that if a designer-programmer team (how I think about
Velocity users most of the time....) decide to use more advanced Context
objects, ones that do change state when invoked, they need to know the
implications. There are uses. You may not see them, or agree with
them, but they are there.
> As long as we just do very simple things, we will not see any major
> drawbacks. Now, you may ask, is it really realistic that we will be doing
> computational intensive things in a web macro? My answer is YES!
My answer is YES! YES! YES! You misunderstood the 'prove me wrong' part
above, but I hope I cleared it up.
> The example of Velocity on the Jakarta site talks about finding
> sale offers for a user given their history on the site. That means that
> based on the input, the page needs to call something that goes to some
> DB and retrieves info about the user. So, there may be
> complex operations involved.
Yes. No doubt. I do things like this with clients almost every day. No
mystery there, and not terribly complicated. I have a client building a
web based financial app where we price options, do big calculations,
find a partridge in a pear tree, and do other goodies. DB access is a
walk in the park compared to that... :) But guess what? We understand
the problem domain and behave accordingly.
Look : My view is *if* you are going to hand the designer a loaded gun,
either teach them to handle it safely or duck. I would vote for not
handing them a gun in the first place, though. The MVC model simply
limits the damage a designer can do in View to what the Controller and
Model provide for View via the context.
>
> > > So, for example, in the case of exploring a tree structure, you
> > > will be going from the root on every leaf you want to print, unless
> > > you use #set to keep intermediate values in the context. But the
> > > context is global, so it will be very difficult if not impossible
> > > to use correctly in recursive macros. Which is what one needs for
> > > generating out of things like JDOM.
> >
> > Do you want an example that walks a JDOM tree? I have one. That is
> > what showed me the problems in VMs, that I want to fix, that I have
> > been working on. But I am not going to break features. I can easily
> > be outvoted though, so if we wish to toss this functionality, great.
> > I don't, to be frank.
> >
>
> JDOM is an example, what I was trying to say is that #set is not always
> enough, in reality, to get real pass-by-value. In particular, when recursion
> is involved. We would have to have local variable scopes for that to work.
> Which is feasible, but it is not there yet.
Right. I have been trying to say this for a while. But we don't simply
remove the feature.
> [SNIP]
>
> > Whatever. It's not an argument. It's a demonstration, and an
> > explanation of why the patch didn't pass the testbed. You may or may
> > not have noticed that. If you did, it sure would have been great to
> > have a little note explaining so I didn't have to go chase it down.
> >
>
> Sorry I did not mention it; it was late and I had a headache.
> But I really did not think I was breaking any real feature (i.e., the
> fact that it complained about $i instead of $not-a-reference). I thought
> the test was to verify that it complained, which it did.
No worries. You know what they say : A Saturday with Velocity is
like, well, a Saturday with Velocity.
> [SNIP]
> > Read what I said again. I refer to the invalid reference case. Yes,
> > your solution works correctly with #if, because every arg is
> > guaranteed to have something in the context if #if() would return
> > true. So by construction, it ensures that the #if() works. My
> > observation is that I cannot see how it would not break if you try to
> > keep current functionality, which you declared above to be an error.
> >
>
> Sorry, but here you are the one that should read again. I said #if()
> works; that means it does the body when appropriate and it does not do
> the body when appropriate.
> If you read the code again (look at the usages of NULL_MARK in
> MacroContext) you will see that when a parameter receives NULL as an
> argument it is effectively removed from the Context, which makes #if()
> behave correctly. Similar measures were taken for the alternate
> implementation also.
Yes, they work, but I argue only by construction, because you will put
something into the context iff the result of value() is not null, and
not otherwise. This alters the VM args, because if an arg isn't a valid
reference (not backed by something in the context), then it will not be
put into the context, and the #if() will work. By construction. Your
stack or MacroContext forces this to be true.
What I was saying above is that *if* you would keep current
functionality, in that a reference-like entity will behave properly,
rendering as $<reference_name> rather than '$i', the arg of the VM, when
it is not backed by something in the context, then things will be
broken, as you can't BOTH put something into the context such that it
will render properly on the other side (in the VM) AND have it be null
in the Context, making #if work. See?
> [SNIP]
> > That's true -> if a person does write a macro that accesses the arg
> > twice, you get it accessed twice. The thing I want to emphasize is
> > that it's #macro(), not #subroutine().
> >
>
> I understand that, but as I said before which one would be better. Should
> we have the two?
I personally don't believe so. We should just fix it. #macro covers
#subroutine. Why make a facility that generally mimics another?
> As optional directives for example. The current grammar
> is kind of intrusive, since it knows and treats macros specially. Maybe
> by allowing a few changes here and there we could open the issue and
> allow for alternatives.
Like I said, I personally don't believe so, but it is a democracy. What
do you need out of this? Just say it! I will devote my efforts to
finding the solution!
> > > It should have been:
> > >
> > > #macro(quietnull $a)
> > > #set($quietnullvar = $a)
> > > #if($quietnullvar)$quietnullvar#end
> > > #end
> >
> > Yep.
> >
> > > But, wouldn't the #set operation produce an error if $a
> > > evaluates to null?
> >
> > Is that true? If there is something that returns null, it is returned
> > as "", I think, in ASTMethod.
I just tested. It's not true.
> Well, that would mean that #if($quietnullvar) will always be true.
> So if I do any other action in the #if ...
What? First you were talking about a #set(), and now an #if(). Which is
it? There are different answers.
Why don't we just test it? it's the easiest thing to test....
testing...
It works.
>
> #if($quietnullvar)$quietnullvar is not null#end
>
> will give the wrong answer, won't it?
No, it doesn't. The above macro, when invoked with an invalid reference
(not backed by something in the context) does what was intended.
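i.e., putting the rewritten macro from above together with the behavior reported in this thread ($name / $nothere are my placeholder references, one backed by the context and one not):

```velocity
#macro( quietnull $a )
#set( $quietnullvar = $a )
#if( $quietnullvar )$quietnullvar#end
#end

#quietnull( $name )     ## renders the value of $name
#quietnull( $nothere )  ## per the test above: renders nothing, no error
```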
> > > Humm, how can #quietnull be re-written?
> > >
>
> The question stands, even more since this was the original reason for
> #macros in the first place. :-)
No, the question is now asked to sit. It works just fine.
> Jose Alberto
geir
--
Geir Magnusson Jr. [EMAIL PROTECTED]
Velocity : it's not just a good idea. It should be the law.
http://jakarta.apache.org/velocity