[
https://issues.apache.org/jira/browse/VELOCITY-776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13103095#comment-13103095
]
Alex edited comment on VELOCITY-776 at 9/13/11 3:14 PM:
--------------------------------------------------------
My bug 811 was marked as a duplicate of this one. For me this happens in the
context of including macro modules to override some default macros; see the
example below. When the top-level template.vtl is merged concurrently through
the same engine instance, the effects described above do occur.
----- template.vtl ----
#parse("macros.vtl")
#myMacro("param")
----------------------------
------ macros.vtl ------
#macro(myMacro $param)
#end
-----------------------------
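To make the scenario concrete, a harness along these lines exercises the race (a sketch only: it assumes Velocity 1.6.x on the classpath and the two files above in the working directory; the configuration property is the one named in this issue, the rest of the harness is illustrative):

```java
import java.io.StringWriter;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class ConcurrentMergeRepro {
    public static void main(String[] args) throws Exception {
        VelocityEngine engine = new VelocityEngine();
        // The setting this issue identifies as breaking thread safety:
        engine.setProperty("velocimacro.permissions.allow.inline.local.scope", "true");
        engine.setProperty("file.resource.loader.path", ".");
        // Disable template caching so each merge re-parses, opening the
        // dump/parse window discussed above.
        engine.setProperty("file.resource.loader.cache", "false");
        engine.init();

        Runnable merge = () -> {
            for (int i = 0; i < 1000; i++) {
                try {
                    Template t = engine.getTemplate("template.vtl");
                    StringWriter out = new StringWriter();
                    t.merge(new VelocityContext(), out);
                    // A correct merge expands #myMacro; under the race the
                    // call is sometimes left verbatim in the output.
                    if (out.toString().contains("#myMacro")) {
                        System.out.println("unexpanded macro in output");
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        };

        Thread a = new Thread(merge);
        Thread b = new Thread(merge);
        a.start(); b.start();
        a.join(); b.join();
    }
}
```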
The system I am working on uses macro overloading a lot (as a poor man's
subclassing, to keep things sane), so it would really be nice to have the
capability to include and exclude VM libraries dynamically for overloading
purposes.
I have modified the code in RuntimeInstance.parse(reader, templateName) to
pass false as the "dump" parameter, and that seems to fix the issue for now. In
my system the VTL files are not changed all that often, and when they do change,
macros are not typically removed, so if the parse retains some unused macros in
the cache until the next JVM restart it's not a big deal. It would be a big deal
if the parse did not refresh the existing macros on re-parse, though.
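For reference, the change amounts to a one-line edit in RuntimeInstance (a sketch paraphrasing the 1.6.x source from memory; the two-argument parse() delegates to a three-argument overload whose last parameter is the "dump" flag):

```java
// RuntimeInstance.java (Velocity 1.6.x) -- sketch of the workaround above.
// Passing false stops the parser from dumping (clearing) the template's
// macro namespace before re-parsing, at the cost of never discarding
// stale macros until the next JVM restart.
public SimpleNode parse(Reader reader, String templateName)
    throws ParseException
{
    // was: return parse(reader, templateName, true);
    return parse(reader, templateName, false);
}
```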
I still think that the synchronization in the VelocimacroManager between
getNamespace, addNamespace and dumpNamespace is broken; it merely presents a
smaller window for the race condition than the one between the dump and the
parse.
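The broken pattern can be modeled with plain JDK code (the class and method names below are hypothetical stand-ins, not Velocity's actual implementation): even when every namespace operation is individually synchronized, a dump interleaved between another thread's render steps leaves a window in which the macro lookup misses.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the VelocimacroManager namespace table.
class MacroTable {
    private final Map<String, Map<String, String>> namespaces = new HashMap<>();

    // Each method is synchronized on its own -- mirroring the point that
    // per-method locking does not make a dump-then-parse sequence atomic.
    synchronized void addNamespace(String template) {
        namespaces.put(template, new HashMap<>());
    }

    synchronized void dumpNamespace(String template) {
        namespaces.remove(template);
    }

    synchronized Map<String, String> getNamespace(String template) {
        return namespaces.get(template);
    }
}

public class RaceSketch {
    public static void main(String[] args) {
        MacroTable table = new MacroTable();

        // Initial parse: namespace created and the macro registered.
        table.addNamespace("template.vtl");
        table.getNamespace("template.vtl").put("myMacro", "macro body");

        // A concurrent re-parse interleaves here: dump, then re-add. At this
        // instant the macros have not been re-registered yet.
        table.dumpNamespace("template.vtl");
        table.addNamespace("template.vtl");

        // The rendering thread now resolves the macro: the lookup misses
        // even though every individual call above was synchronized.
        String body = table.getNamespace("template.vtl").get("myMacro");
        System.out.println(body == null
                ? "macro lookup missed -- #myMacro would be left unexpanded"
                : "macro found");
    }
}
```

Run single-threaded, this forces the bad interleaving exactly once; with real threads the same sequence happens intermittently, which matches the "sometimes unexpanded" symptom in the issue.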
> "velocimacro.permissions.allow.inline.local.scope" makes VelocityEngine not
> threadsafe
> --------------------------------------------------------------------------------------
>
> Key: VELOCITY-776
> URL: https://issues.apache.org/jira/browse/VELOCITY-776
> Project: Velocity
> Issue Type: Bug
> Components: Engine
> Affects Versions: 1.6.4
> Environment: Sun java jdk1.6.0_21, Ubuntu 10.04
> Reporter: Simon Kitching
> Attachments: RenderVelocityTemplate.java,
> RenderVelocityTemplateTest.java
>
>
> The attached unit test shows that when
> "velocimacro.permissions.allow.inline.local.scope" is set to true, and
> multiple threads use the same VelocityEngine instance then macros sometimes
> don't get expanded and the #macroname call remains in the output text.
> Notes:
> * running test method "testMultipleEvals" (single threaded case) always
> succeeds
> * running test method "testMultiThreadMultipleEvals" always fails
> * commenting out the allow.inline.local.scope line makes the multithread test
> pass (but of course has other side-effects)
> Interestingly, for the multithread case it seems that 1 thread always
> succeeds and N-1 threads fail.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]