[jira] Closed: (COCOON-2154) Servlet:/ protocol: Support absolute URIs
[ https://issues.apache.org/jira/browse/COCOON-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reinhard Poetz closed COCOON-2154.

Resolution: Fixed
Fix Version (Component): Parent values: Servlet Service Framework (10247)

Servlet:/ protocol: Support absolute URIs
Key: COCOON-2154
URL: https://issues.apache.org/jira/browse/COCOON-2154
Project: Cocoon
Issue Type: New Feature
Components: Servlet service framework
Reporter: Reinhard Poetz
Assignee: Reinhard Poetz

Using the servlet protocol you can only define relative URIs, which means that those URIs are only valid when resolved in the context of a particular servlet service, because they refer to the connections defined there. If you need globally resolvable URIs, there needs to be a way to define globally unique servlet:/ URIs.

-- This message is automatically generated by JIRA. You can reply to this email to add a comment to the issue online.
[jira] Closed: (COCOON-1831) Passing parameters to sub calls
[ https://issues.apache.org/jira/browse/COCOON-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reinhard Poetz closed COCOON-1831.

Resolution: Fixed. Patch applied; request parameters, attributes, and the session are passed/shared.

Passing parameters to sub calls
Key: COCOON-1831
URL: https://issues.apache.org/jira/browse/COCOON-1831
Project: Cocoon
Issue Type: New Feature
Components: Servlet service framework
Reporter: Reinhard Poetz
Assignee: Reinhard Poetz
Attachments: BlockCallHttpServletRequest.patch, cocoon-servlet-service-impl.patch, cocoon-servlet-service-impl.patch

When a servlet service request is created, parameters from the parent request are ignored, which means that the sub request is performed as a fresh, clean call. This avoids any possible side effects, but is very inconvenient in practice because you do not even know the request header parameters from the original (external) request. Additionally, you can only pass information that is part of the returned stream, which is a blocker for using the servlet protocol together with the control flow implementations, for example: those make use of special request parameters to transport the model (bizdata) to the view layer.
[jira] Updated: (COCOON-1964) Redirects inside a block called via the servlet protocol fail
[ https://issues.apache.org/jira/browse/COCOON-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reinhard Poetz updated COCOON-1964.

Summary: Redirects inside a block called via the servlet protocol fail (was: Redirects inside a block called via the blocks protocol fail)

Redirects inside a block called via the servlet protocol fail
Key: COCOON-1964
URL: https://issues.apache.org/jira/browse/COCOON-1964
Project: Cocoon
Issue Type: Bug
Components: Servlet service framework
Affects Versions: 2.2-dev (Current SVN)
Reporter: Alexander Klimetschek
Priority: Critical
Attachments: cocoon-allow-redirect-in-called-block.patch

If you do a redirect (from within a piece of flowscript via cocoon.redirectTo('cocoon:/foobar'), or via redirect-to in the sitemap) inside a block that was called via the block: protocol, it will fail because the re-use of the output stream of the response (which happens to be a BlockCallHttpServletResponse) does not work. This is due to getWriter() being called after getOutputStream() has already been called. The servlet API says that only one of them should be called, so there is a check in the implementation of getWriter() that throws an IllegalStateException. The patch removes that check, which is kind of a hack, but I don't know of any other cases within Cocoon where such a re-use is made. The problem could be avoided if a reset() or resetBuffer() were called on the response during the redirect, but for some reason this does not happen with a redirect from within a flowscript for a form.
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Joerg Heinicke wrote:

On 05.03.2008 23:06, Joerg Heinicke wrote: We could argue about another default value than -1 though. Something like 1024^2. What do others think?

Shall we change the default value from "buffer everything" (which led to the OutOfMemoryError [1]) to something more secure, in the sense of avoiding a potential source of error? Besides mentioning it on the changes page, we have to set it to a value that's unlikely to be hit by a normal web application, so that user applications' behavior changes only in extreme cases. That's why I suggested 1 MB.

Hmm, I'm not sure we should change the default value. The idea of this default was to make sure that error handling works out of the box. If you have special cases like the one mentioned in the bug, it makes more sense IMHO to fine-tune those special cases (for instance by not buffering). The output buffer value is one of the settings that is optimized for development and should be tweaked for production usage. I also think that if you're using a reader in your pipeline, it is more likely that you don't want the pipeline to buffer your output.

Carsten

Joerg

[1] https://issues.apache.org/jira/browse/COCOON-2168

-- Carsten Ziegeler [EMAIL PROTECTED]
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Felix Knecht wrote: Carsten Ziegeler wrote: Joerg Heinicke wrote: On 05.03.2008 23:06, Joerg Heinicke wrote: We could argue about another default value than -1 though. Something like 1024^2. What do others think? Shall we change the default value from "buffer everything" (which led to the OutOfMemoryError [1]) to something more secure, in the sense of avoiding a potential source of error? Besides mentioning it on the changes page, we have to set it to a value that's unlikely to be hit by a normal web application, so that user applications' behavior changes only in extreme cases. That's why I suggested 1 MB. Hmm, I'm not sure we should change the default value. The idea of this default was to make sure that error handling works out of the box. If you have special cases like the one mentioned in the bug, it makes more sense IMHO to fine-tune those special cases (for instance by not buffering). The output buffer value is one of the settings that is optimized for development and should be tweaked for production usage. I also think that if you're using a reader in your pipeline, it is more likely that you don't want the pipeline to buffer your output.

IIUC this should work (noncaching pipeline, buffer-size set to 8192):

<map:pipeline id="test-nocache" type="noncaching">
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

but it doesn't: java.lang.OutOfMemoryError: Java heap space

You need to turn off the buffering of the pipeline as well. I don't have the parameter name at hand; I assume it's buffer-size as well, but it could be different:

<map:pipeline id="test-nocache" type="noncaching">
  <map:parameter name="buffer-size" value="0"/>
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

This is usually the way I would define reader pipelines.
If the above still produces an OOMError, then we have a bug :) Carsten -- Carsten Ziegeler [EMAIL PROTECTED]
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Felix Knecht wrote:

You need to turn off the buffering of the pipeline as well. I don't have the parameter name at hand; I assume it's buffer-size as well, but it could be different:

<map:pipeline id="test-nocache" type="noncaching">
  <map:parameter name="buffer-size" value="0"/>
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

This is usually the way I would define reader pipelines. If the above still produces an OOMError, then we have a bug :)

Thanks Carsten, it works. The parameter is outputBufferSize.

Ah, yes :) Great!

May I ask at this point whether the general configuration of noncaching pipelines is correct (cocoon-core/cocoon-core/src/main/resources/META-INF/cocoon/avalon/cocoon-core-sitemapcomponents.xconf):

<map:pipe name="noncaching" src="org.apache.cocoon.components.pipeline.impl.NonCachingProcessingPipeline">
  <!-- <parameter name="outputBufferSize" value="8192"/> -->
</map:pipe>

==> As no parameter is specified, '-1' is used, which in fact leads to the same configuration as for caching pipelines?!

Yes, this is correct :) The output buffer only specifies the size of the buffer for writing the response; it is not directly related to caching. You might increase performance by buffering. The buffer is in both cases unlimited in order to allow proper error handling. If we don't have it yet, we should add these things to a "tuning Cocoon" page. I would turn off infinite buffering in production in all cases and set a fixed buffer size (like 8192). For reader pipelines I would turn off buffering completely.

Carsten

-- Carsten Ziegeler [EMAIL PROTECTED]
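[Editorial note] Carsten's explanation distinguishes three modes for the output buffer: -1 buffers everything until the pipeline finishes (clean error handling, unbounded memory), 0 streams directly to the client, and a positive value gives a fixed buffer flushed as it fills. The sketch below illustrates these semantics in plain Java; it is an editor's illustration under stated assumptions, not Cocoon's actual implementation, and the class and method names (BufferPolicy, HoldAllStream, wrap) are made up.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BufferPolicy {

    /** Unbounded buffer: holds everything until close(), like the -1 default. */
    static final class HoldAllStream extends OutputStream {
        private final OutputStream sink;
        private final ByteArrayOutputStream held = new ByteArrayOutputStream();
        HoldAllStream(OutputStream sink) { this.sink = sink; }
        @Override public void write(int b) { held.write(b); }
        @Override public void flush() { /* deliberately ignored until close */ }
        @Override public void close() throws IOException {
            held.writeTo(sink);   // only now does anything reach the client
            sink.close();
        }
    }

    /**
     * Wrap a response stream according to a configured outputBufferSize:
     *  -1 : buffer everything (enables out-of-the-box error handling, but
     *       is unbounded -- the OOM case discussed in the thread),
     *   0 : no buffering, bytes go straight to the client,
     *   n : a fixed n-byte buffer, flushed whenever it fills up.
     */
    public static OutputStream wrap(OutputStream sink, int outputBufferSize) {
        if (outputBufferSize == 0) return sink;                          // stream directly
        if (outputBufferSize < 0) return new HoldAllStream(sink);        // unbounded
        return new BufferedOutputStream(sink, outputBufferSize);         // fixed size
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream client = new ByteArrayOutputStream();
        OutputStream out = wrap(client, -1);
        out.write(new byte[10000]);
        out.flush();
        // With -1, nothing has reached the client yet: the whole response
        // sits in memory, which is what blows up for a 700 MB resource.
        System.out.println("at client after flush: " + client.size());
        out.close();
        System.out.println("at client after close: " + client.size());
    }
}
```

The design point the sketch makes concrete is that -1 trades memory for the ability to discard the buffered response and render an error page instead; a fixed buffer gives up that guarantee once the first flush happens.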
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
You need to turn off the buffering of the pipeline as well. I don't have the parameter name at hand; I assume it's buffer-size as well, but it could be different:

<map:pipeline id="test-nocache" type="noncaching">
  <map:parameter name="buffer-size" value="0"/>
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

This is usually the way I would define reader pipelines. If the above still produces an OOMError, then we have a bug :)

Thanks Carsten, it works. The parameter is outputBufferSize:

<map:pipeline id="test-nocache" type="noncaching">
  <map:parameter name="outputBufferSize" value="0"/>
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

May I ask at this point whether the general configuration of noncaching pipelines is correct (cocoon-core/cocoon-core/src/main/resources/META-INF/cocoon/avalon/cocoon-core-sitemapcomponents.xconf):

<map:pipe name="noncaching" src="org.apache.cocoon.components.pipeline.impl.NonCachingProcessingPipeline">
  <!-- <parameter name="outputBufferSize" value="8192"/> -->
</map:pipe>

==> As no parameter is specified, '-1' is used, which in fact leads to the same configuration as for caching pipelines?!

Felix
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Carsten Ziegeler wrote: Joerg Heinicke wrote: On 05.03.2008 23:06, Joerg Heinicke wrote: We could argue about another default value than -1 though. Something like 1024^2. What do others think? Shall we change the default value from "buffer everything" (which led to the OutOfMemoryError [1]) to something more secure, in the sense of avoiding a potential source of error? Besides mentioning it on the changes page, we have to set it to a value that's unlikely to be hit by a normal web application, so that user applications' behavior changes only in extreme cases. That's why I suggested 1 MB. Hmm, I'm not sure we should change the default value. The idea of this default was to make sure that error handling works out of the box. If you have special cases like the one mentioned in the bug, it makes more sense IMHO to fine-tune those special cases (for instance by not buffering). The output buffer value is one of the settings that is optimized for development and should be tweaked for production usage. I also think that if you're using a reader in your pipeline, it is more likely that you don't want the pipeline to buffer your output.
IIUC this should work (noncaching pipeline, buffer-size set to 8192):

<map:pipeline id="test-nocache" type="noncaching">
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

but it doesn't:

java.lang.OutOfMemoryError: Java heap space
    at org.apache.cocoon.util.BufferedOutputStream.incBuffer(BufferedOutputStream.java:148)
    at org.apache.cocoon.util.BufferedOutputStream.write(BufferedOutputStream.java:96)
    at org.apache.cocoon.reading.ResourceReader.processStream(ResourceReader.java:355)
    at org.apache.cocoon.reading.ResourceReader.generate(ResourceReader.java:386)
    at org.apache.cocoon.components.pipeline.AbstractProcessingPipeline.processReader(AbstractProcessingPipeline.java:656)
    at org.apache.cocoon.components.pipeline.AbstractProcessingPipeline.process(AbstractProcessingPipeline.java:431)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.cocoon.core.container.spring.avalon.PoolableProxyHandler.invoke(PoolableProxyHandler.java:72)
    at $Proxy8.process(Unknown Source)
    at org.apache.cocoon.components.treeprocessor.sitemap.ReadNode.invoke(ReadNode.java:94)
    at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:55)
    at org.apache.cocoon.components.treeprocessor.sitemap.MatchNode.invoke(MatchNode.java:87)
    at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:78)
    at org.apache.cocoon.components.treeprocessor.sitemap.PipelineNode.invoke(PipelineNode.java:144)
    at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:78)
    at org.apache.cocoon.components.treeprocessor.sitemap.PipelinesNode.invoke(PipelinesNode.java:81)
    at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:239)
    at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:171)
    at org.apache.cocoon.components.treeprocessor.TreeProcessor.process(TreeProcessor.java:247)
    at org.apache.cocoon.servlet.RequestProcessor.process(RequestProcessor.java:351)
    at org.apache.cocoon.servlet.RequestProcessor.service(RequestProcessor.java:169)
    at org.apache.cocoon.sitemap.SitemapServlet.service(SitemapServlet.java:84)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.apache.cocoon.servletservice.ServletServiceContext$PathDispatcher.forward(ServletServiceContext.java:501)
    at org.apache.cocoon.servletservice.ServletServiceContext$PathDispatcher.forward(ServletServiceContext.java:473)
    at org.apache.cocoon.servletservice.spring.ServletFactoryBean$ServiceInterceptor.invoke(ServletFactoryBean.java:230)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy5.service(Unknown Source)

Felix

[1] https://issues.apache.org/jira/browse/COCOON-2168
[jira] Commented: (COCOON-2168) ResourceReader produces Java Heap Overflow when reading a huge resource
[ https://issues.apache.org/jira/browse/COCOON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12577373#action_12577373 ] Felix Knecht commented on COCOON-2168:

After all it turned out to be a configuration problem. Thanks to Carsten. Last snippet of the mail thread http://marc.info/?t=12047341133r=1w=2 :

<map:pipeline id="test-nocache" type="noncaching">
  <map:parameter name="outputBufferSize" value="0"/>
  <map:match pattern="nocache">
    <map:read src="/home/felix/tmp/livecd-i686-installer-2007.0.iso">
      <map:parameter name="buffer-size" value="8192"/>
    </map:read>
  </map:match>
</map:pipeline>

"The output buffer only specifies the size of the buffer for writing the response; it is not directly related to caching. You might increase performance by buffering. The buffer is in both cases unlimited in order to allow proper error handling. If we don't have it yet, we should add these things to a 'tuning Cocoon' page. I would turn off infinite buffering in production in all cases and set a fixed buffer size (like 8192). For reader pipelines I would turn off buffering completely."

ResourceReader produces Java Heap Overflow when reading a huge resource
Key: COCOON-2168
URL: https://issues.apache.org/jira/browse/COCOON-2168
Project: Cocoon
Issue Type: Bug
Components: Cocoon Core
Affects Versions: 2.2-dev (Current SVN)
Reporter: Felix Knecht
Attachments: ResourceReader.diff, test-case.tar.gz

When reading a huge resource (e.g. a 700 MB file) the ResourceReader produces an overflow due to the BufferedOutputStream that is used (and forced to be used via AbstractReader). The BufferedOutputStream flushes only at the end (or when forced to), but overrides the flush method to do nothing. As I don't know exactly where the BufferedOutputStream is used and what kind of impact changing it there would have, I'm just going to fix the ResourceReader.
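[Editorial note] The mechanism behind the overflow is worth spelling out: Cocoon's buffer flushes only at the end and ignores intermediate flush() calls, so the whole resource accumulates in memory no matter how the reader writes it. The sketch below shows the chunked-copy pattern that keeps a reader's peak memory at one chunk, provided the downstream stream honors flush(). It is an editor's illustration, not Cocoon's ResourceReader code, and ChunkedCopy is a made-up name.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {

    /**
     * Stream a resource to the response in fixed-size chunks, flushing after
     * each one. Peak memory is one chunk (e.g. 8192 bytes) instead of the
     * whole resource -- but only if flush() actually forwards the bytes. A
     * buffer that overrides flush() as a no-op (the behavior described in
     * the issue) silently accumulates everything regardless.
     */
    public static long copy(InputStream in, OutputStream out, int chunkSize)
            throws IOException {
        byte[] chunk = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
            out.flush();          // push each chunk downstream immediately
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] resource = new byte[100000];            // stand-in for a huge file
        ByteArrayOutputStream response = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(resource), response, 8192);
        System.out.println(copied + " bytes copied");  // 100000 bytes copied
    }
}
```

This is why the thread's conclusion is a configuration fix: turning the pipeline's outputBufferSize down (or off) restores the bounded behavior the chunked copy is designed for.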
[jira] Closed: (COCOON-2168) ResourceReader produces Java Heap Overflow when reading a huge resource
[ https://issues.apache.org/jira/browse/COCOON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Felix Knecht closed COCOON-2168.

Resolution: Fixed. It was a configuration problem.

ResourceReader produces Java Heap Overflow when reading a huge resource
Key: COCOON-2168
URL: https://issues.apache.org/jira/browse/COCOON-2168
Project: Cocoon
Issue Type: Bug
Components: Cocoon Core
Affects Versions: 2.2-dev (Current SVN)
Reporter: Felix Knecht
Attachments: ResourceReader.diff, test-case.tar.gz

When reading a huge resource (e.g. a 700 MB file) the ResourceReader produces an overflow due to the BufferedOutputStream that is used (and forced to be used via AbstractReader). The BufferedOutputStream flushes only at the end (or when forced to), but overrides the flush method to do nothing. As I don't know exactly where the BufferedOutputStream is used and what kind of impact changing it there would have, I'm just going to fix the ResourceReader.
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
On 11.03.2008 04:48, Carsten Ziegeler wrote:

We could argue about another default value than -1 though. Something like 1024^2. What do others think? Shall we change the default value from "buffer everything" (which led to the OutOfMemoryError [1]) to something more secure, in the sense of avoiding a potential source of error? Besides mentioning it on the changes page, we have to set it to a value that's unlikely to be hit by a normal web application, so that user applications' behavior changes only in extreme cases. That's why I suggested 1 MB.

Hmm, I'm not sure we should change the default value. The idea of this default was to make sure that error handling works out of the box. If you have special cases like the one mentioned in the bug, it makes more sense IMHO to fine-tune those special cases (for instance by not buffering).

But I fear hardly anybody is aware of, or even uses, the feature.

The output buffer value is one of the settings that is optimized for development and should be tweaked for production usage.

It's not really development, is it? I mean, even if you cannot reset the output buffer completely, you will still get the error markup appended, and for development I would not care about how this looks :)

Being aware of the potential change in behavior, I also chose a quite large buffer of 1 MB so that hardly anybody should be affected. We could also discuss making it even bigger, like 10 MB. But I consider a buffer that's flushed too early once in a while better than an OOME in the default setup. And people can still change it to -1 and get endless buffering if they really need it. But at least they are aware of the effects then.

Joerg
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Joerg Heinicke wrote: On 11.03.2008 04:48, Carsten Ziegeler wrote: We could argue about another default value than -1 though. Something like 1024^2. What do others think? Shall we change the default value from "buffer everything" (which led to the OutOfMemoryError [1]) to something more secure, in the sense of avoiding a potential source of error? Besides mentioning it on the changes page, we have to set it to a value that's unlikely to be hit by a normal web application, so that user applications' behavior changes only in extreme cases. That's why I suggested 1 MB. Hmm, I'm not sure we should change the default value. The idea of this default was to make sure that error handling works out of the box. If you have special cases like the one mentioned in the bug, it makes more sense IMHO to fine-tune those special cases (for instance by not buffering). But I fear hardly anybody is aware of, or even uses, the feature.

Yes, that's possible.

The output buffer value is one of the settings that is optimized for development and should be tweaked for production usage. It's not really development, is it? I mean, even if you cannot reset the output buffer completely, you will still get the error markup appended, and for development I would not care about how this looks :)

Hmm, I would never rely on the default error handling for these cases in production environments. If something has already been written to the output stream, it's too late anyway. But I see your point.

Being aware of the potential change in behavior, I also chose a quite large buffer of 1 MB so that hardly anybody should be affected. We could also discuss making it even bigger, like 10 MB. But I consider a buffer that's flushed too early once in a while better than an OOME in the default setup. And people can still change it to -1 and get endless buffering if they really need it. But at least they are aware of the effects then.
Hmm, ok, we could change this in the main sitemap as a default configuration while leaving the Java code untouched. However, I still think this is not needed: if people want to stream huge responses, they should think about what they are doing and configure everything accordingly. I totally agree that we lack documentation here.

Carsten

-- Carsten Ziegeler [EMAIL PROTECTED]
Re: svn commit: r635881 - /cocoon/trunk/core/cocoon-servlet-service/cocoon-servlet-service-impl/src/main/java/org/apache/cocoon/servletservice/spring/ServletFactoryBean.java
[EMAIL PROTECTED] writes:

Author: reinhard
Date: Tue Mar 11 03:58:38 2008
New Revision: 635881
URL: http://svn.apache.org/viewvc?rev=635881&view=rev
Log: the context attribute might not exist

[...]
+        if (contextPath != null) {
+            int tmp = contextPath.indexOf(':');
+            boolean tmp2 = !(contextPath.startsWith("file:") || contextPath.startsWith("/") || contextPath.indexOf(':') == -1);

This is not a solution IMHO. I forgot to bring this issue to the mailing list, my fault. Actually, I think we should disallow empty context-path and mount-path attributes because there is no sane way to handle such cases. If you don't set contextPath, you break a contract in the ServletContext class because the getResource() method no longer works. I think we should just change our schema and throw an exception when either of these attributes is null. WDYT?

-- Grzegorz Kossakowski
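[Editorial note] The condition being discussed in r635881 classifies a contextPath by whether it carries a non-file URI scheme: it is skipped when the path is a file: URI, an absolute filesystem path, or contains no scheme separator at all. Isolated as a predicate it reads as below; this is a sketch for illustration only, the method name hasNonFileScheme is ours, and the real commit inlines the logic.

```java
public class ContextPathCheck {

    /**
     * True when contextPath carries a scheme other than file:
     * (e.g. "jar:file:/x.jar"), i.e. it is neither a file: URI,
     * nor an absolute path, nor a plain scheme-less path.
     * Mirrors the boolean from r635881.
     */
    public static boolean hasNonFileScheme(String contextPath) {
        return !(contextPath.startsWith("file:")
                || contextPath.startsWith("/")
                || contextPath.indexOf(':') == -1);
    }

    public static void main(String[] args) {
        System.out.println(hasNonFileScheme("file:/some/dir"));  // false
        System.out.println(hasNonFileScheme("/webapp"));         // false
        System.out.println(hasNonFileScheme("jar:file:/x.jar")); // true
        System.out.println(hasNonFileScheme("blocks"));          // false
    }
}
```

Grzegorz's objection stands regardless of how the check is written: a null or empty contextPath has no sane handling, so validating it up front (schema plus exception) is the cleaner design than branching around it here.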
Re: [#COCOON-2168] ResourceReader produces Java Heap Overflow when reading a huge resource - ASF JIRA
Carsten Ziegeler wrote: Hmm, ok, we could change this in the main sitemap as a default configuration while leaving the Java code untouched. However, I still think this is not needed: if people want to stream huge responses, they should think about what they are doing and configure everything accordingly. I totally agree that we lack documentation here.

+1

Best Regards, Antonio Gallardo.