In addition to the exceptions that arose when I introduced load balancing
with Nginx (the quoted text below describes the issue), strange behaviour is
now taking place: _existing_ nodes are not found, errors occur while
creating new nodes, and new exceptions appear even when connecting directly
to the repository through its IP address:
2011-11-03 11:57:40.506 WARN  [http-192.168.0.188-8080-1] JcrRemotingServlet.java:337 /pequeno/jcr:content: mandatory property {http://www.jcp.org/jcr/1.0}data does not exist
2011-11-03 12:02:05.009 ERROR [http-192.168.0.188-8080-2] ExportContextImpl.java:193 ClientAbortException: java.net.SocketException: Broken pipe
My guess is that the repository has become inconsistent. Is there a better
interpretation?
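For what it's worth, Nginx itself answers "413 Request Entity Too Large" whenever a request body exceeds its `client_max_body_size` limit (1 MB by default), independently of anything Jackrabbit does. A minimal sketch of the load-balancer configuration that would avoid that, assuming three backends on port 8080 (the second and third addresses are placeholders, not taken from this setup) and `ip_hash` as the affinity mechanism:

```nginx
http {
    upstream jackrabbit_cluster {
        # ip_hash gives simple session affinity: requests from a given
        # client IP are always forwarded to the same backend server.
        ip_hash;
        server 192.168.0.188:8080;  # remaining addresses are placeholders
        server 192.168.0.189:8080;
        server 192.168.0.190:8080;
    }

    server {
        listen 80;

        location / {
            # Raise the request-body limit; with the default of 1m,
            # Nginx rejects larger WebDAV uploads with HTTP 413
            # ("Request Entity Too Large") before they ever reach
            # the repository.
            client_max_body_size 100m;
            proxy_pass http://jackrabbit_cluster;
        }
    }
}
```

If the 413 disappears after raising `client_max_body_size`, the failure was in the proxy layer rather than in the cluster itself.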
Thanks in advance for your attention!
2011/11/2 Francisco Carriedo Scher <[email protected]>
> Hi there,
>
> I have a clustered JR environment and I operate it from the Java side
> through WebDAV. I have deployed three servers that operate in a cluster,
> and everything went fine until I tried to add a load balancing / fault
> tolerance feature to the design. I am using Nginx as the load balancer
> with the WebDAV module and, despite correctly obtaining the Repository
> object, some operations fail with the following exception:
>
> javax.jcr.RepositoryException: Request Entity Too Large
>     at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:113)
>     at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:49)
>     at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.start(RepositoryServiceImpl.java:457)
>     at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.access$200(RepositoryServiceImpl.java:399)
>     at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl.submit(RepositoryServiceImpl.java:304)
>     at org.apache.jackrabbit.jcr2spi.WorkspaceManager$OperationVisitorImpl.execute(WorkspaceManager.java:830)
>     at org.apache.jackrabbit.jcr2spi.WorkspaceManager$OperationVisitorImpl.access$500(WorkspaceManager.java:797)
>     at org.apache.jackrabbit.jcr2spi.WorkspaceManager.execute(WorkspaceManager.java:594)
>     at org.apache.jackrabbit.jcr2spi.state.SessionItemStateManager.save(SessionItemStateManager.java:139)
>     at org.apache.jackrabbit.jcr2spi.ItemImpl.save(ItemImpl.java:246)
>     at org.apache.jackrabbit.jcr2spi.SessionImpl.save(SessionImpl.java:328)
>     at com.solaiemes.filerepository.management.EmbeddableFileManager.deleteItem(EmbeddableFileManager.java:248)
>     at com.solaiemes.filerepository.management.EmbeddableFileManager.importFile(EmbeddableFileManager.java:469)
>     at com.solaiemes.filerepository.management.EmbeddableFileManager.saveFile(EmbeddableFileManager.java:80)
>     at com.solaiemes.filerepository.management.RepoShell.main(RepoShell.java:87)
> Caused by: org.apache.jackrabbit.webdav.DavException: Request Entity Too Large
>     at org.apache.jackrabbit.webdav.client.methods.DavMethodBase.getResponseException(DavMethodBase.java:172)
>     at org.apache.jackrabbit.webdav.client.methods.DavMethodBase.checkSuccess(DavMethodBase.java:181)
>     at org.apache.jackrabbit.spi2davex.RepositoryServiceImpl$BatchImpl.start(RepositoryServiceImpl.java:453)
>     ... 12 more
> The file upload operation fails, and my initial guess was that uploading
> files through WebDAV requires multiple HTTP requests, each of which the
> load balancer would forward to a different repository server. With small
> files (a few bytes) and with read-only operations it seems to work
> correctly. I added session affinity (sticky session) support to Nginx
> and recompiled it, but the same error persists.
>
> Can somebody tell me whether my guess is correct and the error is
> related to the cause I suggested?
>
> Thanks in advance for your attention!
>
>