Hi,

Well, I had to increase the maximum heap size for both the client and the Tomcat server in order to be able to send a 20 MB binary property, and it was damn slow. I think most of the problem was caused by the base64 conversion.
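(To put a number on that, here is a minimal, hypothetical sketch of what eagerly base64-encoding a 20 MB value costs in heap. It uses the modern java.util.Base64 API purely as a stand-in; I haven't checked which codec spi2dav actually uses.)

    import java.util.Base64;

    public class Base64Overhead {
        public static void main(String[] args) {
            // stand-in for the 20 MB binary property value
            byte[] payload = new byte[20 * 1024 * 1024];

            // eager encoding inflates the data by ~33% and keeps the raw
            // bytes, the encoded bytes and the resulting String on the
            // heap at the same time (several times the payload size)
            String encoded = Base64.getEncoder().encodeToString(payload);

            System.out.printf("raw: %d bytes, base64: %d chars (+%d%%)%n",
                    payload.length, encoded.length(),
                    (encoded.length() - payload.length) * 100L / payload.length);
        }
    }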
That is not good, of course, and I have to find a solution pretty soon.

When I use the normal WebDAV interface (/repository/default/ instead of /server) and upload a large file with a traditional WebDAV client (e.g. BitKinex or cadaver), it is pretty fast and I don't have to increase the maximum heap. So maybe one quick fix would be to first send the data the traditional way and then modify the node with the help of spi2dav (rough sketches of both ideas follow after the quoted mail below). Maybe this is also one possible way to cope with it generally.

Best,
Jozef

On 1/7/08, Angela Schreiber <[EMAIL PROTECTED]> wrote:
> Thomas Mueller wrote:
> > Hi,
> >
> >> a possible way to improve this would be to make use of
> >> the global data store (JCR-926)
> >
> > That would be a solution. The idea is to avoid temporary copies of the
> > data, and persist large objects as early as possible. I'm not sure if
> > the data store should be used in the Jackrabbit SPI client.
>
> the data store would be on the server. but i would introduce
> means to be able to have the binary QValue only 'contain' the
> uri (or some other sort of identifier) and maybe the length.
> the uri would be resolved only if the stream is obtained from
> the value... something like that.
>
> and basically the object could be sent to the server upon
> creating the initial qvalue already... what would be needed
> is a separate qvaluefactory implementation and some extensions
> to the jackrabbit-webapp that would allow reading/writing the
> binary objects irrespective of their jcr property.
> that's what i meant by making use of the global data store.
>
> angela
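As for the quoted idea of a binary QValue that only 'contains' a uri and a length, here is a rough sketch of what such a value could look like. The class below is hypothetical, not the actual QValue/QValueFactory from jackrabbit-spi; it only shows the lazy-resolution shape angela describes:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URI;

    /** Hypothetical stand-in for a data-store backed binary QValue. */
    public class UriBinaryValue {

        private final URI uri;      // identifier of the binary in the data store
        private final long length;  // known up front, no stream needed

        public UriBinaryValue(URI uri, long length) {
            this.uri = uri;
            this.length = length;
        }

        /** The length is available without touching the server. */
        public long getLength() {
            return length;
        }

        /**
         * The uri is resolved only when the stream is actually obtained,
         * e.g. via a GET against a jackrabbit-webapp extension that
         * exposes the binary objects irrespective of their JCR property.
         */
        public InputStream getStream() throws IOException {
            return uri.toURL().openStream();
        }
    }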
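And for the quick fix I mention above (send the data the traditional way first), a minimal sketch using nothing but plain HttpURLConnection. The URL and the file name are made up, and real code would of course also need authentication and error handling:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DavPut {

        // stream a local file to the default WebDAV interface with a plain
        // HTTP PUT, so neither side holds the whole 20 MB in memory at once
        public static void put(File file, String davUrl) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(davUrl).openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setChunkedStreamingMode(8192); // stream, don't buffer the body

            try (InputStream in = new FileInputStream(file);
                 OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("PUT " + davUrl + " -> " + conn.getResponseCode());
        }

        public static void main(String[] args) throws IOException {
            // hypothetical file and URL; /repository/default/ is the
            // "traditional" WebDAV interface mentioned above
            put(new File("big.bin"),
                "http://localhost:8080/jackrabbit/repository/default/big.bin");
        }
    }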