On Mon, Sep 14, 2009 at 18:39, freak182 <[email protected]> wrote:
> First I want to clarify on few things: in one repository can have 1 or more
> workspace, right? and in workspace can have 1 or more nodes, right?
Right.

> My question is:
> 1. is there a limit capacity in workspace? in MB or GB?
> 2. is there a limit capacity in node? in MB or GB?

Not sure if you mean a quota or a (scaling) limit. Jackrabbit does not
support a quota mechanism (i.e. a maximum amount of data per user and/or
workspace). How much data you can put into the repository depends on the
persistence configuration and the hardware. The best performance is
achieved with the bundle database persistence managers and a file data
store; the latter allows very scalable handling of large binary
properties. Note that large properties should be binaries - (very) long
string properties can slow down access to that node.

Regarding the number of nodes, it is recommended to distribute the load
across the tree and not to have many direct children below a single node -
the rough limit up to which it still scales well is around 10k child
nodes. (There is a rough sketch of both points at the end of this mail.)

> 3. if I can set the limit, will jackrabbit will auto-create nodes to store
> new files/documents?

As said above, you cannot set a limit (quota). But even if there were such
a feature - why should setting a limit lead to auto-creation of nodes? I
think it should rather throw an exception on write if the quota is
exceeded.

> 4. if im running out of space in drive c: where my original repository and i
> want to use drive d: or other harddisk to be the storage, how easy it to
> tell jackrabbit to store/read from that hard disk or drive?

Again, this depends on the persistence configuration. If you use a
database, the mechanisms provided by the database for that case can be
used (obviously). Otherwise, including with the file data store,
Jackrabbit does not have a mechanism for automatically handling a full
disk. Your application will get a RepositoryException when trying to write
to the repository if the disk is full.
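In case it helps, here is a minimal, untested sketch of the two points
above, written against the JCR 1.0 API as shipped with Jackrabbit 1.x.
The class and method names (UploadSketch, storeFile, dateFolder, the
"uploads" folder name) are made up for illustration; only the JCR calls
themselves are standard API. It stores a file as the usual
nt:file/nt:resource pair, so a configured FileDataStore can take the
jcr:data binary, and it spreads files over date-based folders so that no
single node accumulates a huge number of direct children.

import java.io.InputStream;
import java.util.Calendar;

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Illustrative helper class, not part of Jackrabbit itself.
public class UploadSketch {

    /**
     * Stores a file as the usual nt:file/nt:resource pair. With a
     * FileDataStore configured, the jcr:data binary is streamed to the
     * data store instead of being inlined by the persistence manager.
     */
    public static Node storeFile(Node parent, String name,
                                 InputStream data, String mimeType)
            throws RepositoryException {
        Node file = parent.addNode(name, "nt:file");
        Node content = file.addNode("jcr:content", "nt:resource");
        content.setProperty("jcr:mimeType", mimeType);
        content.setProperty("jcr:lastModified", Calendar.getInstance());
        // JCR 1.0-style binary property; large values are streamed
        content.setProperty("jcr:data", data);
        // save() throws a RepositoryException if the write fails,
        // e.g. when the disk holding the data store is full
        parent.getSession().save();
        return file;
    }

    /**
     * Returns (creating it if necessary) a date-based folder such as
     * /uploads/2009/09/14, so that no single node ends up with a huge
     * number of direct children.
     */
    public static Node dateFolder(Session session, String rootName)
            throws RepositoryException {
        Node root = session.getRootNode();
        Node node = root.hasNode(rootName)
                ? root.getNode(rootName)
                : root.addNode(rootName, "nt:folder");
        Calendar now = Calendar.getInstance();
        String[] parts = {
                String.valueOf(now.get(Calendar.YEAR)),
                String.format("%02d", now.get(Calendar.MONTH) + 1),
                String.format("%02d", now.get(Calendar.DAY_OF_MONTH))
        };
        for (String part : parts) {
            node = node.hasNode(part)
                    ? node.getNode(part)
                    : node.addNode(part, "nt:folder");
        }
        return node;
    }
}

Whether you distribute children by date, by a hash of the name, or by
something application-specific does not matter much; the point is simply
to keep the number of direct children per node well below the ~10k mark.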
Regards,
Alex

--
Alexander Klimetschek
[email protected]