Hi,

> considering using jackrabbit as our jcr for a rewrite of our current app. We
> currently have about 1 PB of content and metadata that we would like to store
> in a single workspace. Will jackrabbit scale to this size? Has anyone created
> a repository of this size with jackrabbit? Should we limit the size of the
> workspaces?
How many nodes do you plan for? If it's mainly binary data (such as files), I
suggest using the data store: http://wiki.apache.org/jackrabbit/DataStore -
then it shouldn't be a problem. If there is little binary data, the problem
might be backup (it depends on the persistence manager you use).

> We are also considering using the ‘Amazon S3 Persistence Manager Project’
> found in the sandbox, has anyone used it in a production environment?

I didn't use it, but from what I know the performance might be a problem. You
would need to test it yourself.

Regards,
Thomas
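For reference, a minimal sketch of how binary content ends up in the data store
once a <DataStore> element is configured in repository.xml (see the wiki page
above): you simply store the file through the normal JCR API, and Jackrabbit
writes the binary to the data store instead of the persistence manager. The
repository location, credentials, and file paths below are illustrative
assumptions, not part of the original thread.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.Calendar;

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    import org.apache.jackrabbit.core.TransientRepository;

    public class DataStoreExample {
        public static void main(String[] args) throws Exception {
            // Assumes a local repository with default admin credentials.
            Repository repository = new TransientRepository();
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                // Store a file as nt:file / nt:resource, the usual JCR layout.
                Node root = session.getRootNode();
                Node file = root.addNode("largeFile.bin", "nt:file");
                Node content = file.addNode("jcr:content", "nt:resource");
                content.setProperty("jcr:mimeType", "application/octet-stream");
                content.setProperty("jcr:lastModified", Calendar.getInstance());

                InputStream in = new FileInputStream("/path/to/largeFile.bin");
                try {
                    // With a data store enabled, this binary is streamed to the
                    // data store (deduplicated by content hash) rather than
                    // being stored inside the persistence manager.
                    content.setProperty("jcr:data", in);
                } finally {
                    in.close();
                }
                session.save();
            } finally {
                session.logout();
            }
        }
    }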
