Hi,

We do want file attachment storage to be available (Sergiu did an implementation during the summer hackathon), but as Guillaume said it should be left to the administrator's choice.

Now concerning database storage and Hibernate: does it mean streams are not available at all in Hibernate, or that they don't always work? If streams are available for the databases that support them, which ones support them?

Concerning your proposal, it's interesting: indeed, if we use streams for everything else, we do get rid of the memory consumption issue for attachments.
Now I have a few concerns:

- Complexity and management of the data: what happens if we have a corrupted DB and one of the chunks fails to save? We might end up with invalid content.
- We also have to solve other large items (like attachment history or the recycle bin of attachments).

On a side note concerning the max_allowed_packet issue in MySQL, I was able to change that value at runtime (from the mysql console). If this also works over a remote connection, maybe we could hack it and force a big value at runtime. This would be really great, because max_allowed_packet is killing us. XWiki does not report it well in many cases, and almost no customers read the documentation and set the value properly. We have also seen many cases where the database is shared with other applications and there is little access to the database configuration or ability to restart the server. To make it short, the max_allowed_packet issue is a major issue when operating XWiki.
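
For illustration, something along these lines over plain JDBC might work (a sketch only: it assumes the XWiki database account has the SUPER privilege, and that, as with the console, the new value only applies to connections opened after the change):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MaxAllowedPacketHack
{
    /**
     * Try to raise max_allowed_packet on the server before saving a large attachment.
     * 67108864 = 64 MB; the change lasts until the next server restart and does not
     * affect sessions that are already open.
     */
    public static void raiseMaxAllowedPacket(String url, String user, String password) throws Exception
    {
        Connection connection = DriverManager.getConnection(url, user, password);
        try {
            Statement statement = connection.createStatement();
            statement.execute("SET GLOBAL max_allowed_packet = 67108864");
            statement.close();
        } finally {
            connection.close();
        }
    }
}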

Before we go into large fixes for that problem, could we maybe at least check that we report errors properly (on a 2.0.5 instance we were not, at least for attachment saving failures)? We should also make sure we can always delete an attachment even when we cannot read its data into memory. Today this is not possible when we cannot read the data, either because it's too big or because one of the tables does not have any data.

Ludovic

On 18/10/10 19:55, Caleb James DeLisle wrote:
I talked with the Hibernate people about using streams and was told that it is 
not supported by all
databases.

As an alternative to the proposal below I would like to propose a filesystem 
based storage mechanism.
The main advantage of using the database to store everything is that administrators need only use mysqldump and they have their entire wiki backed up.

If we are to abandon that requirement, we can have much faster attachment 
storage by using the
filesystem. For this, I propose the BinaryStore interface remains the same but
com.xpn.xwiki.doc.BinaryObject would contain:

void addContent(InputStream content)

OutputStream addContent()

void clear()

InputStream getContent()

void getContent(OutputStream writeTo)

clear() would clear the underlying file whereas addContent would always append 
to it.
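
To make that concrete, here is a rough Java sketch of the interface these methods describe (the method names come from the list above; the exception signatures and javadoc wording are my own guesses):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public interface BinaryObject
{
    /** Append the given content to the underlying file. */
    void addContent(InputStream content) throws IOException;

    /** @return an OutputStream which appends to the underlying file when written to. */
    OutputStream addContent() throws IOException;

    /** Truncate the underlying file. */
    void clear() throws IOException;

    /** @return a stream over the current content of the underlying file. */
    InputStream getContent() throws IOException;

    /** Copy the current content of the underlying file into writeTo. */
    void getContent(OutputStream writeTo) throws IOException;
}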


The added mapping would look like this:

<class name="com.xpn.xwiki.store.doc.FilesystemBinaryObject" 
table="filesystembinaryobject">
     <id name="id" column="id">
         <generator class="native" />
     </id>

     <property name="fileURI" type="string">
         <column name="fileuri" length="255" not-null="true"/>
     </property>
</class>
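
The persistent class behind that mapping might look roughly like this (only the class name and the fileURI property come from the mapping; the id type and the plain accessors are assumptions):

package com.xpn.xwiki.store.doc;

public class FilesystemBinaryObject
{
    /** Surrogate key, assigned by the "native" generator. */
    private long id;

    /** URI of the backing file on the filesystem, at most 255 characters. */
    private String fileURI;

    public long getId() { return this.id; }

    public void setId(long id) { this.id = id; }

    public String getFileURI() { return this.fileURI; }

    public void setFileURI(String fileURI) { this.fileURI = fileURI; }
}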


This would, as with the original proposal, be useful for storing not only attachments but also attachment history, deleted attachments, and even document history or deleted documents.


WDYT?

Caleb


On 10/15/2010 04:21 PM, Caleb James DeLisle wrote:
Because the storage of large attachments is limited by database constraints and the fact that JDBC does not allow us to stream content out of the database, I propose we add a new database table, binarychunk.

The mapping will read as follows:

<class name="com.xpn.xwiki.store.hibernate.HibernateBinaryStore$BinaryChunk" 
table="binarychunk">
     <composite-id unsaved-value="undefined">
         <key-property name="id" column="id" type="integer" />
         <key-property name="chunkNumber" column="chunknumber" type="integer" />
     </composite-id>

     <property name="content" type="binary">
         <column name="content" length="983040" not-null="true"/>
     </property>
</class>
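
The mapped class could look roughly like this, written as a static nested class of HibernateBinaryStore to match the $ in the mapping. Hibernate expects a class mapped with a composite-id to be Serializable and to define equals()/hashCode() over the key properties; the accessors are my own assumption:

public class HibernateBinaryStore
{
    // ... store implementation would go here ...

    /** One row of the binarychunk table. */
    public static class BinaryChunk implements java.io.Serializable
    {
        private int id;

        private int chunkNumber;

        private byte[] content;

        public int getId() { return this.id; }

        public void setId(int id) { this.id = id; }

        public int getChunkNumber() { return this.chunkNumber; }

        public void setChunkNumber(int chunkNumber) { this.chunkNumber = chunkNumber; }

        public byte[] getContent() { return this.content; }

        public void setContent(byte[] content) { this.content = content; }

        public boolean equals(Object other)
        {
            if (!(other instanceof BinaryChunk)) {
                return false;
            }
            BinaryChunk chunk = (BinaryChunk) other;
            return chunk.id == this.id && chunk.chunkNumber == this.chunkNumber;
        }

        public int hashCode()
        {
            return 31 * this.id + this.chunkNumber;
        }
    }
}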

Notice the maximum length (983040 bytes) is a number which is divisible by many common buffer sizes and is slightly less than the default max_allowed_packet in MySQL, which means that using the binarychunk table we could store attachments of arbitrary size without hitting MySQL's default limits.
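
(983040 bytes is 15 x 65536, i.e. 960 KB, just under the 1 MB default.) The write path would then simply cut the incoming stream into chunks of that size, something like the sketch below, where saveChunk() stands in for the actual Hibernate persistence call:

import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ChunkedWriteSketch
{
    /** Matches the column length in the binarychunk mapping. */
    private static final int CHUNK_SIZE = 983040;

    /** Cut the stream into CHUNK_SIZE pieces so no single INSERT exceeds max_allowed_packet. */
    public void storeStream(int id, InputStream content) throws IOException
    {
        byte[] buffer = new byte[CHUNK_SIZE];
        int chunkNumber = 0;
        int read;
        while ((read = fill(content, buffer)) > 0) {
            saveChunk(id, chunkNumber++, Arrays.copyOf(buffer, read));
        }
    }

    /** Read up to buffer.length bytes, returning how many bytes were actually read. */
    private int fill(InputStream in, byte[] buffer) throws IOException
    {
        int total = 0;
        int read;
        while (total < buffer.length
            && (read = in.read(buffer, total, buffer.length - total)) != -1) {
            total += read;
        }
        return total;
    }

    /** Hypothetical: persist one BinaryChunk row for the given id and chunk number. */
    private void saveChunk(int id, int chunkNumber, byte[] content)
    {
        // session.save(new BinaryChunk(...)) or similar.
    }
}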


com.xpn.xwiki.store.BinaryStore will contain:

@param toLoad a binary object with an id number set, will be loaded.
void loadObject(BinaryObject toLoad)

@param toStore a binary object, if no id is present then it will be given one 
upon successful
                store, if id is present then that id number will be used.
void storeObject(BinaryObject toStore)

This will be implemented by: com.xpn.xwiki.store.hibernate.HibernateBinaryStore
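
In interface form that might read as follows (the throws clause is an assumption; the javadoc is taken from the descriptions above):

import com.xpn.xwiki.XWikiException;

public interface BinaryStore
{
    /**
     * @param toLoad a binary object with an id number set, will be loaded.
     */
    void loadObject(BinaryObject toLoad) throws XWikiException;

    /**
     * @param toStore a binary object, if no id is present then it will be given one
     *                upon successful store, if id is present then that id number will be used.
     */
    void storeObject(BinaryObject toStore) throws XWikiException;
}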


com.xpn.xwiki.doc.BinaryObject will contain:

void setContent(InputStream content)

OutputStream setContent()

InputStream getContent()

void getContent(OutputStream writeTo)

Note: The get function and set functions will be duplicated with input or 
output streams to maximize
ease of use.

This will be implemented by com.xpn.xwiki.doc.TempFileBinaryObject which will 
store the binary
content in a temporary FileItem (see Apache commons fileupload).
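
A rough idea of how TempFileBinaryObject could wrap a commons-fileupload FileItem (only the method names and the use of FileItem come from the proposal; DiskFileItemFactory, the commons-io IOUtils helper, the field name and content type are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.io.IOUtils;

/** Intended to implement the BinaryObject interface described above. */
public class TempFileBinaryObject
{
    /** Temporary storage which keeps small content in memory and spills large content to disk. */
    private final FileItem fileItem =
        new DiskFileItemFactory().createItem("binary", "application/octet-stream", false, null);

    public void setContent(InputStream content) throws IOException
    {
        OutputStream out = this.fileItem.getOutputStream();
        try {
            IOUtils.copy(content, out);
        } finally {
            out.close();
        }
    }

    public OutputStream setContent() throws IOException
    {
        return this.fileItem.getOutputStream();
    }

    public InputStream getContent() throws IOException
    {
        return this.fileItem.getInputStream();
    }

    public void getContent(OutputStream writeTo) throws IOException
    {
        IOUtils.copy(this.fileItem.getInputStream(), writeTo);
    }
}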



+ This will be able to provide a back end for not only attachment content, but 
for attachment
   archive and document archive if it is so desired.
+ I have no intent of exposing it as public API at the moment.


WDYT?

Caleb

--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost
