> Hi,
> I have a nice use case: I have a file node in my workspace and I need to
> create about 70 copies of this node.
> It's a not-so-small PDF file (10 MB), and since I am using the data store
> that's no problem: the binary exists only once. The problem is the text
> extractor; it will be called 70 times :-)
> Is it possible to reuse the full-text index on a copy operation, without
> re-indexing the file?
It's an interesting use case, and probably quite common. It would be good if the text extraction ran only once for each binary, but I'm not sure how this should be implemented. One option is to extract the text in the data store itself, but that would be at the 'wrong' level.

What about this: the DataStore could return a special kind of InputStream that allows getting the DataIdentifier (a DataStoreInputStream, for example). The text extractor would then use this unique identifier to ensure that text for the same binary is only extracted once.

The same mechanism could be used to avoid copying binary data within the same repository, and across multiple repositories that share the same data store: if the data store detects such an input stream, it would first check whether the binary object already exists.

Regards,
Thomas
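P.S.: A rough sketch of the idea in Java, just to make it concrete. All names here (DataStoreInputStream, getDataIdentifier, CachingTextExtractor) are illustrative only, not existing API:

    import java.io.FilterInputStream;
    import java.io.InputStream;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical stream type: wraps the data store's stream and exposes
    // the content identifier, so consumers can recognize identical binaries.
    class DataStoreInputStream extends FilterInputStream {
        private final String dataIdentifier;

        DataStoreInputStream(InputStream in, String dataIdentifier) {
            super(in);
            this.dataIdentifier = dataIdentifier;
        }

        String getDataIdentifier() {
            return dataIdentifier;
        }
    }

    // Sketch of a text extractor that caches extracted text per identifier,
    // so copying the same 10 MB PDF 70 times only extracts the text once.
    class CachingTextExtractor {
        private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

        String extractText(InputStream in) throws Exception {
            if (in instanceof DataStoreInputStream) {
                String id = ((DataStoreInputStream) in).getDataIdentifier();
                String cached = cache.get(id);
                if (cached != null) {
                    return cached;           // same binary seen before: reuse
                }
                String text = doExtract(in); // expensive parsing happens once
                cache.put(id, text);
                return text;
            }
            return doExtract(in);            // stream without identifier: extract as usual
        }

        private String doExtract(InputStream in) throws Exception {
            // placeholder for the real PDF / HTML / ... extraction logic
            return "";
        }
    }

In a real implementation the cached text would probably live in (or next to) the index rather than in an in-memory map, but the lookup by data identifier would stay the same.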
