Hello,
we are considering using memcached in a slightly non-standard way: as a
data caching tool when building voice data files for our TTS
(text-to-speech) system. In this process, wave (and some other) files
are read and searched for speech chunks, which are then stored in the
data files being created. The same files are read repeatedly (many
times) during this process.

Currently, we use our own "cache" embedded in the tools which create
the data files, but it is not a very good solution.

I was thinking about using memcached, which would fit nicely into the
process (it runs under CruiseControl). The main problem is that all
the data that need to be cached occupy approximately 6 GB (and this may
grow in the future). The good thing is that the data are quite
compressible.

So, I would like to ask whether it would be possible to add an
(internal) compression mechanism to the memcached daemon. I was
thinking of a command-line switch that would turn compression on/off
(and/or set the compression level) when memcached starts. The data
would be compressed transparently, so there would be no
interface/protocol change at all; memcached would simply use less
memory.

If you agree with this enhancement, I offer to implement it within
memcached and send you the patches. I would only appreciate a little
hint about where exactly to put it (I think close to the place where
the data are read in, and just before they are written back to the
listening socket).

Of course, we could compress the data before they are sent to
memcached, but that would require many more changes in the build
scripts (written in various languages), so having the compression
inside memcached seems much easier for us.
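
For illustration, this is roughly what each tool would have to do on
its own (cache_set below is only a placeholder for whatever client
call the tool uses, not a real API), and the same logic, plus the
matching decompression on the get side, would have to be duplicated in
every language the build scripts are written in:

  #include <stdlib.h>
  #include <zlib.h>

  /* Placeholder for the client library's set call, not a real API. */
  extern int cache_set(const char *key, const void *val, size_t len);

  /* Compress a value with zlib before handing it to the client. */
  static int set_compressed(const char *key, const void *val, size_t len)
  {
      uLongf clen = compressBound(len);
      unsigned char *cbuf = malloc(clen);
      if (cbuf == NULL)
          return -1;
      if (compress((Bytef *)cbuf, &clen, (const Bytef *)val, len) != Z_OK) {
          free(cbuf);
          return -1;
      }
      int rc = cache_set(key, cbuf, clen);
      free(cbuf);
      return rc;
  }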

Thank you,
Dan T.

**********************************************************************
*  Phone : +420 377 63 2531
*  http://www.kky.zcu.cz/en/people/tihelka-daniel
*
*  Department of Cybernetics
*  Faculty of Applied Sciences
*  University of West Bohemia
*  Univerzitni 8,
*  306 14 Plzen
*  Czech Republic
*
**********************************************************************
