A word of warning if you use the traditional method: an RDBMS table with
descriptive data and a reference to the name of the file holding the
binary data. If you store a lot of files in a single directory you can get
into trouble. A robust design uses some form of directory tree to keep
the size of each individual directory below what the system utilities
can handle.
It is very tedious to discover that "ls" no longer works on your directory!
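A common way to build such a tree is to fan files out under subdirectories derived from a hash of the filename, so no one directory grows unboundedly. A minimal sketch (the helper names `shard_path` and `store` are made up for illustration):

```python
import hashlib
import os

def shard_path(root, filename, depth=2, width=2):
    """Map a filename into a hash-based directory tree, e.g.
    root/ab/cd/filename for depth=2, width=2, so each directory
    holds at most 256 subdirectories."""
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, filename)

def store(root, filename, data):
    """Write the binary data under its sharded path, creating
    intermediate directories as needed."""
    path = shard_path(root, filename)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

The RDBMS row would then record the filename (or the full sharded path), and the tree depth/width can be tuned to the expected number of files.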
Martin Jenkins wrote:
Dimitris P. Servis wrote:
I have to provide evidence that such an unorthodox solution is also
feasible
If it was me I'd "investigate" the problem by doing the "right" thing in
the first place, by which time I'd know enough to knock up the "wrong"
solution for the doubters before presenting the "proper" solution as a
fait accompli.
I have to compare access performance with flat binary files
If I remember correctly, there's no random access to BLOBs so all you'd
be doing is storing a chunk of data and reading the whole lot back. I
don't think that's a realistic test - the time it takes SQLite to find
the pages/data will be a tiny fraction of the time it will take to read
that data off the disk. You can't compare performance against reading
"records" out of the flat file because "they" won't let you do that. In
all it doesn't sound very scientific. ;)
Martin
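For what it's worth, the whole-value comparison Martin describes (storing a chunk of data and reading the whole lot back, from a SQLite BLOB versus a flat file) can be sketched roughly as below. This is only an illustrative harness, not a rigorous benchmark; the payload size and run count are arbitrary:

```python
import os
import sqlite3
import tempfile
import time

def bench(payload_size=1 << 20, runs=20):
    """Time whole-value reads of the same bytes from a flat file
    and from a single-row SQLite BLOB. Returns (t_file, t_blob)."""
    payload = os.urandom(payload_size)
    tmp = tempfile.mkdtemp()

    # Flat file: write once, then time full reads.
    flat = os.path.join(tmp, "blob.bin")
    with open(flat, "wb") as f:
        f.write(payload)

    # SQLite: one row holding the same bytes as a BLOB.
    db = sqlite3.connect(os.path.join(tmp, "blobs.db"))
    db.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")
    db.execute("INSERT INTO files VALUES (?, ?)", ("blob.bin", payload))
    db.commit()

    t0 = time.perf_counter()
    for _ in range(runs):
        with open(flat, "rb") as f:
            assert f.read() == payload
    t_file = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(runs):
        row = db.execute(
            "SELECT data FROM files WHERE name = ?", ("blob.bin",)
        ).fetchone()
        assert row[0] == payload
    t_blob = time.perf_counter() - t0

    return t_file, t_blob
```

As the post notes, with a warm cache the lookup cost is a tiny fraction of the raw I/O, so absolute numbers from a toy run like this say little on their own.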
-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------