Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-04-02 Thread Eduardo Morras
On Fri, 29 Mar 2013 15:17:52 -0400
Jeff Archer  wrote:

> I have previously made an apparently bad assumption about this so now I
> would like to go back to the beginning of the problem and ask the most
> basic question first without any preconceived ideas.
> 
> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
> 2 variable sized BLOBS which are images.
> 
> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
> 
> So, then I thought that SQLite could manage these things nicely for me so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
> 
> So, the question is:  What is the way to get the fastest possible performance
> from SQLite when I am willing to give up all recoverability guarantees?
> Or, is it simply that I should just write directly to a file myself?

Piping through gzip -6 or xz -2 will minimize the bytes to write. If you are 
working with 5D images, xz (which uses the LZMA algorithm from the 7-Zip 
project) will do the best. 

For processing you do zcat file | processing_application or xzcat file | 
processing_application
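
The pipeline above can be sketched as follows; the file name is a placeholder, and a trivial printf stands in for the image-producing stage:

```shell
# Compress the intermediate data as it is produced. gzip -6 balances
# speed and ratio; xz -2 often does better on image-like data.
# "printf" here stands in for the real producer application.
printf 'intermediate image data\n' | gzip -6 > intermediate.gz

# Decompress on the fly for the next processing phase,
# e.g. zcat intermediate.gz | processing_application
zcat intermediate.gz
```

The same shape works with xz: replace gzip -6 with xz -2 and zcat with xzcat.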

---   ---
Eduardo Morras 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] TCL Test failures on ARM

2013-04-02 Thread Simon Slavin

On 2 Apr 2013, at 8:33am, Bk  wrote:

> 1) I am using 32-bit Linux on the embedded device.  Is it okay to give
> "-D_FILE_OFFSET_BITS=64"?

Yep.  Theoretically it may be a little slower or lack backward compatibility, 
but if it compiles at all, it should be fine.  The other way around would be 
bad, but this direction is okay as long as your whole app is built with the 
same setting.

> 2) What is the significance of "-D_FILE_OFFSET_BITS=64"?

The standard C library sees this setting and knows to use the 64-bit variants of 
file operation functions and types.  For instance, if you refer to 'off_t' in 
your code, it will be understood as 'off64_t'.

You might instead want to look into _LARGEFILE64_SOURCE .  I have no idea 
whether this does actually make a difference to SQLite, though.

Warning: all of this is getting into territory where deep understanding of your 
compiler is useful.  If you never intended to get this detailed, you may be 
going up the wrong avenue.

Simon.


Re: [sqlite] TCL Test failures on ARM

2013-04-02 Thread Bk
Hi,

I have noticed that the tests pass if built with the command below:

TCC = armv7l-timesys-linux-gnueabi-gcc   -D_FILE_OFFSET_BITS=64 
-DSQLITE_OS_UNIX=1 -I. -I${TOP}/src -I${TOP}/ext/rtree

where I have added "-D_FILE_OFFSET_BITS=64". 

1) I am using 32-bit Linux on the embedded device.  Is it okay to give
"-D_FILE_OFFSET_BITS=64"? 

2) What is the significance of "-D_FILE_OFFSET_BITS=64"?


Thank You






[sqlite] ICU and collation strength [patch]

2013-04-02 Thread François Gannaz
Hi

I wanted to compare strings in a SQLite DB while ignoring accents and
case. I mean "Événement" should be equal to "evenèment".

The ICU extension enables a case-insensitive LIKE, but 'é' will still be
different from 'É'. Anyway, using the ICU LIKE won't use any index, so I
was very reluctant to go this way.

The only solution I found was to patch the ICU extension[^1]. I added an
optional parameter to the function that creates a new collation: the
strength of the collation. Its default value is 3, and when I set it to
1, I get what I wanted.

SELECT icu_load_collation('fr_FR', 'french');
SELECT icu_load_collation('fr_FR', 'french_ci', 1); -- with patch

I first thought this was a very common need, so I had asked about it on
Stack Overflow, hoping for a quick answer. I ended up posting a detailed
answer there myself[^2].

I'm perfectly satisfied with this solution, but I thought I should share
it. I believe it could be merged into the ICU extension, because it
doesn't break compatibility, it just adds a new feature.

[^1]: 
[^2]: 


Regards
--
François Gannaz