Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread RSmith


On 2014/05/28 20:26, Warren Young wrote:

On 5/28/2014 11:20, jose isaias cabrera wrote:


I would rather have the speed
than the size.




Preferring speed to size is an engineering decision which is easy on a desktop/server system (such as most Windows/OSX/*nix 
implementations), but it may well be the opposite on a phone or other handheld device or small integrated system, where I/O is 
significantly slower and size is paramount. What's more, SQLite caters (and has always catered) for these embedded implementations 
- and while today on most ARM things or Android/iSomething systems size is becoming less of a concern than it historically was, 
SQLite's focus is still correctly aimed at un-bulk.


Many years ago, I read an article written by a Microsoft employee where they said they built Windows' own binaries optimized for 
size rather than speed, since in today's L1/L2/L3 world, size *is* speed.


Bigger code gets kicked out of the processor cache faster, so the processor has 
to go back to main memory more often.


Others have already responded with lots of caveats to this, but let me add: never confuse file size with resulting code size. One 
optimization may be to provide a few different versions of the same set of code that get selected based on some criteria. The people who 
make the optimizers are usually quite well versed in what makes for faster execution and/or smaller size, and it is unlikely that they will 
have made really bad choices visible from the sideline by us forum dwellers - no matter which of the OSes they mainly serve.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Scott Robison
On May 28, 2014 12:36 PM, "Drago, William @ MWG - NARDAEAST"
> Don't modern compilers consider what effects the speed optimizations will
have on the pipeline and the cache and optimize accordingly?

I think they might try to in a broad way, but we live in a world with
multiple models of CPUs from multiple manufacturers with varying amounts of
cache and varying pipeline details and so on. If you knew you could target
a specific CPU that would be possible, but usually you don't have the
needed information.


Re: [sqlite] Load time performance

2014-05-28 Thread Bert Huijben


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Stephen Chrzanowski
> Sent: Wednesday, 28 May 2014 17:33
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Load time performance
> 
> Caching or not, I don't experience the same thing with opening a read only
> text file in Notepad.  For S'n'Gs, I had a random text file sitting on my
> desktop (A config file for a game, and it is pure text) and decided to use
> it as a subject of abuse.  Its original attribute is set so read only is
> OFF.  I opened it up twice in PSPad just to try and eliminate any kind of
> cache chance.  The second load was instant as the PSPad software was
> actually running the second time I ran.  I closed the file, set the R/O
> attribute, and re-opened.  Again, instant load.

This sounds like hitting a retry loop. The SQLite Windows VFS implements a
few of those to avoid issues with virus scanners, but it shouldn't hit those
in such a case... But it would be hard to tell what caused the access-denied
error that triggered the retry.


Bert 



Re: [sqlite] Reading compressed database files

2014-05-28 Thread Roger Binns

On 28/05/14 02:26, Hadashi, Rinat wrote:
> I have 13 tables, of which 2 are huge, 2 are medium and the rest are
> very small. My huge tables have 3 columns: numeric, numeric and varchar
> with millions of rows. I keep an index on the numeric columns.
> 
> Does that bring up any column ordering suggestion?

Showing the schema and the slowest query will help.  Also, what is the average
size of the varchar values?  What operating system and filesystem are you
using?

SQLite stores each row with its columns encoded sequentially.  If, for
example, your varchar values were around 32kb, then reading two rows would require
seeking/reading about every 32kb, which is far less efficient than if they
were 10 bytes in size, in which case multiple rows come back with each read.

You haven't mentioned what you have tried already.  Good starters are running
vacuum after populating the database and determining the optimal page
size.  The latter will depend on your queries - e.g. a 64kb page size will
result in 64kb of I/O even if only one byte is needed from a page.  If you
use NTFS compression, it operates on units of 64kb, so a 64kb page size
would be optimal.
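As a minimal sketch of the vacuum-and-page-size advice, using Python's built-in sqlite3 module (the 64kb value and the three-column table shape are illustrative, not taken from the poster's schema):

```python
import sqlite3

# Pick a page size before the database is populated, load the data,
# then VACUUM to rebuild the file compactly at that page size.
conn = sqlite3.connect(":memory:")          # use a real file path in practice
conn.execute("PRAGMA page_size = 65536")    # must precede population
conn.execute("CREATE TABLE t (a NUMERIC, b NUMERIC, c TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(i, i * 2, "row %d" % i) for i in range(1000)])
conn.commit()
conn.execute("VACUUM")                      # rebuild compactly after populating
page = conn.execute("PRAGMA page_size").fetchone()[0]
print(page)
```

Repeating the load with different page sizes against a representative query set is how you would measure which setting wins.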

I recommend you have a deterministic, repeatable, representative set of data
and queries.  That way you can try different settings like page size, file
system compression and operating system tuning (if applicable).

There isn't some secret magic wand that will suddenly make things faster -
instead you need to measure and tweak multiple places.

Roger



Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread jose isaias cabrera

"Richard Hipp" wrote...


On Wed, May 28, 2014 at 1:20 PM, jose isaias cabrera
wrote:



3. Is there a spot anywhere that has clear steps on creating the Sqlite3
DLL?



http://www.sqlite.org/draft/howtocompile.html#dll

The "draft" page above will be promoted to the official website at the next release.


Muchas gracias. 




Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Richard Hipp
On Wed, May 28, 2014 at 1:20 PM, jose isaias cabrera
wrote:

>
> 3. Is there a spot anywhere that has clear steps on creating the Sqlite3
> DLL?
>

http://www.sqlite.org/draft/howtocompile.html#dll

The "draft" page above will be promoted to the official website at the next
release.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Warren Young

On 5/28/2014 12:35, Drago, William @ MWG - NARDAEAST wrote:


Bigger code gets kicked out of the processor cache faster, so the
processor has to go back to main memory more often.


Don't modern compilers consider what effects the speed optimizations will have 
on the pipeline and the cache and optimize accordingly?


The compiler probably won't be looking at things like cache pressure. 
All the compiler will care about is that this function now executes 
twice as fast, yay!  It won't consider that it had to make it twice as 
large as the -Os version, which will therefore stay in cache 4x as long, 
so that cache thrashing will throw away the 2x benefit.


As you say, it probably won't make a function so large that it never 
fits into L1 in the first place.



Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Warren Young

On 5/28/2014 12:26, Warren Young wrote:

On 5/28/2014 11:20, jose isaias cabrera wrote:


I would rather have the speed
than the size.


in today's L1/L2/L3 world, size *is* speed.


Also, there is a pretty hard limit on how much code micro-optimization 
can help a DBMS.  It's a fundamentally I/O-limited problem.  Disk is 
many (4ish?) orders of magnitude slower than main RAM, and the CPU 
caches are orders of magnitude faster than that.


http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html

That is to say, if you made every code path in SQLite zero length, it 
would do approximately *squat* to improve the time it takes to get your 
query results.


Only intelligent algorithms matter here, not micro-optimizations. 
Better indexes, smarter query planners, etc.



Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Drago, William @ MWG - NARDAEAST
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Warren Young
> Sent: Wednesday, May 28, 2014 2:26 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5
> beta
>
> On 5/28/2014 11:20, jose isaias cabrera wrote:
> >
> > I would rather have the speed
> > than the size.
>
> Many years ago, I read an article written by a Microsoft employee where
> they said they built Windows' own binaries optimized for size rather
> than speed, since in today's L1/L2/L3 world, size *is* speed.
>
> Bigger code gets kicked out of the processor cache faster, so the
> processor has to go back to main memory more often.

Don't modern compilers consider what effects the speed optimizations will have 
on the pipeline and the cache and optimize accordingly?

-Bill




CONFIDENTIALITY, EXPORT CONTROL AND DISCLAIMER NOTE:This e-mail and any 
attachments are solely for the use of the addressee and may contain information 
that is privileged or confidential. Any disclosure, use or distribution of the 
information contained herein is prohibited. In the event this e-mail contains 
technical data within the definition of the International Traffic in Arms 
Regulations or Export Administration Regulations, it is subject to the export 
control laws of the U.S.Government. The recipient should check this e-mail and 
any attachments for the presence of viruses as L-3 does not accept any 
liability associated with the transmission of this e-mail. If you have received 
this communication in error, please notify the sender by reply e-mail and 
immediately delete this message and any attachments.


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Scott Robison
On Wed, May 28, 2014 at 12:26 PM, Warren Young  wrote:

> On 5/28/2014 11:20, jose isaias cabrera wrote:
>
>>
>> I would rather have the speed
>> than the size.
>>
>
> Many years ago, I read an article written by a Microsoft employee where
> they said they built Windows' own binaries optimized for size rather than
> speed, since in today's L1/L2/L3 world, size *is* speed.
>
> Bigger code gets kicked out of the processor cache faster, so the
> processor has to go back to main memory more often.
>

Good point. Many people fail to take into account that the real key to
speed is not what tricks a compiler can perform while translating code from
source to object form, but what algorithms are used in the source code. A
highly optimized slow algorithm is rarely if ever going to outperform a
better algorithm.
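A quick illustration of this point (the data set and sizes are made up for the demo): a linear scan stays O(n) no matter how carefully it is written, while binary search on sorted data is O(log n) and wins easily.

```python
import bisect
import timeit

data = list(range(100_000))   # already sorted
target = 99_999               # worst case for the linear scan

def linear_search(seq, x):
    # the "tuned slow algorithm": a straightforward O(n) scan
    for i, v in enumerate(seq):
        if v == x:
            return i
    return -1

def binary_search(seq, x):
    # the better algorithm: O(log n) via bisect
    i = bisect.bisect_left(seq, x)
    return i if i < len(seq) and seq[i] == x else -1

assert linear_search(data, target) == binary_search(data, target) == 99_999
t_linear = timeit.timeit(lambda: linear_search(data, target), number=20)
t_binary = timeit.timeit(lambda: binary_search(data, target), number=20)
print(t_binary < t_linear)
```

No amount of compiler optimization applied to the linear version closes a gap that grows with the data size.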


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Warren Young

On 5/28/2014 11:20, jose isaias cabrera wrote:


I would rather have the speed
than the size.


Many years ago, I read an article written by a Microsoft employee where 
they said they built Windows' own binaries optimized for size rather 
than speed, since in today's L1/L2/L3 world, size *is* speed.


Bigger code gets kicked out of the processor cache faster, so the 
processor has to go back to main memory more often.



Re: [sqlite] Questions regarding using test_onefile as VFS

2014-05-28 Thread Luca Sturaro
Hi. No, you have to compile your VFS together with the sqlite3 amalgamation and its shell
in order to use it.
After compiling and linking, you will be able to run your VFS. Remember to
register your VFS so that it is available.

Hope this helps,
Regards,
Luca
On 28 May 2014 at 14:41, "김병준"  wrote:

> The documentation seems to state that in order to use test_onefile,
> instead of providing an option when compiling sqlite3, test_onefile must be
> set as vfs with the -vfs command option in the shell. Is my understanding
> correct?
>
> The documentation does not provide examples using vfs demo files such as
> test_onefile or test_demovfs.  In order to use those vfs demo files, do I
> need to edit the shell source code myself?
>
> Are there any existing documentations on the usage of files such as
> test_demovfs or test_onefile? Or are there any tips or books on this matter
> that you can recommend me?


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread jose isaias cabrera


"Richard Hipp" wrote...

On Wed, May 28, 2014 at 12:56 PM, jose isaias cabrera 


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread Richard Hipp
On Wed, May 28, 2014 at 12:56 PM, jose isaias cabrera  wrote:

>
> Just noticed something...  It may be nothing, but the MinGW built DLL has
> a size of 645KB while the MSVC built one has a size of 962KB.  Just under
> 33% bigger. I hope there is nothing missing on the MinGW one. :-)  It's so
> weird how MS DLLs and programs are always so much bigger in size than
> non-MS built ones.  Just food for thoughts...
>

The MSVC DLL is 64-bit and the MinGW DLL is 32-bit.  That accounts for part
of the difference.  Additionally, MinGW was run with the -Os option
(optimize for small size) whereas MSVC was run with -O2 (optimize for
maximum speed).  So MSVC is probably doing lots of function in-lining and
loop-unrolling that might make the code a little faster, but also makes it
bigger.


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-05-28 Thread jose isaias cabrera


"Richard Hipp" wrote...



On Tue, May 27, 2014 at 10:50 AM, Richard Hipp  wrote:


On Tue, May 27, 2014 at 10:39 AM, jose isaias cabrera <
cabr...@wrc.xerox.com> wrote:


H... I am running the original DLL created for 3.8.4.3 on the WinXP and
it works fine, so it was not a change as far as v3.17 and before, but just
this new DLL.  I am wondering if it is also because it is a pre-release and
the "released" version will work fine.


It is probably because Dan usually builds the release DLLs using mingw
(32-bit) but I built those pre-release DLLs using MSVC 2012.



Fresh DLLs (and source code) with all the latest updates and enhancements
are now on the website:  http://www.sqlite.org/download.html

This time I built the 32-bit DLL using mingw instead of MSVC.  (MSVC was
still used for the 64-bit DLL.)  So perhaps it will work correctly on
WinXP.  Please let me know one way or the other.  Thanks.


Just noticed something...  It may be nothing, but the MinGW built DLL has a 
size of 645KB while the MSVC built one has a size of 962KB.  Just under 33% 
bigger. I hope there is nothing missing on the MinGW one. :-)  It's so weird 
how MS DLLs and programs are always so much bigger in size than non-MS built 
ones.  Just food for thoughts...


Thanks.
josé 




Re: [sqlite] Reading compressed database files

2014-05-28 Thread Richard Hipp
On Wed, May 28, 2014 at 11:33 AM, Hadashi, Rinat wrote:

> Fixed data. I write the databases once in a different flow, and then I
> only read them.
>

How well does gzip compress the database?  In other words, if you do:

 ls -l original.db
 gzip original.db
 ls -l original.db.gz

How much smaller is original.db.gz than original.db?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Reading compressed database files

2014-05-28 Thread Hadashi, Rinat
Fixed data. I write the databases once in a different flow, and then I only 
read them.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: Wednesday, May 28, 2014 4:18 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Reading compressed database files


On 28 May 2014, at 8:25am, Hadashi, Rinat  wrote:

> My databases are very big (almost 100 GB).
> I am looking for a compression solution.

Are these databases with fixed data which can be opened read-only ?  Or do you 
have to be able to make changes to them ?  It makes a huge difference in how 
easy it is to do this.

Simon.
-
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.



Re: [sqlite] Load time performance

2014-05-28 Thread Stephen Chrzanowski
Caching or not, I don't experience the same thing with opening a read only
text file in Notepad.  For S'n'Gs, I had a random text file sitting on my
desktop (A config file for a game, and it is pure text) and decided to use
it as a subject of abuse.  Its original attribute is set so read only is
OFF.  I opened it up twice in PSPad just to try to eliminate any kind of
caching effect.  The second load was instant, as PSPad was already
running the second time.  I closed the file, set the R/O
attribute, and re-opened.  Again, instant load.

On to SQLite.  Same abused text file, with the R/O attribute on.  Started
sqlite3 with the filename as a parameter, and it took about 3 seconds to
get me to the CLI.  That is WITHOUT a select statement, just to get to the
CLI.  So I quit out of the CLI, hit the UP arrow, and hit enter, and again
a 3 second wait.


Re: [sqlite] DELETE & INSERT vs. REPLACE

2014-05-28 Thread Simon Slavin

On 28 May 2014, at 3:55pm, Drago, William @ MWG - NARDAEAST 
 wrote:

> Can someone tell me what an appropriate use for REPLACE is?

REPLACE in SQlite is just a way of writing

INSERT OR REPLACE ...

Its best use is when you are not sure whether a record already exists or not.  
And the definition of 'exists' is that the new row has data which clashes with 
an existing row on at least one UNIQUE constraint, including the rule that 
primary keys must be UNIQUE.

So an example is if you have a big table of equipment, and a smaller table 
listing all equipment which is on loan.  One row for each item on loan, and 
everything not mentioned in this smaller table should be in the stock room.  
The smaller table would have a UNIQUE key on the equipment number to prevent it 
from listing one item being on loan to two different people.

If you discover that item number 515 is on loan to Barry now you need to make 
sure that Barry is listed in that table.  But you don't know whether you're 
creating a new row or replacing a row that said that the equipment was on loan 
to Amanda last week.  So you use INSERT OR REPLACE and SQLite works out whether 
it has to delete an existing row before it can insert the new one.
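A minimal sketch of that loan example, using Python's built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE on_loan (equipment_id INTEGER UNIQUE, borrower TEXT)")

# Item 515 starts out on loan to Amanda...
conn.execute("INSERT OR REPLACE INTO on_loan VALUES (515, 'Amanda')")
# ...and is later loaned to Barry.  The UNIQUE constraint on equipment_id
# makes SQLite delete Amanda's row before inserting Barry's.
conn.execute("INSERT OR REPLACE INTO on_loan VALUES (515, 'Barry')")

rows = conn.execute("SELECT borrower FROM on_loan WHERE equipment_id = 515").fetchall()
print(rows)  # [('Barry',)]
```

Either way the statement succeeds, and you never had to check whether item 515 was already listed.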

Simon.


Re: [sqlite] DELETE & INSERT vs. REPLACE

2014-05-28 Thread Drago, William @ MWG - NARDAEAST
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Simon Slavin
> Sent: Tuesday, May 27, 2014 5:21 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] DELETE & INSERT vs. REPLACE
>
>
> On 27 May 2014, at 9:12pm, Drago, William @ MWG - NARDAEAST
>  wrote:
>
> > Let me rephrase the question by asking, how should I overwrite
> existing data? For example, Jane has 5 guitars in her instrument table.
> She trades her 5 guitars for 5 new ones. Almost everything about the
> new guitars is different from the old guitars.
> >
> > My gut tells me I should simply delete where instrument_type =
> 'guitar' and then insert the new data.
>
> Go with your gut.  Since there's no relationship between the old and
> new guitars it would be just as likely that Jane would replace 5
> guitars with 4 guitars, having sold two cheap ones and bought one
> expensive one.  You should not be using REPLACE, you should be using
> DELETE, then INSERT, with the correct number of each type of command.
>
> Of course, unless your entire database is about Jane's guitars that's
> not a good table to create.  You would be more likely to have a
> database about everything Jane owns, or a database about all the
> guitars lots of people own.
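The delete-then-insert approach recommended above can be sketched with Python's built-in sqlite3 module (the table and names are illustrative, not from the poster's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instrument (owner TEXT, instrument_type TEXT, model TEXT)")
conn.executemany("INSERT INTO instrument VALUES (?, ?, ?)",
                 [("Jane", "guitar", "old-%d" % i) for i in range(5)])

# Jane trades all her guitars; the new set need not match the old rows
# one-for-one, so wipe the old rows and insert the new ones together.
with conn:  # one transaction, so readers never see a half-done swap
    conn.execute("DELETE FROM instrument WHERE owner = 'Jane' "
                 "AND instrument_type = 'guitar'")
    conn.executemany("INSERT INTO instrument VALUES (?, ?, ?)",
                     [("Jane", "guitar", "new-%d" % i) for i in range(5)])

models = [m for (m,) in conn.execute("SELECT model FROM instrument ORDER BY model")]
print(models)  # ['new-0', 'new-1', 'new-2', 'new-3', 'new-4']
```

Wrapping the DELETE and INSERTs in one transaction is what makes this safe even when the counts differ.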

Thank you. That's good news and I don't have to change any of my code.

Can someone tell me what an appropriate use for REPLACE is?

-Bill


Re: [sqlite] Load time performance

2014-05-28 Thread Simon Slavin

On 28 May 2014, at 1:44pm, Stephen Chrzanowski  wrote:

> What I found was that even changing the files read-only attribute
> (Old-school DOS 3.3 days file attribute kinda thing) the initial query lag
> hit.  The same thing happened when I unset the R/O attrib and set the NTFS
> permissions to read access only.  The database had maybe a single empty
> table and a single simple query execute.  The lag was about a second and a
> half to two seconds, I don't remember exactly.  Not a SIGNIFICANT hit, but
> a hit needless to say.  The test was done with my own application
> specifically written to test the theories, as well as using the SQLite CLI
> downloaded from the SQLite site.

You just gave a good description of how Windows's file caching system works.  
If you open a read-only file under Windows it reads some of it into cache as an 
attempt to speed up future access. If the file is opened read/write then it 
doesn't do this because you might be going to overwrite the old contents so you 
won't care what's there.

This strategy on Windows' part is a good strategy which is quite effective for 
the average user who does normal things with their computer.  It can speed up 
reading an application file and its preference file quite a bit.  But it's not 
a good strategy for people who will only want to read a small amount of data 
from a database then close it again.

It's worth noting that this strategy does not add significant time to the 
entire operation.  The entire time of accessing the database file does not 
change much.  It simply shifts time taken to the 'open' command rather than 
spread it out through subsequent read commands.

Simon.


Re: [sqlite] Reading compressed database files

2014-05-28 Thread Simon Slavin

On 28 May 2014, at 8:25am, Hadashi, Rinat  wrote:

> My databases are very big (almost 100 GB).
> I am looking for a compression solution.

Are these databases with fixed data which can be opened read-only ?  Or do you 
have to be able to make changes to them ?  It makes a huge difference in how 
easy it is to do this.

Simon.


Re: [sqlite] SQLite3 database on windows network drive and unreliable connection

2014-05-28 Thread Stephen Chrzanowski
SQLite is a desktop library, not a network-aware application.  The file
locking mechanisms lie to SQLite, which makes it extremely likely
that a connectivity loss during any WRITE statement will cause data corruption.

This isn't the fault of SQLite but of the network file system locking.  AFAIK,
there is no network file sharing protocol that works 100%.  Windows and
Linux based systems are both affected.  The problem is that the 'server' doesn't
handle multiple file locks properly because it is treating the file as a
'file', not as a data source.  Multiple copies of your application consider
the data to be theirs, so if data is being written to the WAL file, or
directly to the database, the server is going to treat both with the same
regard and potentially write the data out of order.

The setup they prefer will probably cause data loss.

http://www.sqlite.org/whentouse.html

Under the "Situations Where Another RDBMS May Work Better" section, the
first paragraph illustrates what I mentioned above.


Re: [sqlite] Load time performance

2014-05-28 Thread Stephen Chrzanowski
Read-only mode is going to cause initial lag.  I wrote an analysis of
what I saw, at least on a Windows machine, several months ago.  I don't think
I tried a Linux machine, but I can re-do it if anyone is interested.
That analysis was done on a local drive (RAID-0 SSD setup), not on a
network.  What I found was that even changing the file's read-only attribute
(old-school DOS 3.3 days file attribute kind of thing) caused the initial query lag
to hit.  The same thing happened when I unset the R/O attrib and set the NTFS
permissions to read access only.  The database had maybe a single empty
table and a single simple query to execute.  The lag was about a second and a
half to two seconds, I don't remember exactly.  Not a SIGNIFICANT hit, but
a hit needless to say.  The test was done with my own application
specifically written to test the theories, as well as using the SQLite CLI
downloaded from the SQLite site.

My application also tested for specifically stating to the SQLite engine
that the file was to be opened read only, as well as the default read/write
access method.

Setting the file to full access eliminated the initial lag entirely for all
conditions, even (If I recall correctly) when I had my application try to
access the file via R/O.


[sqlite] Questions regarding using test_onefile as VFS

2014-05-28 Thread 김병준
The documentation seems to state that in order to use test_onefile, instead of 
providing an option when compiling sqlite3, test_onefile must be set as vfs 
with the -vfs command option in the shell. Is my understanding correct? 
 
The documentation does not provide examples using vfs demo files such as 
test_onefile or test_demovfs.  In order to use those vfs demo files, do I need 
to edit the shell source code myself?
 
Are there any existing documents on the usage of files such as 
test_demovfs or test_onefile? Or are there any tips or books on this matter 
that you can recommend?


Re: [sqlite] Load time performance

2014-05-28 Thread Teg
Hello Rinat,

I found that opening the DB and reading it using normal file I/O,
throwing away the data I read, then closing it and opening it with
SQLite could reduce this initial delay. It depends on the size of the
DB though; it's mostly useful for small DBs.
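That pre-read trick can be sketched with Python's built-in sqlite3 module (the temp-file setup and table contents are just scaffolding for the demo):

```python
import os
import sqlite3
import tempfile

# Build a small demo database in a temporary file (scaffolding only).
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.commit()
conn.close()

# The trick: warm the OS file cache with plain sequential reads,
# discarding the data, before the real SQLite open.
with open(db_path, "rb") as f:
    while f.read(1 << 20):   # 1 MiB chunks
        pass

conn = sqlite3.connect(db_path)
count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
conn.close()
os.remove(db_path)
print(count)
```

Sequential reads are fast, and the subsequent SQLite page reads are then served from the OS cache rather than the (slow) network or disk.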

C

Wednesday, May 28, 2014, 3:22:49 AM, you wrote:

HR> Hi,
HR> I work in READ ONLY mode.
HR> My application connects the DB only once, at the beginning.
HR> I can't really work with local files. (I log in to any machine and get my 
files from the network.)

HR> Perhaps there are some intermediate files generated in the first
HR> load that I can prepare in advance?

HR> Rinat

HR> -Original Message-
HR> From: sqlite-users-boun...@sqlite.org
HR> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Stephen Chrzanowski
HR> Sent: Thursday, May 22, 2014 4:05 PM
HR> To: General Discussion of SQLite Database
HR> Subject: Re: [sqlite] Load time performance

HR> You shouldn't access a SQLite database remotely, except maybe for
HR> read-only access, but even that could come under fire.  No network
HR> file action can guarantee that write locks happen.

HR> If you attempt to access a file in READ-ONLY mode, you'll be greeted with a
HR> 1-5 second delay (I don't remember what the delay is right now)
HR> per connection.  If you're using one connection to the DB, then
HR> you'll experience the delay.  If you've got one connection going
HR> for the life of your application, you'll get hit with the first
HR> delay, but all subsequent queries will work.


HR> On Thu, May 22, 2014 at 8:12 AM, Hadashi, Rinat 
wrote:

>> Hi
>>
>> The first time I access a database takes significantly more time than 
>> subsequent accesses.
>> I am looking for ideas to shorten the time required for the first access.
>>
>> I work on Linux, my db. file is "somewhere" in the file system, not 
>> locally on the machine from which I am running sqlite3
>>
>> Thanks
>>
>> Rinat Hadashi
>>
>>




-- 
Best regards,
 Teg  <t...@djii.com>



Re: [sqlite] Reading compressed database files

2014-05-28 Thread Teg
Hello Rinat,

I compress my data into blobs using gzip before insert and decompress
the blobs when I read them back out, but leave the other data in the DB
uncompressed. That way I get compression, but normal operations
remain fast. This works if the data to compress is larger than roughly 4 KB. My blobs
can be 200 KB to 20 MB in size. If it's a bunch of small text columns, this
technique won't be useful.
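
In Python, the round trip looks roughly like this (sqlite3 plus gzip from the standard library; table and column names are invented for illustration):

```python
import gzip
import sqlite3

# Gzip large payloads before INSERT; small columns stay uncompressed,
# so ordinary queries on them remain fast.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, name TEXT, body BLOB)")

payload = ("some highly compressible text " * 500).encode("utf-8")  # well over ~4 KB
con.execute("INSERT INTO docs (name, body) VALUES (?, ?)",
            ("example", gzip.compress(payload)))

# Decompress only when the blob itself is read back out.
blob, = con.execute("SELECT body FROM docs WHERE name = ?",
                    ("example",)).fetchone()
assert gzip.decompress(blob) == payload
assert len(blob) < len(payload)
```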

C

Wednesday, May 28, 2014, 5:26:29 AM, you wrote:

HR> Hi Roger

HR> Where can I learn how to characterize my database?
HR> I work read-only, single connection.
HR> I have 13 tables, of which 2 are huge, 2 are medium and the rest are very small.
HR> My huge tables have 3 columns: numeric, numeric and varchar, with millions of rows.
HR> I keep an index on the numeric columns.

HR> Does that bring up any column ordering suggestion?

HR> Thanks
HR> Rinat
HR> -Original Message-
HR> From: sqlite-users-boun...@sqlite.org
HR> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Roger Binns
HR> Sent: Wednesday, May 28, 2014 12:06 PM
HR> To: General Discussion of SQLite Database
HR> Subject: Re: [sqlite] Reading compressed database files

HR> On 28/05/14 00:25, Hadashi, Rinat wrote:
>> My databases are very big (almost 100 GB). I am looking for a 
>> compression solution.
>> 
>> Did anyone have an experience with reading a compressed database?

HR> It would be helpful if you characterise your data and queries.

HR> For example if the size is due to blobs, then careful ordering of
HR> columns, or moving them to separate tables will likely be very useful.

HR> You can get compression external to SQLite by using a compressing
HR> filesystem like NTFS or btrfs (make sure to pick an appropriate
HR> page size), or by internal compression with cerod:

HR>   http://www.hwaci.com/sw/sqlite/cerod.html

HR> Roger




-- 
Best regards,
 Teg  mailto:t...@djii.com



Re: [sqlite] Reading compressed database files

2014-05-28 Thread Richard Hipp
On Wed, May 28, 2014 at 5:26 AM, Hadashi, Rinat wrote:

> Hi Roger
>
> Where can I learn how to characterize my database?
>

How much does ZIP or gzip compress your database?  The amount of
compression obtained by CEROD is usually very close to the compression
obtained simply by running the database through ZIP or gzip.
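
One rough way to measure that, sketched in Python with a small throwaway database (names are illustrative):

```python
import gzip
import os
import sqlite3
import tempfile

# Build a throwaway database, then compare its raw size against its
# gzipped size to estimate the achievable compression ratio.
path = os.path.join(tempfile.mkdtemp(), "estimate.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "row number %d" % i) for i in range(10000)])
con.commit()
con.close()

with open(path, "rb") as f:
    raw = f.read()
packed = gzip.compress(raw)
ratio = len(packed) / len(raw)
print("raw: %d bytes, gzipped: %d bytes, ratio: %.2f"
      % (len(raw), len(packed), ratio))
```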


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Reading compressed database files

2014-05-28 Thread Hadashi, Rinat
Hi Roger

Where can I learn how to characterize my database?
I work read-only, single connection.
I have 13 tables, of which 2 are huge, 2 are medium and the rest are very small.
My huge tables have 3 columns: numeric, numeric and varchar, with millions of rows.
I keep an index on the numeric columns.

Does that bring up any column ordering suggestion?

Thanks
Rinat
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Roger Binns
Sent: Wednesday, May 28, 2014 12:06 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Reading compressed database files

On 28/05/14 00:25, Hadashi, Rinat wrote:
> My databases are very big (almost 100 GB). I am looking for a 
> compression solution.
> 
> Did anyone have an experience with reading a compressed database?

It would be helpful if you characterise your data and queries.

For example if the size is due to blobs, then careful ordering of columns, or 
moving them to separate tables will likely be very useful.

You can get compression external to SQLite by using a compressing filesystem 
like NTFS or btrfs (make sure to pick an appropriate page size), or by internal 
compression with cerod:

  http://www.hwaci.com/sw/sqlite/cerod.html

Roger



Re: [sqlite] Reading compressed database files

2014-05-28 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 28/05/14 00:25, Hadashi, Rinat wrote:
> My databases are very big (almost 100 GB). I am looking for a
> compression solution.
> 
> Did anyone have an experience with reading a compressed database?

It would be helpful if you characterise your data and queries.

For example if the size is due to blobs, then careful ordering of columns,
or moving them to separate tables will likely be very useful.
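
A sketch of that restructuring, using Python's sqlite3 (schema and names are invented for illustration): keep the wide blob in a side table so scans of the hot columns never touch blob pages.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Narrow, frequently scanned columns live in the main table.
    CREATE TABLE item (
        id    INTEGER PRIMARY KEY,
        name  TEXT,
        price REAL
    );
    -- The large blob lives in its own table, fetched only on demand.
    CREATE TABLE item_blob (
        item_id INTEGER PRIMARY KEY REFERENCES item(id),
        data    BLOB
    );
""")
con.execute("INSERT INTO item VALUES (1, 'widget', 9.99)")
con.execute("INSERT INTO item_blob VALUES (1, ?)", (b"\x00" * 100000,))

# Hot queries read only the narrow table...
name, = con.execute("SELECT name FROM item WHERE id = 1").fetchone()
# ...and the blob is joined in only when actually needed.
data, = con.execute("SELECT data FROM item_blob WHERE item_id = 1").fetchone()
```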

You can get compression external to SQLite by using a compressing
filesystem like NTFS or btrfs (make sure to pick an appropriate page
size), or by internal compression with cerod:

  http://www.hwaci.com/sw/sqlite/cerod.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlOFpvsACgkQmOOfHg372QRSRACfcDqTprcD//n9yYXcGPl9yQfo
sTIAoLkIaQHR4JAwk1LbuRzCyQsx/5aN
=tYeT
-END PGP SIGNATURE-


Re: [sqlite] Reading compressed database files

2014-05-28 Thread RSmith


On 2014/05/28 09:25, Hadashi, Rinat wrote:

Hi

My databases are very big (almost 100 GB).
I am looking for a compression solution.

Did anyone have an experience with reading a compressed database?
What was the degradation in performance of queries?


Severe.

Well, it depends: if the table consists of rather large data values (which do not form part of an index), compressing those saves 
a lot of space with no significant loss in performance. If, however, you try to search (or SELECT ... WHERE) on such a compressed 
field, the performance penalty is severe: the DB engine has to decompress every item in the table to match it against 
the query requirements (unless you need an exact match and can search with an already-compressed specifier).
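
That already-compressed-specifier trick can be sketched like this (assuming values were stored with a deterministic compressor such as zlib at a fixed level; table and names are invented):

```python
import sqlite3
import zlib

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val BLOB)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, zlib.compress(("value-%d" % i).encode()))
                 for i in range(1000)])

# Compress the search term the same way instead of decompressing every row;
# zlib.compress is deterministic for a given input and level, so equal
# plaintexts produce byte-identical blobs that can be compared directly.
needle = zlib.compress(b"value-123")
row = con.execute("SELECT id FROM t WHERE val = ?", (needle,)).fetchone()
assert row == (123,)
```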


If the table doesn't contain any large fields but simply very many records, then compression is of little use. (In my experience, the 
LZ and GZ algorithms start paying dividends in size reduction after about 70 characters of standard English text; very random 
data takes longer to pay off and very repetitive data pays off sooner.)



If you do have tables with large data fields and want to experiment, you can use one of the SQLite projects we've designed, from here:

http://www.rifin.co.za/software/sqlc/

(Just get it from the downloads page.)
It's just a DB manager, but it adds a lot of extra SQL functions - all the math functions etc., plus encryption and compression functions - 
so you could, for instance, do something like this:


UPDATE sometable SET ItemDescription = Encode(ItemDescription, '') WHERE ID = nnn;

or

SELECT Decode(ItemDescription, '') FROM sometable WHERE ID = nnn;

Encode/Decode take 2 parameters: the first is the text to be encoded/decoded and the second a password. Encode both compresses and 
encrypts the data (it first compresses, then encrypts).

If the password is empty (upon encryption) it will only compress, etc.

It's all well-explained in the SQL code-hinting windows.

Those algorithms are adapted from the fastest code around and optimized for speed rather than compression ratio, but they do a decent job. 
Anyway, the point is you can see what kind of compression ratios and speeds to expect when using data compression 
in a table like yours, and whether it is feasible or not.


(PS1: Needless to say, please try it out on a copy of your database and not the 
in-use version.)
(PS2: That system's support is not via this list, so if you have questions about it, kindly mail me directly rather than boring these people 
with it.)


Cheers,
Ryan



[sqlite] Reading compressed database files

2014-05-28 Thread Hadashi, Rinat
Hi

My databases are very big (almost 100 GB).
I am looking for a compression solution.

Did anyone have an experience with reading a compressed database?
What was the degradation in performance of queries?

Thanks
Rinat Hadashi




Re: [sqlite] Load time performance

2014-05-28 Thread Hadashi, Rinat
Hi,
I work in READ ONLY mode.
My application connects the DB only once, at the beginning.
I can't really work with local files. (I log in to any machine and get my files from the network.)

Perhaps there are some intermediate files generated in the first load that I can prepare in advance?

Rinat

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Stephen Chrzanowski
Sent: Thursday, May 22, 2014 4:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Load time performance

You shouldn't access a SQLite database remotely, except maybe for read-only 
access, and even that is debatable.  No networked filesystem can guarantee 
that write locks are honored.

If you attempt to access a file in READ-ONLY mode, you'll be greeted with a
1-5 second delay (I don't remember what the delay is right now) per connection. 
 If you're using one connection to the DB, then you'll experience the delay.  
If you've got one connection going for the life of your application, you'll get 
hit with the first delay, but all subsequent queries will work.


On Thu, May 22, 2014 at 8:12 AM, Hadashi, Rinat wrote:

> Hi
>
> The first time I access a database takes significantly more time than 
> subsequent accesses.
> I am looking for ideas to shorten the time required for the first access.
>
> I work on Linux, my db. file is "somewhere" in the file system, not 
> locally on the machine from which I am running sqlite3
>
> Thanks
>
> Rinat Hadashi
>
>
