[sqlite] disk i/o error while execute sqlite3_exec()

2009-05-31 Thread fjhjsjy
I need to use sqlite on VirtualLogix VLX (a Linux-like embedded OS).  The 
source code is from sqlite-amalgamation-3.6.14.tar.gz. For various reasons, 
I use my own Makefile to compile the source code instead of the Makefile that 
./configure generates.

My Makefile is :

CC=c64xplus-linux-gcc

LIBPATH=-L. 

INCPATH=-I.

BIN=demo2

OBJ=sqlite3.o shell.o


$(BIN):$(OBJ)
    $(CC) -g -o $(BIN) $(OBJ) $(INCPATH) $(LIBPATH) -lpthread -Wl,-ar


sqlite3.o:
    $(CC) -g -c sqlite3.c -o sqlite3.o -lpthread

shell.o:
    $(CC) -g -c shell.c $(INCPATH) -o shell.o


The Makefile generates a program that executes correctly on a PC and on an ARM 
embedded system. My problem is that when I create tables or query a record in 
an existing database on VLX, sqlite3_exec() always returns error code 10 
(SQLITE_IOERR). The filesystem on VLX is NFS, and the fopen, fread and fwrite 
functions work correctly on it.

How can I resolve this problem?



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 1 Jun 2009, at 3:38am, Olaf Schmidt wrote:

> Is fsync() somehow "messed up" on linux currently?

Correct.  Read the man page.  It doesn't do what most people think it  
does.  Nor does sync, as the man page for sync explains.

The only way, under most setups, to get all changes to the disk  
surface is to do no reads or writes for a couple of seconds.  This  
gives everything in the data path a chance to write all its caches to  
the next level of hardware.  That's what the pause between the  
commands in 'sync sync halt' does: do nothing for a while.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Olaf Schmidt

"Marcus Grimm" wrote in message
news:4a1e6034.3030...@medcom-online.de...
> just for anybody who is interested:
>
> I translated Jim's function into window code and added
> a page of 1024 that will be written, instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than sqlite
> seems to write a page but that might be related to the
> additional overhead sqlite needs to do.

Just tested the TPS here on Windows too, using a
(stdcall, VC8) compile of engine version 3.6.14.1
(based on the normal amalgamation download).

I basically get what the sqlite site says when I
use a pagesize of 4096, and somewhat better values
when I use a pagesize of 1024 (always starting with a
"fresh db").

Running a loop of 1000 small transactions (with
Begin Immediate under the hood) gives:

5400rpm Notebook-Disk, Win-XP-NTFS:
pagesize=4096
16.26msec, 61TPS (sync=2)
14.86msec, 67TPS (sync=1)
pagesize=1024
8.98msec, 111TPS (sync=2)
7.94msec, 126TPS (sync=1)

The same test against a 7200rpm drive on XP behaves
proportionally (somewhat better values, in line with
the rpm ratio).
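A loop of small Begin Immediate transactions like the one described can be sketched as follows (the script shape, table name, and parameters are illustrative, not Olaf's actual test code):

```python
import os
import sqlite3
import tempfile
import time

def measure_tps(path, page_size=1024, synchronous=2, n=1000):
    """Time n single-row BEGIN IMMEDIATE transactions; return (msec/commit, TPS)."""
    conn = sqlite3.connect(path, isolation_level=None)   # autocommit: we issue BEGIN/COMMIT ourselves
    conn.execute("PRAGMA page_size=%d" % page_size)      # only takes effect on a fresh db
    conn.execute("PRAGMA synchronous=%d" % synchronous)  # 2=FULL, 1=NORMAL, 0=OFF
    conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
    start = time.time()
    for i in range(n):
        conn.execute("BEGIN IMMEDIATE")
        conn.execute("INSERT INTO t VALUES (?)", (i,))
        conn.execute("COMMIT")
    elapsed = time.time() - start
    conn.close()
    return 1000.0 * elapsed / n, n / elapsed

if __name__ == "__main__":
    base = tempfile.mkdtemp()
    for page_size in (4096, 1024):
        db = os.path.join(base, "fresh-%d.db" % page_size)
        msec, tps = measure_tps(db, page_size=page_size)
        print("pagesize=%d: %.2fmsec, %dTPS" % (page_size, msec, tps))
```

The absolute numbers depend entirely on the disk and OS underneath, which is the point of the measurements in this thread.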


Under Linux, running the same tests (over the wine
layer, finally working against ext3 partitions),
I basically get the (high) values Jim measured.
This "wine check" (using the same binaries that
produced "normal" results on Windows) shows that this
is apparently caused by the IO subsystem (either an
incorrect wine mapping or the kernel itself)
and not by wrong default settings in the
amalgamation compile.

Running on debian-sid (kernel 2.6.29.2)
against a WD "green drive" with an ext3 partition
(1TB, only 5400rpm rotation speed):
pagesize=4096
3.99msec, 251TPS (sync=2)
3.38msec, 296TPS (sync=1)
pagesize=1024
2.82msec, 355TPS (sync=2)
2.79msec, 359TPS (sync=1)

Switching the write cache off on that disk gives
basically the same factor-10 performance decrease
Jim reported: ca. 30TPS (+-3) remaining.


And here, for comparison, the values running in a
VirtualBox VM hosted on that very same WD hard disk
(the guest was Win-XP, and the write cache on the disk
was re-enabled beforehand):
pagesize=4096
11.66msec, 86TPS (sync=2)
10.00msec, 100TPS (sync=1)
pagesize=1024
8.27msec, 121TPS (sync=2)
7.00msec, 143TPS (sync=1)

So this, too, basically matches Jim's results.


And just for those interested in values from one of
these "first affordable SSDs without the JMicron
write-lag bug" (a small OCZ Vertex 30GB,
running as the system disk):
this was measured on the same Linux machine (debian-sid,
kernel 2.6.29.2, also on ext3, and therefore still
without the TRIM command support that was (is?) proposed
for kernel 2.6.30 to better support the write behavior
of SSDs):
pagesize=4096
2.13msec, 469TPS (sync=2)
1.98msec, 505TPS (sync=1)
pagesize=1024
2.17msec, 461TPS (sync=2)
2.01msec, 497TPS (sync=1)


The write cache on that disk cannot be switched off
(although hdparm reports that the disk has one).
Maybe that is because the internal 64MB cache of
(not only) the Vertex SSDs is essential for dealing
with the flash chips efficiently in any case. I could
imagine that these disks ensure a proper cache flush
to the flash chips even on power loss: since they need
so little energy (only 1-2W), a small gold capacitor
should be enough to hold and provide the energy needed
for the final cache-flushing time frame.

I was a bit disappointed though, since I expected higher values.
From what I've read in various SSD-related posts, it seems
that (not only) the current Linux kernel still treats these
"disks" as rotational media in the FS layer. I tried my
best to signal to the IO scheduler that an SSD is present,
but it made little difference (tried 'rotational'=0 and the
scheduler switches noop, deadline, etc.; the values above
were measured with 'noop', which was only slightly better
than 'deadline', the usual 'cfq' default, or the former
'anticipatory' setting).

Also tried Jim's python script directly on that debian box
(python 2.5.4, sqlite 3.6.13 - the original sid apt packages),
achieving on the SSD:
9663 TPS (off)
345 TPS (normal)
323 TPS (full)

Then against the 5400rpm disk, which surprisingly came
up with these values when running the python script:
9441 TPS (off)
524 TPS (normal)
497 TPS (full)

Now that was baffling, since I expected the exact opposite.
I checked twice - the path settings and results for both
my tests over the wine layer and the directly performed
python runs - but there were no mistakes: the SSD came
up with better results over wine than the 5400rpm WD disk,
and the opposite is seen when running natively over python
(though against an older sqlite version in that case).

Hmm - now since the first results from windows-binaries in a
VM were matching the results on a "native XP", I 

Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 11:56pm, John Stanton wrote:

> Try studying basic database theory and technology to get a better
> understanding of the problem.

I have a pretty good understanding, I think.  Plus 25 years experience.

> You will then understand the reason for
> absolute transactional integrity and why Sqlite must use fsync or
> similar and expects fsync to be a complete implementation which  
> ensures
> that each transaction is permanently stored once a COMMIT has
> succeeded.

You won't be using Unix, then.  I'm about to horrify you: 'man fsync'

"Note that while fsync() will flush all data from the host to the  
drive (i.e. the "permanent storage device"), the drive itself may not  
physically write the data to the platters for quite some time and it  
may be written in an out-of-order sequence.
Specifically, if the drive loses power or the OS crashes, the  
application may find that only some or none of their data was written.  
The disk drive may also re-order the data so that later writes may be  
present, while earlier writes are not."
On Mac OS X systems, the following paragraph describes Apple's way of  
getting around the problem, and why PRAGMA fullfsync does what it  
does.  And the only OS it currently works on ?  Apple's.  And it's slow.
> [snip] From the early days of IT transaction processors have been  
> used to
> ensure data integrity.   An example is the development of CICS by IBM
> about 40 years or more ago.  It made it possible to run important
> applications on computers without requiring some form of manual system
> to enforce integrity.  Sqlite gives that capability to small  
> distributed
> and embedded applications.

Absolutely.  With an OS designed for ACID, and all storage stuff  
configured to avoid cached writes, you can do a great job.  That's not  
what the toys we bought to run Word or WoW quickly are.  SQLite does a  
great job of changing the file contents in a consistent way.  Standard  
hardware does not do a great job of immediately writing that file to  
disk.  It does a great job of playing 3D games with a decent frame- 
rate.  That's what it's designed for.

Simon.


Re: [sqlite] Join performance in SQLite

2009-05-31 Thread Thomas Briggs
   As others have already mentioned, hash joins can help in a
situation where there are no appropriate indexes.  They can make
things worse if the inputs aren't large enough, though, so there's
still some gray area.

   The biggest thing that other databases have going for them - MSSQL
and Oracle at least - is parallelism.  If you've got 8 or 16 or 32
threads available to you, and plenty of RAM to boot, it's often faster
to ignore the indexes and either hash join or nested-loop join subsets
of the affected tables.  That makes situations with no indexes look
better on those systems too, and SQLite can suffer in comparison.  'tis
the price paid for being a zero-config embedded database vs. a full-blown
client/server database system, that's all.

   -T

On Sat, May 30, 2009 at 11:11 AM, D. Richard Hipp  wrote:
> There has been a recent flurry of comments about SQLite at
>
>     http://www.reddit.com/r/programming/comments/8oed5/how_sqlite_is_tested/
>     http://news.ycombinator.com/item?id=633151
>
> One of the criticisms of SQLite is that it is slow to do joins.  That
> is true if SQLite is unable to figure out how to use an index to speed
> the join.  I was under the impression that SQLite actually did a
> fairly reasonable job of making use of indices, if they exist.  But
> without indices, a k-way join takes time proportional to N^k.
>
> Do other SQL database engines not have this same limitation?  Are
> MySQL and PostgreSQL and Firebird and MS-SQL and Oracle creating
> phantom indices on-the-fly to help them do joins faster, for example?
> Or do their optimizers do a better job of finding ways to use indices
> in a join?  Can somebody supply me with specific examples of joins
> that other database engines do efficiently but that SQLite does
> slowly?  Is join efficiency really a frustration to many SQLite users?
>
> Curiously, in some of our own internal tests, SQLite is much, much
> faster than MS-SQL, MySQL, and PostgreSQL for k-way joins where k is
> large - greater than 20 or 30.  (SQLite can handle up to a 64-way
> join.)  This is because SQLite uses a O(k*k) greedy algorithm for
> selecting the ordering of tables in the join whereas the other guys
> all do a much more extensive search.  So the performance loss in the
> other engines is due to the excessive time spent in the query planner,
> not the time actually running the query.  SQLite can plan a 64-way
> join in the blink of an eye, whereas PostgreSQL requires several
> minutes.
>
> But for the tests described in the previous paragraph, there were
> always good indices so that the time to actually run the join was
> approximately linear.  What about situations where you have a 4- or 5-
> way join on tables that are not indexed?  Do other database engines
> handle those more efficiently than SQLite somehow?  Is this something
> we need to look into?
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>
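DRH's unindexed-join question can be probed directly with a small experiment: time the same two-way join with and without an index on the inner table. A sketch (schema and sizes are invented for illustration; note that recent SQLite versions may build an automatic transient index for the unindexed case, which narrows the gap):

```python
import sqlite3
import time

def time_join(indexed, n=300):
    """Build two n-row tables, optionally index the inner one, and time a join."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE a (id INTEGER, v INTEGER)")
    conn.execute("CREATE TABLE b (id INTEGER, v INTEGER)")
    conn.executemany("INSERT INTO a VALUES (?, ?)", ((i, i) for i in range(n)))
    conn.executemany("INSERT INTO b VALUES (?, ?)", ((i, i) for i in range(n)))
    if indexed:
        conn.execute("CREATE INDEX b_id ON b(id)")   # lets the join probe b by index
    start = time.time()
    rows = conn.execute("SELECT COUNT(*) FROM a JOIN b ON a.id = b.id").fetchone()[0]
    elapsed = time.time() - start
    conn.close()
    return rows, elapsed

rows_slow, t_slow = time_join(indexed=False)   # nested loop: up to ~N^2 comparisons
rows_fast, t_fast = time_join(indexed=True)    # one index probe per outer row
assert rows_slow == rows_fast == 300
print("unindexed: %.5fs  indexed: %.5fs" % (t_slow, t_fast))
```

Scaling n up makes the N^k behavior of the unindexed plan visible on engines that lack another join strategy.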


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread John Stanton
Try studying basic database theory and technology to get a better 
understanding of the problem.  You will then understand the reason for 
absolute transactional integrity and why Sqlite must use fsync or 
similar and expects fsync to be a complete implementation which ensures 
that each transaction is permanently stored once a COMMIT has 
succeeded.  That ties the transaction rate to the physical rotation rate 
of the disk.

If you want Sqlite to be ACID your design needs to be centered around 
the disk you use since it is the absolute bottleneck.  A 15,000 rpm disk 
will give you more than a 4,500 rpm one.  Writing non-critical data to 
regular files or a non-ACID database will also up the throughput.

From the early days of IT, transaction processors have been used to 
ensure data integrity.   An example is the development of CICS by IBM 
about 40 years or more ago.  It made it possible to run important 
applications on computers without requiring some form of manual system 
to enforce integrity.  Sqlite gives that capability to small distributed 
and embedded applications.
> My impression is that COMMIT/ROLLBACK is for data integrity: don't  
> update the account balance if the transaction didn't work.  That sort  
> of stuff.  Not to deal with hardware failure.
>
> I think that these days your computer can never know when things have  
> reached permanent storage.  Permanent storage is too far away, behind  
> too many levels of caching for the CPU to ever know about it.  Your  
> CPU may be virtual.  Your storage medium may be virtual: a disk image,  
> a RAID controller, emulated hardware, or something like that.  If you  
> talk to a hard disk by IDE it will be using onboard caching and not  
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from  
> CPU to physical hard disk, you're never going to get any real idea of  
> what the hardware's doing.  And if you did that it would slow things  
> down too much anyway.  Cached writes do a really good job of speeding  
> things up.
>
> Simon.



Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Marcus Grimm
>
> On 31 May 2009, at 9:24pm, Jay A. Kreibich wrote:
>
>> The whole point of a transaction is that once it
>>  returns "success" on a commit, it is committed to permanent storage,
>>  no matter what -- including crashes, power failures, and other major
>>  problems.
>
> My impression is that COMMIT/ROLLBACK is for data integrity: don't
> update the account balance if the transaction didn't work.  That sort
> of stuff.  Not to deal with hardware failure.

Do not confuse the SQL COMMIT with hardware sync/commit functionality;
the subject of these posts refers mainly to the fsync/commit
issue at the hardware level.

You are right with the statements below, but that is not the
point here: the developers of sqlite put major effort into
making it practically impossible for sqlite to ever end up with
a corrupt database, even considering a power loss
or OS crash.
One requirement here is to know and define exactly
when data has reached the disk surface.
To achieve this, sqlite relies on a correct low-level
implementation of the fsync functions. The goal can't be
reached if these functions do not behave as originally
expected by the sqlite developers; sqlite then behaves
as if the synchronous pragma were set to OFF.
It's nothing to hold against sqlite, it is just something
users should be aware of, imo.


>
> I think that these days your computer can never know when things have
> reached permanent storage.  Permanent storage is too far away, behind
> too many levels of caching for the CPU to ever know about it.  Your
> CPU may be virtual.  Your storage medium may be virtual: a disk image,
> a RAID controller, emulated hardware, or something like that.  If you
> talk to a hard disk by IDE it will be using onboard caching and not
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from
> CPU to physical hard disk, you're never going to get any real idea of
> what the hardware's doing.  And if you did that it would slow things
> down too much anyway.  Cached writes do a really good job of speeding
> things up.
>
> Simon.




Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 9:24pm, Jay A. Kreibich wrote:

> The whole point of a transaction is that once it
>  returns "success" on a commit, it is committed to permanent storage,
>  no matter what -- including crashes, power failures, and other major
>  problems.

My impression is that COMMIT/ROLLBACK is for data integrity: don't  
update the account balance if the transaction didn't work.  That sort  
of stuff.  Not to deal with hardware failure.

I think that these days your computer can never know when things have  
reached permanent storage.  Permanent storage is too far away, behind  
too many levels of caching for the CPU to ever know about it.  Your  
CPU may be virtual.  Your storage medium may be virtual: a disk image,  
a RAID controller, emulated hardware, or something like that.  If you  
talk to a hard disk by IDE it will be using onboard caching and not  
admitting it to the motherboard (most of them do this these days).

Unless you're going to disable caching all the way down the line from  
CPU to physical hard disk, you're never going to get any real idea of  
what the hardware's doing.  And if you did that it would slow things  
down too much anyway.  Cached writes do a really good job of speeding  
things up.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jay A. Kreibich
On Sun, May 31, 2009 at 06:38:45PM +0100, Simon Slavin scratched on the wall:
> 
> On 31 May 2009, at 5:53pm, Jim Wilcoxson wrote:
> 
> > the real point here is that Python and SQLite aren't doing real
> > transactions
> 
> But they /are/ real transactions.

  They are not ACID transactions.  They lack what most people consider
  to be the proper degree of Durability.

> The fact that instead of being written as magnetic domains on a disk  
> surface your data is floating around several layers of caching may be  
> a problem if your computer crashes, but only then.  

  "Only then" counts.  The whole point of a transaction is that once it
  returns "success" on a commit, it is committed to permanent storage,
  no matter what -- including crashes, power failures, and other major
  problems.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 5:53pm, Jim Wilcoxson wrote:

> the real point here is that Python and SQLite aren't doing real
> transactions

But they /are/ real transactions.  You write rows, and read them back,  
and the values are there.  Use SQL commands in the way they're meant  
to be used and the DBMS does what you expect it to do.  It doesn't  
crash your application, and it doesn't return the wrong values.  The  
fact that instead of being written as magnetic domains on a disk  
surface your data is floating around several layers of caching may be  
a problem if your computer crashes, but only then.  It's not as if you  
have several computers accessing the hard disk and they were each  
getting different answers.

> But by disabling the IDE hard drive write cache:

I see what you're saying, but the IDE hard drive is unmistakably part  
of the storage system.  It may not be the write-head and platters but  
it's part of the hard drive system.  My feeling is that even if you  
know from spin speed that it's impossible to write that fast, you  
can't really start splitting the mass storage system up in that way,  
you just have to accept that a 'write' command does writing.

The subject header says 'Slow Transaction Speed'.  You get fast speed  
when you let all the caches do what they normally do (default  
settings).  Compare those with other implementations of SQL and SQLite  
does pretty well.  I don't see any reason to complain.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
Simon, the real point here is that Python and SQLite aren't doing real
transactions.  What we should be seeing here is AT MOST 30 TPS with
synchronous=normal, assuming SQLite does 3 hard-disk syncs per commit,
as was described in an earlier post.
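The dbsync.py script itself does not appear in this thread; a minimal sketch in its spirit (the table name, row count, and file handling are guesses, not the original script) would be:

```python
import os
import sqlite3
import tempfile
import time

def run(pragma, n=1000, path=None):
    """One benchmark pass: n one-row transactions under a given synchronous
    setting; prints a line in the style of the output quoted below."""
    if path is None:
        path = os.path.join(tempfile.mkdtemp(), "dbsync-%s.db" % pragma)
    conn = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN/COMMIT
    conn.execute("PRAGMA synchronous=" + pragma)
    conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
    start = time.time()
    for i in range(n):
        conn.execute("BEGIN")
        conn.execute("INSERT INTO t VALUES (?)", (i,))
        conn.execute("COMMIT")
    elapsed = time.time() - start
    conn.close()
    print("For pragma synchronous=", pragma, "Time:", elapsed, "TPS:", n / elapsed)
    return n / elapsed

if __name__ == "__main__":
    for pragma in ("off", "normal", "full"):
        run(pragma)
```

Whether "normal" and "full" come out near the rotational limit or far above it is exactly what distinguishes a working fsync from a cached one.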

On my old AMD box, the one where fsync() is broken, I get this:

[...@amd toys]$ uname -a
Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux

[...@amd toys]$ py dbsync.py
For pragma synchronous= off Time: 0.380044937134 TPS: 7893.80335553
For pragma synchronous= normal Time: 5.70174884796 TPS: 526.154357197
For pragma synchronous= full Time: 7.00615310669 TPS: 428.195038606

[...@amd toys]$ py dbsync.py
For pragma synchronous= off Time: 0.361596107483 TPS: 8296.54948689
For pragma synchronous= normal Time: 5.72849082947 TPS: 523.698141327
For pragma synchronous= full Time: 7.06226301193 TPS: 424.793015345

But by disabling the IDE hard drive write cache:

[r...@amd toys]# hdparm -W 0 /dev/sda
/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)

[r...@amd toys]# py dbsync.py
For pragma synchronous= off Time: 0.361658096313 TPS: 8295.12744379
For pragma synchronous= normal Time: 60.0442349911 TPS: 49.9631646643
For pragma synchronous= full Time: 60.0657091141 TPS: 49.9453023072

These last 2 are the expected numbers for a true "on the platters" commit
that is limited by the drive's rotational speed.  The CPU doesn't
really matter for this test.
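The "expected numbers" follow from simple arithmetic: if each sync costs at least one full platter rotation, the spindle speed bounds the commit rate. A sketch of that model (it ignores track caches and command queuing):

```python
def max_tps(rpm, syncs_per_commit):
    """Ceiling on commits/sec if every sync waits one full platter rotation."""
    rotations_per_sec = rpm / 60.0
    return rotations_per_sec / syncs_per_commit

# 5400rpm drive, 3 syncs per commit (two journal syncs plus one db sync,
# as described elsewhere in this thread):
print(max_tps(5400, 3))   # 30.0 -- the "AT MOST 30 TPS" figure above
# 7200rpm drive, same model:
print(max_tps(7200, 3))   # 40.0
```

Measured rates far above this ceiling are the signature of a write cache acknowledging the sync instead of the platters.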

Jim

On 5/31/09, Simon Slavin  wrote:
>
> On 31 May 2009, at 4:59pm, Simon Slavin wrote:
>
>> Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard
>> disk Apple puts in them.
>> Using SQLite version 3.4.0 (installed as part of the OS installation).
>>
>> SimonsMBP:Documents simon$ python --version
>> Python 2.5.1
>>
>> SimonsMBP:Documents simon$ python dbsynch.py
>> For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
>> For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
>> For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954
>
> I am, of course, an idiot.  That was running from the command line.
> If I run it as a .pyc I get
>
>  >>> import dbsynch
> For pragma synchronous= off Time: 2.69734382629 TPS: 1112.2052631
> For pragma synchronous= normal Time: 4.79093813896 TPS: 626.182161611
> For pragma synchronous= full Time: 6.31202697754 TPS: 475.283139739
>
> SimonsMBP:Documents simon$ python dbsynch.pyc
> For pragma synchronous= off Time: 2.61415219307 TPS: 1147.59959575
> For pragma synchronous= normal Time: 4.84184503555 TPS: 619.598516262
> For pragma synchronous= full Time: 6.0191090107 TPS: 498.412637928
>
> I don't see why the 'full' figures vary so much.
>
> Simon.


-- 
Software first.  Software lasts!


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 4:59pm, Simon Slavin wrote:

> Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard
> disk Apple puts in them.
> Using SQLite version 3.4.0 (installed as part of the OS installation).
>
> SimonsMBP:Documents simon$ python --version
> Python 2.5.1
>
> SimonsMBP:Documents simon$ python dbsynch.py
> For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
> For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
> For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954

I am, of course, an idiot.  That was running from the command line.   
If I run it as a .pyc I get

 >>> import dbsynch
For pragma synchronous= off Time: 2.69734382629 TPS: 1112.2052631
For pragma synchronous= normal Time: 4.79093813896 TPS: 626.182161611
For pragma synchronous= full Time: 6.31202697754 TPS: 475.283139739

SimonsMBP:Documents simon$ python dbsynch.pyc
For pragma synchronous= off Time: 2.61415219307 TPS: 1147.59959575
For pragma synchronous= normal Time: 4.84184503555 TPS: 619.598516262
For pragma synchronous= full Time: 6.0191090107 TPS: 498.412637928

I don't see why the 'full' figures vary so much.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 2:27pm, Jim Wilcoxson wrote:

> g5:~ mac$ Python-2.6.1/python.exe dbsync.py
> /Users/mac/Python-2.6.1/Lib/sqlite3/dbapi2.py:27: RuntimeWarning:
> Python C API version mismatch for module _sqlite3: This Python has API
> version 1013, module _sqlite3 has version 1012.
>  from _sqlite3 import *
> For pragma synchronous= off Time: 1.97417807579 TPS: 1519.61975305
> For pragma synchronous= normal Time: 4.06918787956 TPS: 737.247846202
> For pragma synchronous= full Time: 4.78447008133 TPS: 627.028688445


Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard  
disk Apple puts in them.
Using SQLite version 3.4.0 (installed as part of the OS installation).

SimonsMBP:Documents simon$ python --version
Python 2.5.1

SimonsMBP:Documents simon$ python dbsynch.py
For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954

So are Python 2.6.1 and SQLite 3.6.14 so much faster that they more than  
make up for a faster Core 2 Duo machine ?

Simon.


Re: [sqlite] Sqlite on a Virtual Machine of Microsoft's Hyper-V ?

2009-05-31 Thread Allen Fowler




> My experience has been that VMs strongly focus on correctness and
> reliability, and will obey sync orders and everything else databases
> depend on.

This is true at the CPU level. 


However, since I/O is a major bottleneck for VMs, things can get more complex 
inside the "storage controller hardware" emulation subsystem.  


Two things to ask about:

1) Many VM systems just rely on optimized NIC drivers and go for "network" 
storage for all app/user data.  This has implications for SQLite.

2) Even if you use local storage, you must ensure that sync() and similar calls 
presented to the virtualized storage controller are in fact passed on 
to the real storage controller.  Given the performance implications of 
multiple VMs issuing such calls, I would not just assume this to be true.

(It is possible to run DBs inside a VM safely, but perhaps not in the default 
config; just double-check.)
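One way to do that double-checking is to measure the raw fsync rate from inside the guest: if it comes out far above what one platter rotation per sync would allow, something in the stack is absorbing the sync. A rough probe (the 500/sec threshold is a judgment call, not from this thread):

```python
import os
import tempfile
import time

def fsync_rate(n=200):
    """Observed fsync() calls per second on a scratch file."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, b"x")
            os.fsync(fd)   # may be silently absorbed by a cache; that is what we're testing
        return n / (time.time() - start)
    finally:
        os.close(fd)
        os.remove(path)

rate = fsync_rate()
# A 7200rpm disk can do at most ~120 genuine one-rotation syncs per second;
# rates far above that suggest a cache is acknowledging instead of the platters.
print("fsync/sec: %.0f %s" % (rate, "(suspicious)" if rate > 500 else "(plausible)"))
```

On an SSD or battery-backed controller a high rate can be legitimate, so treat the result as a prompt to investigate, not proof of misconfiguration.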



  



Re: [sqlite] Join performance in SQLite

2009-05-31 Thread Florian Weimer
* D. Richard Hipp:

> One of the criticisms of SQLite is that it is slow to do joins.  That  
> is true if SQLite is unable to figure out how to use an index to speed  
> the join.  I was under the impression that SQLite actually did a  
> fairly reasonable job of making use of indices, if they exist.  But  
> without indices, an k-way join takes time proportional to N^k.
>
> Do other SQL database engines not have this same limitation?  Are  
> MySQL and PostgreSQL and Firebird and MS-SQL and Oracle creating  
> phantom indices on-the-fly to help them do joins faster, for example?   

PostgreSQL roughly does one of the following (when dealing with a
two-way join):

  * If one side of the join is estimated to be a small set, PostgreSQL
performs a sequential scan on it, hashes it, and joins the other
table in a hash join.

  * If both sides are large, each side is sorted, and a merge join is
performed.

Things go horribly wrong if the estimates are off and the wrong plan
is picked.

There's also a nested-loop join (which is what SQLite does), but
I haven't seen it chosen in recent versions.
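The first strategy (scan and hash the small side, stream the large side) reduces to a few lines; this is a generic sketch of the technique, not PostgreSQL's implementation:

```python
def hash_join(small, large, key_small, key_large):
    """Join two lists of dicts: build a hash table on the small input,
    then stream the large input and probe the table for matches."""
    table = {}
    for row in small:                     # one sequential scan of the small side
        table.setdefault(row[key_small], []).append(row)
    out = []
    for row in large:                     # one sequential scan of the large side
        for match in table.get(row[key_large], []):
            out.append({**match, **row})
    return out

users = [{"uid": 1, "name": "a"}, {"uid": 2, "name": "b"}]
orders = [{"uid": 1, "item": "x"}, {"uid": 1, "item": "y"}, {"uid": 3, "item": "z"}]
print(hash_join(users, orders, "uid", "uid"))
```

Cost is O(|small| + |large|) versus O(|small| * |large|) for a naive nested loop, which is why it wins when no index exists, and why it loses when the estimates are wrong and the "small" side turns out not to fit in memory.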


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
Here's an update to the commit-rate tester for Mac OSX.  With the
original version, using fsync(), OSX doesn't do real syncs because the
fsync call isn't properly implemented (my opinion).  Apple took a
shortcut with fsync and only flushes OS buffers to the drive; the
drive is then free to reorder the requests (that's okay) and execute
them at a later time (that's not okay).  You have to use
fcntl(fd, F_FULLFSYNC) to get data actually written to the platters.  At
least Apple documents this behavior.  Here's the updated program:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define MAX 3000

int main () {
  int fd;
  int loops;
  time_t elap;
  time_t start;

  if ((fd=open("big", O_RDWR+O_CREAT, 0777)) == -1) {
    perror("Error opening file");
    exit(1);
  }

  start = time(NULL);
  /* loop body restored from the description above:
     write one byte, then force it to the platters */
  for(loops=0; loops < MAX; loops++) {
    if (write(fd, "x", 1) != 1) {
      perror("Error writing file");
      exit(1);
    }
    if (fcntl(fd, F_FULLFSYNC) == -1) {  /* use fsync(fd) on non-OSX systems */
      perror("Error syncing file");
      exit(1);
    }
  }
  elap = time(NULL) - start;
  printf("%d commits in %ld seconds: %ld commits/sec\n",
         MAX, (long)elap, elap ? MAX/(long)elap : 0);
  return 0;
}
> I agree that adding this to the library, and making it accessible via
> a pragma command would be very useful.  For example, pragma commitrate
> 1000 would test the commit rate of 1000 commits and return the results
> in transactions per second as a row.
>
> If I install my app on a client's machine, I could run this test
> periodically to ensure that the system environment is going to support
> "no corruption" operation, and/or send some kind of warning to my
> customer that there is a risk of corruption because their system
> environment has problems.
>
> If it were only an external program bundled with sqlite, I couldn't
> really make use of it, because I'd have to distribute the program and
> instructions how to use it, and rely on customers to actually do it.
>
> This is a pretty small function.  Just for my own use, I'd consider
> foreign key support to be way more bloated than this.
>
> Jim
>
> On 5/29/09, Marcus Grimm  wrote:
>>> On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the
>>> wall:

 > just for anybody who is interested:

 >
 > I translated Jim's function into window code and added
 > a page of 1024 that will be written, instead of a single byte.
 > On my Win-XP system I got 55 TPS, much faster than sqlite
 > seems to write a page but that might be related to the
 > additional overhead sqlite needs to do.
>>
>> just to add: I traced a little what sqlite does when a
>> simple UPDATE is done within a transaction: it does
>> two syncs on the journal file and one final sync on the
>> db itself, so achieving something like 15 TPS is reasonable.
>>
>>
 >
 > This brings me to a nice to have feature request:
 > How about adding similar test function in the sqlite API ?
 > This might use the vfs to write pages and gives some feedback
 > on the performance of the system where sqlite runs on.
 > It might also detect problems with the commit function...
 > Just an idea...
 >

 Interesting idea.
>>>
>>>   It would make a lot more sense to make this an external utility
>>>   or an extension of the sqlite3 shell.  Adding it to the core library
>>>   is a definite case of code bloat.
>>
>> Adding it into the API would allow my application to
>> easily make the test, for example the first time it runs
>> on a system.  But it may be a problem for users that
>> apply a sqlite wrapper.
>> However, having it in the sqlite3 shell would be very useful as
>> well.
>>
>> Marcus
>>
>>>
>>>   Actually, a whole suite of performance related tests might be
>>>   interesting.
>>>
>>>-j
>>>
>>> --
>>> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>>>
>>> "Our opponent is an alien starship packed with atomic bombs.  We have
>>>  a protractor."   "I'll go home and see if I can scrounge up a ruler
>>>  and a piece of string."  --from Anathem by Neal Stephenson
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> --
> Software first.  Software lasts!
>


-- 
Software first.  Software lasts!