Re: [sqlite] Slow Transaction Speed?

2009-06-09 Thread John Stanton
We are happy with the results using JFS on AIX and SUSE Linux.  Also no 
concerns with EXT3 on various Linuxes.  We have learned to avoid Windows.

Christian Smith wrote:
> On Wed, May 27, 2009 at 08:05:00AM -0400, pyt...@bdurham.com wrote:
>   
>>
>> In your experience, which Linux file system(s) provide the highest
>> performance platform for SQLite?
>>
> 
> I can't speak for DRH, but I have found that ext3 with the option
> "data=journal" gives a massive speed improvement over the default ext3
> options, mainly because the journal is contiguous and ext3 can avoid seeks
> while still ensuring data is written safely to disk. This is a big win for
> rotating-platter disks.
> 
> I did an informal benchmark of various filesystem types on
> Linux (note this is 2 1/2 years ago) as part of an OSNews thread here:
> http://www.osnews.com/permalink?184137
> 
> I'd be interested in how ext4 compares to ext3. Perhaps an evening project
> beckons.
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-06-08 Thread Christian Smith
On Wed, May 27, 2009 at 08:05:00AM -0400, pyt...@bdurham.com wrote:
> Dr. Hipp,
> 
> > Your OS and filesystem configuration have a big impact too. I've noticed, 
> > for example, that transactions are really slow on ReiserFS on Linux 
> > compared to Ext3.
> 
> In your experience, which Linux file system(s) provide the highest
> performance platform for SQLite?


I can't speak for DRH, but I have found that ext3 with the option
"data=journal" gives a massive speed improvement over the default ext3
options, mainly because the journal is contiguous and ext3 can avoid seeks
while still ensuring data is written safely to disk. This is a big win for
rotating-platter disks.

I did an informal benchmark of various filesystem types on
Linux (note this is 2 1/2 years ago) as part of an OSNews thread here:
http://www.osnews.com/permalink?184137

I'd be interested in how ext4 compares to ext3. Perhaps an evening project
beckons.

> 
> Which Linux file systems do you recommend avoiding for SQLite use?


Anything with FAT in the name...

Plus, avoid NFS due to possible locking issues.

> 
> Thank you,
> Malcolm


Re: [sqlite] Slow Transaction Speed?

2009-06-01 Thread Nicolas Williams
On Mon, Jun 01, 2009 at 03:09:46AM +0100, Simon Slavin wrote:
> On 31 May 2009, at 11:56pm, John Stanton wrote:
> > You will then understand the reason for absolute transactional
> > integrity and why Sqlite must use fsync or similar and expects fsync
> > to be a complete implementation which  ensures that each transaction
> > is permanently stored once a COMMIT has succeeded.
> 
> You won't be using Unix, then.  I'm about to horrify you: 'man fsync'
> 
> "Note that while fsync() will flush all data from the host to the  
> drive (i.e. the "permanent storage device"), the drive itself may not  
> physically write the data to the platters for quite some time and it  
> may be written in an out-of-order sequence.
> Specifically, if the drive loses power or the OS crashes, the  
> application may find that only some or none of their data was written.  
> The disk drive may also re-order the data so that later writes may be  
> present, while earlier writes are not."

The Unix operating system I use says something rather different:

 The fsync() function moves all modified data and  attributes
 of  the  file  descriptor  fildes  to a storage device. When
 fsync() returns, all in-memory modified  copies  of  buffers
 associated  with  fildes  have  been written to the physical
 medium. The fsync() function is different from sync(), which
 schedules disk I/O for all files  but returns before the I/O
 completes. The fsync() function forces all outstanding  data
 operations  to  synchronized  file integrity completion (see
 fcntl.h(3HEAD) definition of O_SYNC.)

 The fsync() function forces all currently queued I/O  opera-
 tions  associated  with  the  file  indicated  by  the  file
 descriptor fildes to the synchronized I/O completion  state.
 All I/O operations are completed as defined for synchronized
 I/O file integrity completion.
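In application code, the same guarantee is requested per file descriptor. As a minimal sketch of the write-then-sync pattern (the file name and payload are illustrative, not from the thread):

```python
import os

def durable_write(path, data):
    """Write data and ask the OS to push it to the storage device.

    flush() empties Python's userspace buffer; os.fsync() then flushes
    the OS page cache to the device.  Whether the drive's own write
    cache is also flushed is platform-dependent, as this thread shows.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

durable_write("journal.tmp", b"COMMIT\n")
```

On Mac OS X, as discussed elsewhere in the thread, an additional fcntl(fd, F_FULLFSYNC) is needed to flush the drive's own cache as well.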


> Absolutely.  With an OS designed for ACID, and all storage stuff  

I've never heard of an operating system designed for ACID.  I've heard
of operating systems that are good for enterprise use though.  (It's
hard to speak of operating systems as being designed for anything in
particular when most operating systems are fairly old and general
purpose; special purpose OSes, OTOH, are special purpose OSes.)


Re: [sqlite] Slow Transaction Speed?

2009-06-01 Thread Jim Wilcoxson
Microsoft has an interesting article on hard drive caches re: SQL Server:

http://support.microsoft.com/kb/234656

"Many disk drives (SATA, ATA, SCSI and IDE based) contain onboard
caches of 512 KB, 1 MB, and larger. Drive caches usually rely on a
capacitor and not a battery-backed solution. These caching mechanisms
cannot guarantee writes across a power cycle or similar failure point.
They only guarantee the completion of the sector write operations. As
the drives continue to grow in size, the caches become larger, and
they can expose larger amounts of data during a failure. "


On 6/1/09, John Stanton  wrote:
> Simon Slavin wrote:
>> On 31 May 2009, at 11:56pm, John Stanton wrote:
>>
>>
>>> Try studying basic database theory and technology to get a better
>>> understanding of the problem.
>>>
>>
>> I have a pretty good understanding, I think.  Plus 25 years experience.
>>
> Is it 25 years experience or 1 year 25 times?  This forum is to share
> information on Sqlite, for mutual advantage, not to get into pointless
> arguments.  Let us keep it that way.
>


-- 
Software first.  Software lasts!


Re: [sqlite] Slow Transaction Speed?

2009-06-01 Thread John Stanton
Simon Slavin wrote:
> On 31 May 2009, at 11:56pm, John Stanton wrote:
>
>   
>> Try studying basic database theory and technology to get a better
>> understanding of the problem.
>> 
>
> I have a pretty good understanding, I think.  Plus 25 years experience.
>   
Is it 25 years experience or 1 year 25 times?  This forum is to share 
information on Sqlite, for mutual advantage, not to get into pointless 
arguments.  Let us keep it that way.



Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 1 Jun 2009, at 3:38am, Olaf Schmidt wrote:

> Is fsync() somehow "messed up" on linux currently?

Correct.  Read the man page.  It doesn't do what most people think it  
does.  Nor does sync, as the man page for sync explains.

The only way, under most setups, to get all changes to the disk  
surface is to do no reads or writes for a couple of seconds.  This  
gives everything in the data path a chance to write all its caches to  
the next level of hardware.  That's what the pause between the  
commands in 'sync sync halt' does: do nothing for a while.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Olaf Schmidt

"Marcus Grimm"  schrieb im
Newsbeitrag news:4a1e6034.3030...@medcom-online.de...
> just for anybody who is interested:
>
> I translated Jim's function into window code and added
> a page of 1024 that will be written, instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than sqlite
> seems to write a page but that might be related to the
> additional overhead sqlite needs to do.

Just tested the TPS here on Windows too, using a
(stdcall, VC8) compile of engine version 3.6.14.1
(based on the normal amalgamation download).

And I basically get what the sqlite site says when
I use a pagesize of 4096 - and somewhat better values
when I use a pagesize of 1024 (always starting with a
"fresh db").

Running a loop of 1000 small transactions (with
Begin Immediate under the hood) gives:

5400rpm Notebook-Disk, Win-XP-NTFS:
pagesize=4096
16.26msec, 61TPS (sync=2)
14.86msec, 67TPS (sync=1)
pagesize=1024
8.98msec, 111TPS (sync=2)
7.94msec, 126TPS (sync=1)

The same test against a 7200rpm drive on XP behaves
proportionally (it gives somewhat better values, in line
with the rpm ratio).
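For anyone wanting to reproduce this without the VC8 harness, the loop described above can be approximated with Python's sqlite3 module. This is a rough sketch under stated assumptions (one-row inserts, invented db file name and table layout), not Olaf's actual test code:

```python
import os
import sqlite3
import time

def tps(path, page_size, synchronous, n=1000):
    """Time n small transactions (BEGIN IMMEDIATE ... COMMIT) and return TPS."""
    if os.path.exists(path):
        os.remove(path)                       # always start with a "fresh db"
    con = sqlite3.connect(path, isolation_level=None)   # autocommit; we issue BEGIN ourselves
    con.execute("PRAGMA page_size=%d" % page_size)      # must be set before the db is created
    con.execute("PRAGMA synchronous=%d" % synchronous)  # 2 = FULL, 1 = NORMAL
    con.execute("CREATE TABLE t (x INTEGER)")
    start = time.time()
    for i in range(n):
        con.execute("BEGIN IMMEDIATE")
        con.execute("INSERT INTO t VALUES (?)", (i,))
        con.execute("COMMIT")
    elapsed = time.time() - start
    con.close()
    return n / elapsed

if __name__ == "__main__":
    for page_size in (4096, 1024):
        for sync in (2, 1):
            rate = tps("bench.db", page_size, sync, n=200)
            print("pagesize=%d, sync=%d: %.0f TPS" % (page_size, sync, rate))
```

Opening the connection with isolation_level=None keeps sqlite3 out of autocommit bookkeeping, so the explicit BEGIN IMMEDIATE / COMMIT pair defines each transaction, as in Olaf's loop.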


Under Linux, running the same tests (over the wine
layer, finally working there against ext3 partitions),
I basically get the (high) values Jim has measured.
So this "wine check" (using the same binaries which
produced "normal" results on Windows) shows that this
is apparently caused by the IO subsystem (either by an
incorrect wine mapping or by the kernel itself) and not
by wrong default settings in the amalgamation compile.

Running on debian sid (kernel 2.6.29.2) against a
WD "green" drive with an ext3 partition (1TB, with
only 5400rpm rotation speed):
pagesize=4096
3.99msec, 251TPS (sync=2)
3.38msec, 296TPS (sync=1)
pagesize=1024
2.82msec, 355TPS (sync=2)
2.79msec, 359TPS (sync=1)

Switching the write cache off on that disk gives
basically the same factor-10 performance decrease
Jim was reporting: ca. 30 TPS (+-3) remaining.


And here, for comparison, are the values running within a VM
in VirtualBox, hosted on that very same WD hard disk
(the guest was Win-XP, and the write cache on that disk
was re-enabled beforehand):
pagesize=4096
11.66msec, 86TPS (sync=2)
10.00msec, 100TPS (sync=1)
pagesize=1024
8.27msec, 121TPS (sync=2)
7.00msec, 143TPS (sync=1)

So that also basically matches Jim's results.


And just for those who are interested in values
for one of these "first affordable SSDs without that
JMicron write-lag bug" (a small OCZ Vertex 30GB,
running as the system disk):
this was measured on the same Linux machine (debian sid,
kernel 2.6.29.2, again on ext3 - and therefore still
without the TRIM command support which was (is?) proposed
for kernel 2.6.30 to better support the write behaviour
of SSDs):
pagesize=4096
2.13msec, 469TPS (sync=2)
1.98msec, 505TPS (sync=1)
pagesize=1024
2.17msec, 461TPS (sync=2)
2.01msec, 497TPS (sync=1)


The write cache on that disk cannot be switched off
(although hdparm reports that the disk has one).
Maybe that is because the internal 64MB cache of
(not only) the Vertex SSDs is essential for dealing
with the flash chips efficiently in either case - and I
could imagine that these disks ensure a proper cache
flush to the flash chips even on power loss, since they
need so little energy (only 1-2W) that a small
GoldCap(acitor) should be enough to hold and provide
the energy needed for the required final cache-flush
time frame.

I was a bit disappointed though, since I expected higher values -
but from what I've read in various SSD-related posts, it
seems that (not only) the current Linux kernel still treats these
"disks" as rotational media in the FS layer. I tried my
best to signal to the IO scheduler that we have an SSD here,
but it made little difference (tried 'rotational'=0 and also the
other scheduler switches such as noop, deadline etc. - the
values I just posted were measured with 'noop', which was
only slightly better than 'deadline', the usual 'cfq', or the
former 'anticipatory' default settings).

I also tried Jim's Python script directly on that debian box
(Python 2.5.4, sqlite 3.6.13 = the original sid apt packages)
and achieved on the SSD:
9663 TPS (off)
345 TPS (normal)
323 TPS (full)

Then the same against the 5400rpm disk, which surprisingly
came up with these values when running the Python script:
9441 TPS (off)
524 TPS (normal)
497 TPS (full)

Now that was baffling, since I expected the exact opposite.
I checked twice - the path settings and the results for
my tests over the wine layer, and also the directly performed
Python results - but no mistakes there: the SSD came
up with better results over wine than the 5400rpm WD disk,
and the opposite is seen when running natively over Python
(though against an older sqlite version in that case).

Hmm - now since the first results from windows-binaries in a
VM were matching the 

Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 11:56pm, John Stanton wrote:

> Try studying basic database theory and technology to get a better
> understanding of the problem.

I have a pretty good understanding, I think.  Plus 25 years experience.

> You will then understand the reason for
> absolute transactional integrity and why Sqlite must use fsync or
> similar and expects fsync to be a complete implementation which  
> ensures
> that each transaction is permanently stored once a COMMIT has
> succeeded.

You won't be using Unix, then.  I'm about to horrify you: 'man fsync'

"Note that while fsync() will flush all data from the host to the  
drive (i.e. the "permanent storage device"), the drive itself may not  
physically write the data to the platters for quite some time and it  
may be written in an out-of-order sequence.
Specifically, if the drive loses power or the OS crashes, the  
application may find that only some or none of their data was written.  
The disk drive may also re-order the data so that later writes may be  
present, while earlier writes are not."
On Mac OS X systems, the following paragraph describes Apple's way of  
getting around the problem, and why PRAGMA fullfsync does what it  
does.  And the only OS it currently works on?  Apple's.  And it's slow.
> [snip] From the early days of IT transaction processors have been  
> used to
> ensure data integrity.   An example is the development of CICS by IBM
> about 40 years or more ago.  It made it possible to run important
> applications on computers without requiring some form of manual system
> to enforce integrity.  Sqlite gives that capability to small  
> distributed
> and embedded applications.

Absolutely.  With an OS designed for ACID, and all storage stuff  
configured to avoid cached writes, you can do a great job.  That's not  
what the toys we bought to run Word or WoW quickly are.  SQLite does a  
great job of changing the file contents in a consistent way.  Standard  
hardware does not do a great job of immediately writing that file to  
disk.  It does a great job of playing 3D games with a decent frame  
rate.  That's what it's designed for.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread John Stanton
Try studying basic database theory and technology to get a better 
understanding of the problem.  You will then understand the reason for 
absolute transactional integrity and why Sqlite must use fsync or 
similar and expects fsync to be a complete implementation which ensures 
that each transaction is permanently stored once a COMMIT has 
succeeded.  That ties the transaction rate to the physical rotation rate 
of the disk.

If you want Sqlite to be ACID your design needs to be centered around 
the disk you use since it is the absolute bottleneck.  A 15,000 rpm disk 
will give you more than a 4,500 rpm one.  Writing non-critical data to 
regular files or a non-ACID database will also up the throughput.
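As a back-of-the-envelope check of that rotation-rate bound (assuming, as a simplification, one full disk revolution per synced write):

```python
def max_tps(rpm, syncs_per_commit=1):
    """Upper bound on commits/sec if each sync costs one disk revolution."""
    revs_per_sec = rpm / 60.0
    return revs_per_sec / syncs_per_commit

print(max_tps(7200))     # 120.0
print(max_tps(15000))    # 250.0
print(max_tps(7200, 3))  # 40.0 - with the three syncs per commit noted elsewhere in the thread
```

The real ceiling is lower still, since a commit may need more than one revolution of latency per sync; the point is only that the bound scales linearly with rpm.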

From the early days of IT, transaction processors have been used to 
ensure data integrity.   An example is the development of CICS by IBM 
about 40 years or more ago.  It made it possible to run important 
applications on computers without requiring some form of manual system 
to enforce integrity.  Sqlite gives that capability to small distributed 
and embedded applications.
> My impression is that COMMIT/ROLLBACK is for data integrity: don't  
> update the account balance if the transaction didn't work.  That sort  
> of stuff.  Not to deal with hardware failure.
>
> I think that these days your computer can never know when things have  
> reached permanent storage.  Permanent storage is too far away, behind  
> too many levels of caching for the CPU to ever know about it.  Your  
> CPU may be virtual.  Your storage medium may be virtual: a disk image,  
> a RAID controller, emulated hardware, or something like that.  If you  
> talk to a hard disk by IDE it will be using onboard caching and not  
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from  
> CPU to physical hard disk, you're never going to get any real idea of  
> what the hardware's doing.  And if you did that it would slow things  
> down too much anyway.  Cached writes do a really good job of speeding  
> things up.
>
> Simon.



Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Marcus Grimm
>
> On 31 May 2009, at 9:24pm, Jay A. Kreibich wrote:
>
>> The whole point of a transaction is that once it
>>  returns "success" on a commit, it is committed to permanent storage,
>>  no matter what -- including crashes, power failures, and other major
>>  problems.
>
> My impression is that COMMIT/ROLLBACK is for data integrity: don't
> update the account balance if the transaction didn't work.  That sort
> of stuff.  Not to deal with hardware failure.

Do not mix up the SQL COMMIT with hardware sync/commit
functionality; the posts in this thread refer mainly to the
fsync/commit issue at the hardware level.

You are right with the statements below, but this is not the
point here:
The developers of sqlite put major effort into making it
practically impossible that sqlite will ever end up with
a corrupt database, even considering a power loss
or OS crash.
One requirement here is to know and define exactly
when data has reached the disk surface.
To achieve this, sqlite relies on a correct low-level
implementation of the fsync functions. The goal can't be
reached if these functions do not behave as originally
expected by the sqlite developers; sqlite then behaves
much as if the synchronous pragma were set to OFF.
It's nothing to hold against sqlite, it is just something
users should be aware of, imo.


>
> I think that these days your computer can never know when things have
> reached permanent storage.  Permanent storage is too far away, behind
> too many levels of caching for the CPU to ever know about it.  Your
> CPU may be virtual.  Your storage medium may be virtual: a disk image,
> a RAID controller, emulated hardware, or something like that.  If you
> talk to a hard disk by IDE it will be using onboard caching and not
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from
> CPU to physical hard disk, you're never going to get any real idea of
> what the hardware's doing.  And if you did that it would slow things
> down too much anyway.  Cached writes do a really good job of speeding
> things up.
>
> Simon.




Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 9:24pm, Jay A. Kreibich wrote:

> The whole point of a transaction is that once it
>  returns "success" on a commit, it is committed to permanent storage,
>  no matter what -- including crashes, power failures, and other major
>  problems.

My impression is that COMMIT/ROLLBACK is for data integrity: don't  
update the account balance if the transaction didn't work.  That sort  
of stuff.  Not to deal with hardware failure.

I think that these days your computer can never know when things have  
reached permanent storage.  Permanent storage is too far away, behind  
too many levels of caching for the CPU to ever know about it.  Your  
CPU may be virtual.  Your storage medium may be virtual: a disk image,  
a RAID controller, emulated hardware, or something like that.  If you  
talk to a hard disk by IDE it will be using onboard caching and not  
admitting it to the motherboard (most of them do this these days).

Unless you're going to disable caching all the way down the line from  
CPU to physical hard disk, you're never going to get any real idea of  
what the hardware's doing.  And if you did that it would slow things  
down too much anyway.  Cached writes do a really good job of speeding  
things up.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jay A. Kreibich
On Sun, May 31, 2009 at 06:38:45PM +0100, Simon Slavin scratched on the wall:
> 
> On 31 May 2009, at 5:53pm, Jim Wilcoxson wrote:
> 
> > the real point here is that Python and SQLite aren't doing real
> > transactions
> 
> But they /are/ real transactions.

  They are not ACID transactions.  They lack what most people consider
  to be the proper degree of Durability.

> The fact that instead of being written as magnetic domains on a disk  
> surface your data is floating around several layers of caching may be  
> a problem if your computer crashes, but only then.  

  "Only then" counts.  The whole point of a transaction is that once it
  returns "success" on a commit, it is committed to permanent storage,
  no matter what -- including crashes, power failures, and other major
  problems.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 5:53pm, Jim Wilcoxson wrote:

> the real point here is that Python and SQLite aren't doing real
> transactions

But they /are/ real transactions.  You write rows, and read them back,  
and the values are there.  Use SQL commands in the way they're meant  
to be used and the DBMS does what you expect it to do.  It doesn't  
crash your application, and it doesn't return the wrong values.  The  
fact that instead of being written as magnetic domains on a disk  
surface your data is floating around several layers of caching may be  
a problem if your computer crashes, but only then.  It's not as if you  
have several computers accessing the hard disk and they were each  
getting different answers.

> But by disabling the IDE hard drive write cache:

I see what you're saying, but the IDE hard drive is unmistakably part  
of the storage system.  It may not be the write head and platters, but  
it's part of the hard drive system.  My feeling is that even if you  
know from the spin speed that it's impossible to write that fast, you  
can't really start splitting the mass storage system up in that way;  
you just have to accept that a 'write' command does writing.

The subject header says 'Slow Transaction Speed'.  You get fast speed  
when you let all the caches do what they normally do (default  
settings).  Compare those with other implementations of SQL and SQLite  
does pretty well.  I don't see any reason to complain.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
Simon, the real point here is that Python and SQLite aren't doing real
transactions.  What we should be seeing here is AT MOST 30 TPS with
synchronous=normal, assuming SQLite does 3 hard disk syncs per commit
as was described in an earlier post.

On my old AMD box, the one where fsync() is broken, I get this:

[...@amd toys]$ uname -a
Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux

[...@amd toys]$ py dbsync.py
For pragma synchronous= off Time: 0.380044937134 TPS: 7893.80335553
For pragma synchronous= normal Time: 5.70174884796 TPS: 526.154357197
For pragma synchronous= full Time: 7.00615310669 TPS: 428.195038606

[...@amd toys]$ py dbsync.py
For pragma synchronous= off Time: 0.361596107483 TPS: 8296.54948689
For pragma synchronous= normal Time: 5.72849082947 TPS: 523.698141327
For pragma synchronous= full Time: 7.06226301193 TPS: 424.793015345

But by disabling the IDE hard drive write cache:

[r...@amd toys]# hdparm -W 0 /dev/sda
/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)

[r...@amd toys]# py dbsync.py
For pragma synchronous= off Time: 0.361658096313 TPS: 8295.12744379
For pragma synchronous= normal Time: 60.0442349911 TPS: 49.9631646643
For pragma synchronous= full Time: 60.0657091141 TPS: 49.9453023072

These last 2 are expected numbers for a true "on the platters" commit
that is limited by the drive's rotational speed.  The CPU doesn't
really matter for this test.
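The dbsync.py script itself never appears in the thread; a minimal reconstruction of what it presumably does (the transaction count, table name, and db file name are guesses) would be:

```python
import os
import sqlite3
import time

N = 500  # single-row transactions per run; the count used by the real dbsync.py is unknown

def run(mode):
    """Insert N rows, one autocommit transaction each, under the given synchronous mode."""
    if os.path.exists("dbsync.db"):
        os.remove("dbsync.db")
    con = sqlite3.connect("dbsync.db", isolation_level=None)  # autocommit mode
    con.execute("PRAGMA synchronous=" + mode)
    con.execute("CREATE TABLE t (x INTEGER)")
    start = time.time()
    for i in range(N):
        con.execute("INSERT INTO t VALUES (?)", (i,))  # each INSERT is its own transaction
    elap = time.time() - start
    con.close()
    return elap

if __name__ == "__main__":
    for mode in ("off", "normal", "full"):
        elap = run(mode)
        print("For pragma synchronous=", mode, "Time:", elap, "TPS:", N / elap)
```

With the drive's write cache disabled, as in the hdparm run above, the "normal" and "full" rates should collapse toward the platter rotation rate.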

Jim

On 5/31/09, Simon Slavin  wrote:
>
> On 31 May 2009, at 4:59pm, Simon Slavin wrote:
>
>> Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard
>> disk Apple puts in them.
>> Using SQLite version 3.4.0 (installed as part of the OS installation).
>>
>> SimonsMBP:Documents simon$ python --version
>> Python 2.5.1
>>
>> SimonsMBP:Documents simon$ python dbsynch.py
>> For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
>> For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
>> For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954
>
> I am, of course, an idiot.  That was running from the command line.
> If I run it as a .pyc I get
>
>  >>> import dbsynch
> For pragma synchronous= off Time: 2.69734382629 TPS: 1112.2052631
> For pragma synchronous= normal Time: 4.79093813896 TPS: 626.182161611
> For pragma synchronous= full Time: 6.31202697754 TPS: 475.283139739
>
> SimonsMBP:Documents simon$ python dbsynch.pyc
> For pragma synchronous= off Time: 2.61415219307 TPS: 1147.59959575
> For pragma synchronous= normal Time: 4.84184503555 TPS: 619.598516262
> For pragma synchronous= full Time: 6.0191090107 TPS: 498.412637928
>
> I don't see why the 'full' figures vary so much.
>
> Simon.


-- 
Software first.  Software lasts!


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 4:59pm, Simon Slavin wrote:

> Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard
> disk Apple puts in them.
> Using SQLite version 3.4.0 (installed as part of the OS installation).
>
> SimonsMBP:Documents simon$ python --version
> Python 2.5.1
>
> SimonsMBP:Documents simon$ python dbsynch.py
> For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
> For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
> For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954

I am, of course, an idiot.  That was running from the command line.   
If I run it as a .pyc I get

 >>> import dbsynch
For pragma synchronous= off Time: 2.69734382629 TPS: 1112.2052631
For pragma synchronous= normal Time: 4.79093813896 TPS: 626.182161611
For pragma synchronous= full Time: 6.31202697754 TPS: 475.283139739

SimonsMBP:Documents simon$ python dbsynch.pyc
For pragma synchronous= off Time: 2.61415219307 TPS: 1147.59959575
For pragma synchronous= normal Time: 4.84184503555 TPS: 619.598516262
For pragma synchronous= full Time: 6.0191090107 TPS: 498.412637928

I don't see why the 'full' figures vary so much.

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Simon Slavin

On 31 May 2009, at 2:27pm, Jim Wilcoxson wrote:

> g5:~ mac$ Python-2.6.1/python.exe dbsync.py
> /Users/mac/Python-2.6.1/Lib/sqlite3/dbapi2.py:27: RuntimeWarning:
> Python C API version mismatch for module _sqlite3: This Python has API
> version 1013, module _sqlite3 has version 1012.
>  from _sqlite3 import *
> For pragma synchronous= off Time: 1.97417807579 TPS: 1519.61975305
> For pragma synchronous= normal Time: 4.06918787956 TPS: 737.247846202
> For pragma synchronous= full Time: 4.78447008133 TPS: 627.028688445


Standard MacBookPro3,1 Core 2 Duo 2.4 GHz, OS 10.5.7, whatever hard  
disk Apple puts in them.
Using SQLite version 3.4.0 (installed as part of the OS installation).

SimonsMBP:Documents simon$ python --version
Python 2.5.1

SimonsMBP:Documents simon$ python dbsynch.py
For pragma synchronous= off Time: 2.70638608932 TPS: 1108.48929199
For pragma synchronous= normal Time: 4.9895169735 TPS: 601.260606174
For pragma synchronous= full Time: 6.40981006622 TPS: 468.032588954

So are Python 2.6.1 and SQLite 3.6.14 so much faster that they more  
than make up for a faster Core 2 Duo machine?

Simon.


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Jim Wilcoxson
Here's an update for the commit rate tester for Mac OSX.  With the
original version, using fsync(), OSX doesn't do real syncs because the
fsync call isn't properly implemented (my opinion).  Apple took a
shortcut with fsync and only flushes OS buffers to the drive, but the
drive is then free to reorder the requests (that's okay) and execute
them at a later time (that's not okay).  You have to use
fcntl(F_FULLFSYNC) to get data actually written to the platters.  At
least Apple does document this behavior.  Here's the updated program:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define MAX 3000

int main(void) {
  int fd;
  int n;
  int loops;
  time_t elap;
  time_t start;

  if ((fd=open("big", O_RDWR|O_CREAT, 0777)) == -1) {
    perror("Error opening file");
    exit(1);
  }

  start = time(NULL);
  for (loops=0; loops < MAX; loops++) {
    /* one write plus one full sync = one "transaction"
       (loop body reconstructed from the surrounding description) */
    n = write(fd, "x", 1);
    if (n != 1) {
      perror("Error writing file");
      exit(1);
    }
    if (fcntl(fd, F_FULLFSYNC) == -1) {  /* OSX-only: flush the drive cache too */
      perror("Error syncing file");
      exit(1);
    }
  }
  elap = time(NULL) - start;
  printf("Time: %ld  TPS: %f\n", (long)elap, MAX/(double)elap);
  return 0;
}

On 5/29/09,  wrote:
> I agree that adding this to the library, and making it accessible via
> a pragma command would be very useful.  For example, pragma commitrate
> 1000 would test the commit rate of 1000 commits and return the results
> in transactions per second as a row.
>
> If I install my app on a client's machine, I could run this test
> periodically to ensure that the system environment is going to support
> "no corruption" operation, and/or send some kind of warning to my
> customer that there is a risk of corruption because their system
> environment has problems.
>
> If it were only an external program bundled with sqlite, I couldn't
> really make use of it, because I'd have to distribute the program and
> instructions how to use it, and rely on customers to actually do it.
>
> This is a pretty small function.  Just for my own use, I'd consider
> foreign key support to be way more bloated than this.
>
> Jim
>
> On 5/29/09, Marcus Grimm  wrote:
>>> On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the
>>> wall:

 > just for anybody who is interested:

 >
 > I translated Jim's function into window code and added
 > a page of 1024 that will be written, instead of a single byte.
 > On my Win-XP system I got 55 TPS, much faster than sqlite
 > seems to write a page but that might be related to the
 > additional overhead sqlite needs to do.
>>
>> just to add: I traced a little what sqlite does when an

Re: [sqlite] Slow Transaction Speed?

2009-05-29 Thread Jim Wilcoxson
I agree that adding this to the library, and making it accessible via
a pragma command would be very useful.  For example, pragma commitrate
1000 would test the commit rate of 1000 commits and return the results
in transactions per second as a row.

If I install my app on a client's machine, I could run this test
periodically to ensure that the system environment is going to support
"no corruption" operation, and/or send some kind of warning to my
customer that there is a risk of corruption because their system
environment has problems.

If it were only an external program bundled with sqlite, I couldn't
really make use of it, because I'd have to distribute the program and
instructions how to use it, and rely on customers to actually do it.

This is a pretty small function.  Just for my own use, I'd consider
foreign key support to be way more bloated than this.

Jim

On 5/29/09, Marcus Grimm  wrote:
>> On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the
>> wall:
>>>
>>> > just for anybody who is interested:
>>>
>>> >
>>> > I translated Jim's function into window code and added
>>> > a page of 1024 that will be written, instead of a single byte.
>>> > On my Win-XP system I got 55 TPS, much faster than sqlite
>>> > seems to write a page but that might be related to the
>>> > additional overhead sqlite needs to do.
>
> just to add: I traced a little what sqlite does when an
> simple UPDATE is done within a transaction: It does
> two syncs on the journal file and one final sync on the
> db itselve, so achieving something like 15 TPS is reasonable.
>
>
>>> >
>>> > This brings me to a nice to have feature request:
>>> > How about adding similar test function in the sqlite API ?
>>> > This might use the vfs to write pages and gives some feedback
>>> > on the performance of the system where sqlite runs on.
>>> > It might also detect problems with the commit function...
>>> > Just an idea...
>>> >
>>>
>>> Interesting idea.
>>
>>   It would make a lot more sense to make this an external utility
>>   or an extension of the sqlite3 shell.  Adding it to the core library
>>   is a definite case of code bloat.
>
> Adding it into the API would allow my application to
> easily make the test for example the first time it runs
> on a system. But maybe a problem for the users that
> apply a sqlite wrapper.
> However, having it in sqlite3 shell would be very useful as
> well.
>
> Marcus
>
>>
>>   Actually, a whole suite of performance related tests might be
>>   interesting.
>>
>>-j
>>
>> --
>> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>>
>> "Our opponent is an alien starship packed with atomic bombs.  We have
>>  a protractor."   "I'll go home and see if I can scrounge up a ruler
>>  and a piece of string."  --from Anathem by Neal Stephenson
>
>
>


-- 
Software first.  Software lasts!


Re: [sqlite] Slow Transaction Speed?

2009-05-29 Thread Marcus Grimm
> On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the
> wall:
>>
>> > just for anybody who is interested:
>>
>> >
>> > I translated Jim's function into window code and added
>> > a page of 1024 that will be written, instead of a single byte.
>> > On my Win-XP system I got 55 TPS, much faster than sqlite
>> > seems to write a page but that might be related to the
>> > additional overhead sqlite needs to do.

just to add: I traced a little what sqlite does when a
simple UPDATE is done within a transaction: it does
two syncs on the journal file and one final sync on the
db itself, so achieving something like 15 TPS is reasonable.


>> >
>> > This brings me to a nice to have feature request:
>> > How about adding similar test function in the sqlite API ?
>> > This might use the vfs to write pages and gives some feedback
>> > on the performance of the system where sqlite runs on.
>> > It might also detect problems with the commit function...
>> > Just an idea...
>> >
>>
>> Interesting idea.
>
>   It would make a lot more sense to make this an external utility
>   or an extension of the sqlite3 shell.  Adding it to the core library
>   is a definite case of code bloat.

Adding it to the API would allow my application to
easily run the test, for example the first time it runs
on a system. But it may be a problem for users that
use a sqlite wrapper.
However, having it in the sqlite3 shell would be very useful as
well.

Marcus

>
>   Actually, a whole suite of performance related tests might be
>   interesting.
>
>-j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Our opponent is an alien starship packed with atomic bombs.  We have
>  a protractor."   "I'll go home and see if I can scrounge up a ruler
>  and a piece of string."  --from Anathem by Neal Stephenson
>




Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Jay A. Kreibich
On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the wall:
> 
> > just for anybody who is interested:
> 
> > 
> > I translated Jim's function into window code and added
> > a page of 1024 that will be written, instead of a single byte.
> > On my Win-XP system I got 55 TPS, much faster than sqlite
> > seems to write a page but that might be related to the
> > additional overhead sqlite needs to do.
> > 
> > This brings me to a nice to have feature request:
> > How about adding similar test function in the sqlite API ?
> > This might use the vfs to write pages and gives some feedback
> > on the performance of the system where sqlite runs on.
> > It might also detect problems with the commit function...
> > Just an idea...
> > 
> 
> Interesting idea.

  It would make a lot more sense to make this an external utility
  or an extension of the sqlite3 shell.  Adding it to the core library
  is a definite case of code bloat.

  Actually, a whole suite of performance related tests might be
  interesting.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Allen Fowler

> just for anybody who is interested:

> 
> I translated Jim's function into window code and added
> a page of 1024 that will be written, instead of a single byte.
> On my Win-XP system I got 55 TPS, much faster than sqlite
> seems to write a page but that might be related to the
> additional overhead sqlite needs to do.
> 
> This brings me to a nice to have feature request:
> How about adding similar test function in the sqlite API ?
> This might use the vfs to write pages and gives some feedback
> on the performance of the system where sqlite runs on.
> It might also detect problems with the commit function...
> Just an idea...
> 

Interesting idea.




> Anyway, here is my win32 version Jim's test function:
> 
> ---
> #include <stdio.h>
> #include <stdlib.h>
> #include <fcntl.h>
> #include <io.h>
> #include <time.h>
> 
> #define TESTWRITES 1000
> 
> int TestDisk ()
> {
> int fd;
> int n;
> int loops;
> unsigned char Page[1024];
> time_t elap;
> time_t start;
> 
> if ((fd=_open("C:\\testfile.db", O_RDWR+O_CREAT, 0777)) == -1)
> {
>   fprintf(stderr, "Error opening file");
>   exit(1);
> }
> 
> start = time(NULL);
> for(loops=0; loops<TESTWRITES; loops++)
> {
>   if (_lseek(fd, 0, SEEK_SET) == -1) {
> fprintf(stderr,"Error seeking file");
> exit(1);
>   }
> 
>   n = _write(fd, Page, 1024);
>   if (n != 1024) {
> fprintf(stderr,"Error writing file");
> exit(1);
>   }
>   _commit(fd);
> }
> elap = time(NULL)-start;
> fprintf(stderr,"Time: %d seconds; TPS=%f\n", (int)elap, 
> TESTWRITES*1.0/((int)elap));
> 
> return(0);
> }
> -
> 
> Jim Wilcoxson wrote:
> > I'm running on Linux with ext3 and just wrote a Python test program to
> > insert rows into a table with a single column, no indexing, and doing
> > a commit after each insert.  When I first ran it, I got around 440
> > inserts per second, which is clearly impossible.  This is a 7200rpm
> > drive, so even if I could write a row on every revolution, the maximum
> > insert rate would be 120 per second.  I tried adding "pragma
> > sychronous=normal", and it sped up to 520 TPS.  With synchronous=full
> > it slowed back to 420-460, so this must be the Python default.  Adding
> > synchronous=off increased the rate to over 6000 TPS -- basically
> > writing to memory instead of disk.
> 



  



Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Marcus Grimm
just for anybody who is interested:

I translated Jim's function into Windows code and added
a page of 1024 bytes that will be written, instead of a single byte.
On my Win-XP system I got 55 TPS, much faster than sqlite
seems to write a page, but that might be related to the
additional overhead sqlite needs to do.

This brings me to a nice-to-have feature request:
How about adding a similar test function to the sqlite API?
It might use the vfs to write pages and give some feedback
on the performance of the system sqlite runs on.
It might also detect problems with the commit function...
Just an idea...

Anyway, here is my win32 version of Jim's test function:

---
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <io.h>
#include <time.h>

#define TESTWRITES  1000

int TestDisk ()
{
int fd;
int n;
int loops;
unsigned char   Page[1024];
time_t elap;
time_t start;

if ((fd=_open("C:\\testfile.db", O_RDWR+O_CREAT, 0777)) == -1)
{
  fprintf(stderr, "Error opening file");
  exit(1);
}

start = time(NULL);
for(loops=0; loops<TESTWRITES; loops++)
{
  if (_lseek(fd, 0, SEEK_SET) == -1) {
    fprintf(stderr,"Error seeking file");
    exit(1);
  }

  n = _write(fd, Page, 1024);
  if (n != 1024) {
    fprintf(stderr,"Error writing file");
    exit(1);
  }
  _commit(fd);
}
elap = time(NULL)-start;
fprintf(stderr,"Time: %d seconds; TPS=%f\n", (int)elap,
        TESTWRITES*1.0/((int)elap));

return(0);
}
-

Jim Wilcoxson wrote:
> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each insert.  When I first ran it, I got around 440
> inserts per second, which is clearly impossible.  This is a 7200rpm
> drive, so even if I could write a row on every revolution, the maximum
> insert rate would be 120 per second.  I tried adding "pragma
> sychronous=normal", and it sped up to 520 TPS.  With synchronous=full
> it slowed back to 420-460, so this must be the Python default.  Adding
> synchronous=off increased the rate to over 6000 TPS -- basically
> writing to memory instead of disk.



Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
I should have mentioned that when running my C test program on Linux,
with the hard drive cache enabled, it ran instantly.   But it should
have taken quite a while to do 3000 fsyncs.  So this problem with
defeating the hard drive cache is not an SQLite issue; it's a Linux
issue.

I also ran the same C test program on a virtual machine, without any
hdparm tweaking, and the syncs worked correctly: I got about 115 TPS.
This system is also using ext3, but a different version of Linux:

$ uname -a
Linux prirun 2.6.24-19-xen #1 SMP Sat Jul 12 00:15:59 UTC 2008 x86_64
Dual-Core AMD Opteron(tm) Processor 2212 AuthenticAMD GNU/Linux

So I don't know why syncs work correctly in one place but not the
other.  Could be my hardware config, the virtualization layer fixing
things, different version of Linux, ...

If you see SQLite run much slower on ReiserFS than ext3, it probably
means that ReiserFS is doing the syncs correctly, but ext3 is getting
cached.

Jim

On 5/27/09, Jim Wilcoxson  wrote:
> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each insert.  When I first ran it, I got around 440
> inserts per second, which is clearly impossible.  This is a 7200rpm
> drive, so even if I could write a row on every revolution, the maximum
> insert rate would be 120 per second.  I tried adding "pragma
> sychronous=normal", and it sped up to 520 TPS.  With synchronous=full
> it slowed back to 420-460, so this must be the Python default.  Adding
> synchronous=off increased the rate to over 6000 TPS -- basically
> writing to memory instead of disk.
>
> After using hdparm to turn off the drive cache (hdparm -W 0 /dev/sda),
> the insert rate was 15 rows per second.  So it seems that for my
> particular hardware configuration, the Linux fsync call isn't doing
> what it should be doing.  I have built the SQLite stuff from source,
> so perhaps my build has a problem.  If you look on the Linux kernel
> mailing list archives, there are several discussions about drive write
> caching not playing nice with ext3's efforts to ensure data is
> actually on the disk.
>
> $ uname -a
> Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
> i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux
>
> Here is a test program you can run, to show your system's maximum
> physical I/O rate:
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <time.h>
>
> #define MAX 3000
>
> main () {
>   int fd;
>   int n;
>   int loops;
>   time_t elap;
>   time_t start;
>
>   if ((fd=open("big", O_RDWR+O_CREAT, 0777)) == -1) {
> perror("Error opening file");
> exit(1);
>   }
>
>   start = time(NULL);
>   for(loops=0; loops<MAX; loops++) {
>     if (lseek(fd, 0, SEEK_SET) == -1) {
>   perror("Error seeking file");
>   exit(1);
> }
>     n = write(fd, "x", 1);
> if (n != 1) {
>   perror("Error writing file");
>   exit(1);
> }
> fsync(fd);
>   }
>   elap = time(NULL)-start;
>   printf("Time: %d seconds; TPS=%f\n", (int)elap, MAX*1.0/elap);
> }
>
> On my system, I get these results:
>
> [...@amd ~]$ cc -o sync sync.c
> [...@amd ~]$ ./sync
> Time: 25 seconds; TPS=120.00
> [...@amd ~]$
>
> Running vmstat concurrently, we see this:
>
> [...@amd]$ vmstat 5
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  0  0    0 791108 127632  76356    0    0     0     9   10   33  1  1 98  1
>  0  1    0 791100 127648  76356    0    0     0   439  102  212  0  1 13 86
>  0  1    0 791100 127664  76356    0    0     0   478  119  242  0  1  0 99
>  0  1    0 790976 127672  76356    0    0     0   478  119  242  0  1  0 99
>  0  1    0 790976 127688  76356    0    0     0   481  120  244  0  1  0 99   <-- steady state
>  0  1    0 790976 127696  76356    0    0     0   482  120  244  0  1  0 99
>  0  0    0 790984 127700  76356    0    0     0   167   40   75  0  0 71 29
>  0  0    0 790984 127712  76356    0    0     0     3   16    0  0  0 100  0
>
> During the steady state, there are 480KBytes written per second.
> Linux does I/O in 4K chunks, so dividing 480K by 4K gives you I/O's
> per second: 120.
>
> Jim
>
> On 5/27/09, Marcus Grimm  wrote:
>> Thanks Nick,
>>
>> good point. ahh yes, I've read about this somewhere...
>>
>> My extension is currently ".db", a quick check indicates
>> that using ".abc" gives a slight speed improvement, maybe 10%.
>> But that is allready very close to the variation I get between
>> different test runs, so I'm not really sure if I have this
>> "microsoft effect" here.
>>
>> Anyway, thanks for the reminder.
>>
>> Well, I think now there is nothing wrong here,
>> it is just as it is... Currently I achieve about 10 transactions/second,
>> maybe
>> not that bad... still slower than the "few dozen" that 

Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
I'm running on Linux with ext3 and just wrote a Python test program to
insert rows into a table with a single column, no indexing, and doing
a commit after each insert.  When I first ran it, I got around 440
inserts per second, which is clearly impossible.  This is a 7200rpm
drive, so even if I could write a row on every revolution, the maximum
insert rate would be 120 per second.  I tried adding "pragma
synchronous=normal", and it sped up to 520 TPS.  With synchronous=full
it slowed back to 420-460, so this must be the Python default.  Adding
synchronous=off increased the rate to over 6000 TPS -- basically
writing to memory instead of disk.

After using hdparm to turn off the drive cache (hdparm -W 0 /dev/sda),
the insert rate was 15 rows per second.  So it seems that for my
particular hardware configuration, the Linux fsync call isn't doing
what it should be doing.  I have built the SQLite stuff from source,
so perhaps my build has a problem.  If you look on the Linux kernel
mailing list archives, there are several discussions about drive write
caching not playing nice with ext3's efforts to ensure data is
actually on the disk.

$ uname -a
Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux

Here is a test program you can run, to show your system's maximum
physical I/O rate:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define MAX 3000

main () {
  int fd;
  int n;
  int loops;
  time_t elap;
  time_t start;

  if ((fd=open("big", O_RDWR+O_CREAT, 0777)) == -1) {
perror("Error opening file");
exit(1);
  }

  start = time(NULL);
  for(loops=0; loops<MAX; loops++) {
    if (lseek(fd, 0, SEEK_SET) == -1) {
      perror("Error seeking file");
      exit(1);
    }
    n = write(fd, "x", 1);
    if (n != 1) {
      perror("Error writing file");
      exit(1);
    }
    fsync(fd);
  }
  elap = time(NULL)-start;
  printf("Time: %d seconds; TPS=%f\n", (int)elap, MAX*1.0/elap);
}

On 5/27/09, Marcus Grimm wrote:
> Thanks Nick,
>
> good point. ahh yes, I've read about this somewhere...
>
> My extension is currently ".db", a quick check indicates
> that using ".abc" gives a slight speed improvement, maybe 10%.
> But that is allready very close to the variation I get between
> different test runs, so I'm not really sure if I have this
> "microsoft effect" here.
>
> Anyway, thanks for the reminder.
>
> Well, I think now there is nothing wrong here,
> it is just as it is... Currently I achieve about 10 transactions/second,
> maybe
> not that bad... still slower than the "few dozen" that are mentioned on the
> www pages, but I agree that there too many issues that may affect the disk
> speed.
>
>
> Marcus
>
>
> Brandon, Nicholas (UK) wrote:
>>> my developemnt system is a Win XP, with of course NTFS, I
>>> don't know which drive it has, I guess a standard 7200 rpm.
>>>
>>
>> What file extension (i.e. the letters after the dot in the filename) do
>> you give the database?
>>
>> I faintly recall there is a windows peculiarity with system restore or
>> something similar that archives certain file extensions in the
>> background. That may contribute to your slow down.
>>
>> Nick
>>
>> 
>> This email and any attachments are confidential to the intended
>> recipient and may also be privileged. If you are not the intended
>> recipient please delete it from your system and notify the sender.
>> You should not copy it or use it for any purpose nor disclose or
>> distribute its contents to any other person.
>> 
>>

Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
Thanks Nick,

good point. ahh yes, I've read about this somewhere...

My extension is currently ".db", a quick check indicates
that using ".abc" gives a slight speed improvement, maybe 10%.
But that is already very close to the variation I get between
different test runs, so I'm not really sure if I have this
"microsoft effect" here.

Anyway, thanks for the reminder.

Well, I think now there is nothing wrong here,
it is just as it is... Currently I achieve about 10 transactions/second,
maybe not that bad... still slower than the "few dozen" that are mentioned
on the www pages, but I agree that there are too many issues that may
affect the disk speed.


Marcus


Brandon, Nicholas (UK) wrote:
>> my developemnt system is a Win XP, with of course NTFS, I 
>> don't know which drive it has, I guess a standard 7200 rpm.
>>
> 
> What file extension (i.e. the letters after the dot in the filename) do
> you give the database?
> 
> I faintly recall there is a windows peculiarity with system restore or
> something similar that archives certain file extensions in the
> background. That may contribute to your slow down.
> 
> Nick
> 
> 
> 
> 
> 


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread python
Dr. Hipp,

> Your OS and filesystem configuration have a big impact too. I've notice, for 
> example, that transactions are really slow on RieserFS on Linux compared to 
> Ext3.

In your experience, which Linux file system(s) provide the
highest-performance platform for SQLite?

Which Linux file systems do you recommend avoiding for SQLite use?

Thank you,
Malcolm


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Brandon, Nicholas (UK)

> my development system is a Win XP, with of course NTFS, I 
> don't know which drive it has, I guess a standard 7200 rpm.
> 

What file extension (i.e. the letters after the dot in the filename) do
you give the database?

I faintly recall there is a windows peculiarity with system restore or
something similar that archives certain file extensions in the
background. That may contribute to your slow down.

Nick






Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
thank you for the hint,
my current approach to copying the database file is even more
restrictive -- it is encapsulated by an EXCLUSIVE transaction. ;)

In my case it is unlikely that it will block anybody from using
the server since it is a typical "working-hours" application and
the backup will usually take place around midnight.

kind regards

Marcus


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 12:51 PM, Marcus Grimm  
> wrote:
>> Yes, I understood that this is unsafe and I'll not use it right now.
>> But my feeling is that it will be an option for the future. My databases
>> are not so big (say in megabyte range) and currently my server application
>> will do a backup every day (more or less a simple file copy). So even if
>> I'll see a corrupt database the customer will loose only one day work,
>> as maximum. This should be tolerated since I estimate the liklyhood
>> of such an event extremely low (powerfailure or OS crash at exactly the
>> "wrong" time), powerfailure by the way can be workaround by using a
>> battery pack which is common on server hardware, I guess.
> 
> Ok,  but note that backing up sqlite database files could get a
> corrupted database on restore if you don't backup the sqlite database
> file AND the journal file as an atomic operation.
> 
> A work around for this situation is to start an immediate transaction
> [1] (which will assure no other writes are pending, although allowing
> reads to proceed), backup the database (copy the file) and only then
> end the transaction (a simple "BEGIN IMMEDIATE" --> "END" operation).
> 
> 
> Regards,
> ~Nuno Lucas
> 
> [1] http://www.sqlite.org/lang_transaction.html
> 


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Nuno Lucas
On Wed, May 27, 2009 at 12:51 PM, Marcus Grimm  wrote:
> Yes, I understood that this is unsafe and I'll not use it right now.
> But my feeling is that it will be an option for the future. My databases
> are not so big (say in megabyte range) and currently my server application
> will do a backup every day (more or less a simple file copy). So even if
> I'll see a corrupt database the customer will loose only one day work,
> as maximum. This should be tolerated since I estimate the liklyhood
> of such an event extremely low (powerfailure or OS crash at exactly the
> "wrong" time), powerfailure by the way can be workaround by using a
> battery pack which is common on server hardware, I guess.

Ok,  but note that backing up sqlite database files could get a
corrupted database on restore if you don't backup the sqlite database
file AND the journal file as an atomic operation.

A work around for this situation is to start an immediate transaction
[1] (which will assure no other writes are pending, although allowing
reads to proceed), backup the database (copy the file) and only then
end the transaction (a simple "BEGIN IMMEDIATE" --> "END" operation).


Regards,
~Nuno Lucas

[1] http://www.sqlite.org/lang_transaction.html


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
my development system is a Win XP, with of course NTFS, I don't
know which drive it has, I guess a standard 7200 rpm.

I tried also on a Win 2000 and Vista; as expected, Vista appears
to be the slowest even though this machine has the best hardware,
theoretically.
All measurements vary +- 20%.

Just for my curiosity: Is linux considered to be faster than Windows here ?

kind regards

Marcus

D. Richard Hipp wrote:
> On May 27, 2009, at 7:51 AM, Marcus Grimm wrote:
> 
>>
>> Nuno Lucas wrote:
>>> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm wrote:
>>>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm wrote:
>>>>> http://www.sqlite.org/faq.html#q19
>>>> the faq as well as the speed comparison speaks about a few dozen
>>>> of transaction per second... that's why I'm wondering why I'm
>>>> almost ten times slower on windows...
>>> You don't say much about your hardware or how you are timing the
>>> transaction speed, so 10 times slower for those numbers are within  
>>> the
>>> error margin.
>> my hardware is just a standard desktop computer, not the fastest...
>> Okay, if my measurements are within the typical speed range, then this
>> is the explanation why I see so much slower speed.
> 
> 
> Your OS and filesystem configuration have a big impact too.  I've  
> notice, for example, that transactions are really slow on RieserFS on  
> linux compared to Ext3.  What OS are you using?  And what filesystem?
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> 


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread D. Richard Hipp

On May 27, 2009, at 7:51 AM, Marcus Grimm wrote:

>
>
> Nuno Lucas wrote:
>> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm wrote:
>>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm wrote:
>>>> http://www.sqlite.org/faq.html#q19
>>> the faq as well as the speed comparison speaks about a few dozen
>>> of transaction per second... that's why I'm wondering why I'm
>>> almost ten times slower on windows...
>>
>> You don't say much about your hardware or how you are timing the
>> transaction speed, so 10 times slower for those numbers are within  
>> the
>> error margin.
>
> my hardware is just a standard desktop computer, not the fastest...
> Okay, if my measurements are within the typical speed range, then this
> is the explanation why I see so much slower speed.


Your OS and filesystem configuration have a big impact too.  I've  
notice, for example, that transactions are really slow on RieserFS on  
linux compared to Ext3.  What OS are you using?  And what filesystem?


D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm wrote:
>>> http://www.sqlite.org/faq.html#q19
>> the faq as well as the speed comparison speaks about a few dozen
>> of transaction per second... that's why I'm wondering why I'm
>> almost ten times slower on windows...
> 
> You don't say much about your hardware or how you are timing the
> transaction speed, so 10 times slower for those numbers are within the
> error margin.

my hardware is just a standard desktop computer, not the fastest...
Okay, if my measurements are within the typical speed range, then this
is the explanation why I see so much slower speed.

> 
> Do the math yourself. It's more about disk rotation speed than any
> other factor, as you have already seen when wrapping those INSERTs
> within a transaction.

Yes, I know... there was also a funny proof some weeks ago here, which
referred to the speed of light and the distance from the disk to the memory,
as far as I can remember.
I'm very well aware of the speed limitations; it was just that I was puzzled
by my results compared to the indication given on the sqlite pages.

> 
> For comparison, use PRAGMA SYNCHRONOUS=OFF, which will return as soon
> as the OS let's you (doesn't depend on the disk access).

correct, I made some experiments today and switching the synchronous to off
will give a dramatic speed improvement. Using the "normal" sync mode
I achieve an appx. 20% improvement as well.

Yes, I understood that this is unsafe and I'll not use it right now.
But my feeling is that it will be an option for the future. My databases
are not so big (say in the megabyte range) and currently my server application
will do a backup every day (more or less a simple file copy). So even if
I see a corrupt database, the customer will lose only one day's work
at maximum. This should be tolerable since I estimate the likelihood
of such an event as extremely low (power failure or OS crash at exactly the
"wrong" time); power failure, by the way, can be worked around by using a
battery pack, which is common on server hardware, I guess.

I agree that the new async vfs might help as well; I haven't tried it yet.

Thanks for sharing your thoughts

kind regards

Marcus


> 
> But note that this option is not safe and I would discourage people
> from using it until they understand very well what they are doing
> (it's not easy, and many time impossible, to recover a corrupt SQLite
> database). If you really want speed, you can try the new async VFS,
> which will do the atomic writes in a background thread.


> 
> 
> Regards,
> ~Nuno Lucas
> 
>> thanks for the reply
>>
>> Marcus
> 



Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Nuno Lucas
On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm 
>> http://www.sqlite.org/faq.html#q19
>
> the FAQ as well as the speed comparison speak of a few dozen
> transactions per second... that's why I'm wondering why I'm
> almost ten times slower on Windows...

You don't say much about your hardware or how you are timing the
transaction speed, so being 10 times slower than those numbers is
within the error margin.

Do the math yourself. It's more about disk rotation speed than any
other factor, as you have already seen when wrapping those INSERTs
within a transaction.
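The back-of-the-envelope math goes roughly like this (a sketch; the 7200 RPM figure and two syncs per commit are assumed round numbers, not measurements of Marcus's hardware):

```python
rpm = 7200                        # a common desktop drive speed
rotations_per_sec = rpm / 60      # 120 platter rotations per second
syncs_per_commit = 2              # journal sync + database sync per transaction

# Each fsync must wait for the platter to come around to the right
# sector, so committed transactions per second is bounded roughly by
# rotation rate divided by syncs per commit.
max_tps = rotations_per_sec / syncs_per_commit
print(max_tps)  # → 60.0
```

A slower drive, extra filesystem metadata syncs, or an OS that flushes more per commit quickly pushes the real number well below that ceiling, which is why single digits are not implausible.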

For comparison, use PRAGMA SYNCHRONOUS=OFF, which will return as soon
as the OS lets you (doesn't depend on the disk access).
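That comparison is easy to script; a rough sketch using Python's stdlib `sqlite3` module (the helper name and the transaction count are my own choices, and absolute timings will vary wildly with the disk):

```python
import os
import sqlite3
import tempfile
import time

def commits_per_second(sync_mode: str, n: int = 20) -> float:
    """Measure single-INSERT transactions per second under a synchronous mode."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    # isolation_level=None puts the wrapper in autocommit mode,
    # so we control BEGIN/COMMIT explicitly.
    con = sqlite3.connect(path, isolation_level=None)
    con.execute(f"PRAGMA synchronous={sync_mode}")
    con.execute("CREATE TABLE t(x INTEGER)")
    start = time.perf_counter()
    for i in range(n):
        con.execute("BEGIN")
        con.execute("INSERT INTO t VALUES (?)", (i,))
        con.execute("COMMIT")  # forces a disk sync unless synchronous=OFF
    elapsed = time.perf_counter() - start
    con.close()
    return n / elapsed

# OFF skips the sync on commit, so it should come out far faster than FULL:
print(commits_per_second("FULL"), commits_per_second("OFF"))
```

With synchronous=OFF the commit returns as soon as the data is handed to the OS, which is exactly why it is fast and exactly why a crash at the wrong moment can corrupt the file.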

But note that this option is not safe and I would discourage people
from using it until they understand very well what they are doing
(it's not easy, and many times impossible, to recover a corrupt SQLite
database). If you really want speed, you can try the new async VFS,
which will do the atomic writes in a background thread.


Regards,
~Nuno Lucas

>
> thanks for the reply
>
> Marcus


Re: [sqlite] Slow Transaction Speed?

2009-05-26 Thread Marcus Grimm
> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm 
> wrote:
> [..]
>> So the question is:
>> Is it somehow normal to have only 7 transactions per second?
>
> Yes

ehm... why?

>
> [..]
>> Any comment on this ?
>
> http://www.sqlite.org/faq.html#q19

the FAQ as well as the speed comparison speak of a few dozen
transactions per second... that's why I'm wondering why I'm
almost ten times slower on Windows...

thanks for the reply

Marcus

>
>
> Regards,
> ~Nuno Lucas
>




Re: [sqlite] Slow Transaction Speed?

2009-05-26 Thread Nuno Lucas
On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm  wrote:
[..]
> So the question is:
> Is it somehow normal to have only 7 transactions per second?

Yes

[..]
> Any comment on this ?

http://www.sqlite.org/faq.html#q19
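The practical upshot of FAQ #19 is that the per-transaction sync cost is amortized by batching many INSERTs into one transaction. A minimal sketch with Python's stdlib `sqlite3` module, using an in-memory database as a stand-in for a disk file:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in-memory stand-in for a disk file
con.execute("CREATE TABLE t(x INTEGER)")

# Slow pattern: one transaction (and hence one disk sync) per INSERT.
# for i in range(1000):
#     with con:
#         con.execute("INSERT INTO t VALUES (?)", (i,))

# Fast pattern: one transaction, one sync, around all the INSERTs.
with con:  # the context manager wraps the loop in BEGIN ... COMMIT
    for i in range(1000):
        con.execute("INSERT INTO t VALUES (?)", (i,))

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 1000
```

On a rotating disk the batched form is orders of magnitude faster, because only the final COMMIT waits on the platter.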


Regards,
~Nuno Lucas