Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-06 Thread Keith Medcalf

>And as far as I know, even the most expensive hardware RAID controllers
>and disks do not yet support multi-disk transactions, so your reference
>to not-yet existing hardware is moot.

They all do, unless the I/O was designed by a moron.  Of course, morons are the 
most plentiful element in the universe, so your likelihood of getting something 
designed by a moron is high -- and that probability increases proportionally 
with your desire to spend less money.

That is to say, you get what you pay for.  Non-morons usually command much 
higher wages and salaries than morons and consequently, non-moron designed 
products tend to be more expensive whereas cheap products tend to be designed 
and built by people who do not consider the consequence of what they are doing 
(or not doing) or how to ensure a good outcome in the face of failure (in other 
words, a safe design).  This is either because they are not paid to do so, or 
because they are incapable of doing so.

In either case, you get what you pay for.






Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-05 Thread Markus Schaber
Hi,

From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
> At 21:35 03/03/2014, you wrote:
> ´¯¯¯
> >RAID3-4-5 was great when disks were expensive, in the '80s and '90s. Not
> >anymore. A minimal RAID5 needs 3 disks; a minimal RAID10 needs 4. An
> >enterprise SAS 15Krpm 146 GB 6G disk is $350, and a non-enterprise-grade
> >disk is cheaper and bigger. Now RAID1E and RAID10E give more flexibility
> >and variable security, from "paranoid" to "I don't care" grades.
> `---
> 
> The point being discussed was not about performance or cost, but about the
> imaginary fact that RAID5-6 and variations have the inherent, by-design fatal
> flaw that they break down because a parity block can be out of sync with
> corresponding data blocks. This is bullshit, period.

It is not. Period.

> Nothing in RAID5-6 design mandates serialization of writes, far from it.

Yes. But I have yet to see a setup for hard disks which allows reliable
transactional writes spanning several disks (a kind of two-phase commit
for disk writes).

Of course, a dedicated hardware controller which issues the write requests
to the disks absolutely synchronously lowers the risk by shrinking the
time window.

But it cannot totally eliminate it, for at least the following reasons:
- The platters are usually not completely physically in sync, so the 
  first disk may have written the block while the second disk still
  needs to wait for another 1/4 rotation for the block to be written.

- One of the disks may have internally remapped a bad sector, needing a
  seek (and thus much longer time) to write the block.

In reality, there is usually some additional timing variation, e.g. because:
- Both disks may be connected through the same cable, so the requests
  to the disks need to be serialized.

- There may be other outstanding requests in the disk's internal cache
  which the disk firmware might reorder.

I admit that the remaining risk may be low, but it is not zero. Period.

> It's only when cheap, unreliable hardware is put to work under below-par
> software that the issue can be a real-world problem.
>
> So the rant on the design against parity-enabled RAIDs is moot, if not plain
> fallacious, unless "software RAID without dedicated controller" is clearly
> mentioned.

I did mention using battery-backed power as a way to mitigate the risk.

And as far as I know, even the most expensive hardware RAID controllers
and disks do not yet support multi-disk transactions, so your reference
to not-yet existing hardware is moot.

> About SAS disks: they actually have very high reliability and don't lie,
> unlike SATA disks (on both points).
>
> This is not a war about mine being bigger, but it's better to have facts
> stated right.

I fully agree there.

> All high-end reliable machines and storage subsystems only run
> parity-enabled RAID levels and this technology isn't going to disappear
> tomorrow.

I doubt that _all_ those machines exclusively run on parity-enabled RAID 
levels, but I'm strongly interested in a proof of your "fact" here.

I remember reading that PostgreSQL and Oracle recommend using mirroring-based
levels instead of parity-enabled ones for performance reasons, so I'm really
curious to read about how you back up your claim.


Best regards

Markus Schaber




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Keith Medcalf

>Another way to bust your data is to rely on RAID 5 or 6 or similar, at
>least if the software does not take special care.
>
>With those mechanisms, updating a block always results in at least 2 disk
>writes: the data block and the checksum block. There's a small time
>window where only one of those blocks has physically reached the disk. Now,
>when the power fails during said time window, and the third disk fails, its
>content will be restored using the new data block and the old checksum (or
>vice versa), leaving your data garbled.

Generally this is only an issue with fake-RAID (aka software RAID).  Hardware 
RAID will issue the writes to update the stripe in parallel across all spindles 
which need to be updated.  Moreover, although writes to a hardware RAID device 
are signaled complete once the data has been written into the buffer on the 
RAID controller, the hardware will take special precautions to ensure that any 
write which makes it into the hardware buffers is properly written to disk even 
if there is a power failure before the scatter-write-with-verify to the 
physical media has returned completion-without-error for all spindles.  You 
will only lose data if the power is out for longer than the battery on the 
hardware controller can maintain the buffer -- and the better classes of 
hardware RAID contain NVRAM to which "dirty" stripes are flushed on power loss 
so that they can be written to the physical spindles even if the power is not 
restored until long after the buffer RAM battery has lost power.  

In other words, you get what you pay for.






Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps

At 21:35 03/03/2014, you wrote:
´¯¯¯
RAID3-4-5 was great when disks were expensive, in the '80s and '90s. Not 
anymore. A minimal RAID5 needs 3 disks; a minimal RAID10 needs 4. An 
enterprise SAS 15Krpm 146 GB 6G disk is $350, and a non-enterprise-grade 
disk is cheaper and bigger. Now RAID1E and RAID10E give more flexibility 
and variable security, from "paranoid" to "I don't care" grades.

`---

The point being discussed was not about performance or cost, but about the 
imaginary fact that RAID5-6 and variations have the inherent, by-design 
fatal flaw that they break down because a parity block can be out of 
sync with corresponding data blocks. This is bullshit, period.


Nothing in RAID5-6 design mandates serialization of writes, far from it. 
It's only when cheap, unreliable hardware is put to work under below-par 
software that the issue can be a real-world problem.


So the rant on the design against parity-enabled RAIDs is moot, if not 
plain fallacious, unless "software RAID without dedicated controller" is 
clearly mentioned.


About SAS disks: they actually have very high reliability and don't lie, 
unlike SATA disks (on both points).


This is not a war about mine being bigger, but it's better to have 
facts stated right. All high-end reliable machines and storage 
subsystems only run parity-enabled RAID levels and this technology 
isn't going to disappear tomorrow. 




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 03 Mar 2014 17:36:10 +0100
Jean-Christophe Deschamps  wrote:

> 
> >It's how RAID5 works. Check the docs at http://baarf.com/ about
> >it.
> 
> This is utter BS.

No.
 
> Serious RAID controllers perform parallel I/O on as many drives as are 
> making up a given array. Of course I'm talking about SAS drives here, 
> with a battery-backed controller.
> 
> Kid sister RAID5-6 implementations using SATA drives and no dedicated 
> hardware are best avoided and have more drawbacks than are listed in 
> the cited prose.
> 
> I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in
> RAID6 and a couple more in RAID1 and I've yet to witness any problem
> whatsoever.

RAID3-4-5 was great when disks were expensive, in the '80s and '90s. Not 
anymore. A minimal RAID5 needs 3 disks; a minimal RAID10 needs 4. An enterprise 
disk, SAS 15Krpm 146 GB 6G, is $350, and a non-enterprise-grade disk is cheaper 
and bigger. Now RAID1E and RAID10E give more flexibility and variable security, 
from "paranoid" to "I don't care" grades.

When something goes wrong:

RAID 3-4-5-6
When one of your disks breaks, replace it. 
Then rebuild the RAID3-4-5-6. 
You need to read from all disks to recover the lost blocks. 
All disks are busy recovering, and your R/W performance drops. 
Recovery reads the same block on each surviving disk and the parity data, makes 
some computations, and writes the lost block. 
If any of the RAID disks is near its MTBF and fails, you lose everything.

RAID 10
When one of your disks breaks, replace it.
Then rebuild the RAID10.
You need to read from the mirror disks to recover the lost blocks.
Only the mirror disks are busy recovering, and your R/W performance drops only 
when accessing data on those disks.
Recovery reads the same block and directly writes the lost block.
If all disks that mirror the broken one are near their MTBF and fail, you lose 
everything.

The time to recover a RAID 10 is less (a lot less) than recreating a RAID3-4-5-6.
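
As a back-of-envelope sketch of the difference in rebuild read volume (Python;
the disk size matches the 146 GB example above, the throughput is an assumed
round number rather than a measurement, and real rebuilds run far slower
because the array keeps serving I/O):

  disk_gb = 146        # per-disk capacity, as in the example above
  read_mb_s = 150      # assumed sustained sequential read rate

  # RAID5 with n disks: rebuilding one disk reads every surviving disk.
  n = 3
  raid5_read_gb = (n - 1) * disk_gb     # 292 GB, and it grows with n

  # RAID10: rebuilding one disk reads only its mirror.
  raid10_read_gb = disk_gb              # 146 GB, independent of array size

  for name, gb in (("RAID5 (3 disks)", raid5_read_gb),
                   ("RAID10", raid10_read_gb)):
      hours = gb * 1024 / read_mb_s / 3600
      print(f"{name}: read {gb} GB, ~{hours:.2f} h at full speed")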

> It's just like talking BS about a language because of some obscure bug
> in a non-conformant compiler. 

No, it's talking BS about a language that is badly designed for your actual 
needs, no matter which compiler you use, because it is not an implementation 
problem. 

---   ---
Eduardo Morras 


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Roger Binns

On 03/03/14 03:00, Simon Slavin wrote:
> What the heck?  Is this a particular implementation of RAID ...

The technical term is "write hole", and it can occur at many RAID levels:

  http://www.raid-recovery-guide.com/raid5-write-hole.aspx

You can mitigate it by having a setup that avoids such failures, for example
by using battery backup.

Roger


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps



It's how RAID5 works. Check the docs at http://baarf.com/ about it.


This is utter BS.

Serious RAID controllers perform parallel I/O on as many drives as are 
making up a given array. Of course I'm talking about SAS drives here, 
with a battery-backed controller.


Kid sister RAID5-6 implementations using SATA drives and no dedicated 
hardware are best avoided and have more drawbacks than are listed in 
the cited prose.


I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in RAID6 
and a couple more in RAID1 and I've yet to witness any problem whatsoever.


It's just like talking BS about a language because of some obscure bug in 
a non-conformant compiler. 




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 3 Mar 2014 11:00:47 +
Simon Slavin  wrote:

> What the heck?  Is this a particular implementation of RAID or a
> conceptual problem with how RAID is designed to work?  It sounds
> like a bug in one particular model rather than a general problem with
> how RAID works.

It's how RAID5 works. Check the docs at http://baarf.com/ about it. 

> 
> Simon.

---   ---
Eduardo Morras 


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Markus Schaber
Hi,

From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
> On 3 Mar 2014, at 8:18am, Markus Schaber  wrote:
> > Another way to bust your data is to rely on RAID 5 or 6 or similar, at
> > least if the software does not take special care.
> >
> > With those mechanisms, updating a block always results in at least 2
> > disk writes: the data block and the checksum block. There's a small time
> > window where only one of those blocks has physically reached the disk.
> > Now, when the power fails during said time window, and the third disk
> > fails, its content will be restored using the new data block and the
> > old checksum (or vice versa), leaving your data garbled.
> 
> What the heck?  Is this a particular implementation of RAID or a conceptual
> problem with how RAID is designed to work?  It sounds like a bug in one
> particular model rather than a general problem with how RAID works.

It is a conceptual problem of the RAID levels 5 and 6 and similar proprietary
mechanisms which are based on parity blocks.

RAID setups using only mirroring and striping, like the RAID levels 0, 1, 10,
are not affected, and the risk may be lowered by using battery-powered
RAID controllers.

Consider a simple RAID5 with three disks. The blocks a and b are the two
data blocks which are covered by the parity block c. Let's say the database
code writes the block b. The RAID layer creates a corresponding write for
the parity block c. As the hard disks are not physically synchronized,
there is a small time slot where only one of the blocks b and c has been 
written, but not the other one. The power fails during that time slot, and
during the reboot, the hard disk containing block a fails. During the RAID
rebuild, the contents of block a are recreated using the blocks b and c -
but as only one of those blocks was up to date, and the other contains the
old state, this leads to (more or less) complete garbage in block a.

So with RAID5, you risk damaging data which is entirely unrelated to the
data you were actually writing when the machine crashed.
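
As a minimal sketch of that failure sequence (plain Python; XOR parity on
single-byte "blocks", and all values are made up for illustration):

  # Three "disks": a and b hold data, c holds parity (c = a XOR b).
  a_old, b_old = 0x11, 0x22
  c_old = a_old ^ b_old           # parity consistent with the old data

  # The database rewrites block b; the parity must be rewritten too.
  b_new = 0x33
  c_new = a_old ^ b_new

  # Power fails after b reaches its platter but before c does:
  b_on_disk, c_on_disk = b_new, c_old

  # After reboot the disk holding a fails; the rebuild recomputes a
  # from the surviving blocks -- which are now mutually inconsistent.
  a_rebuilt = b_on_disk ^ c_on_disk
  print(hex(a_rebuilt))           # 0x0, not 0x11: block a is garbage,
                                  # although a itself was never written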

Battery-powered RAID controllers may lower the risk, as they either
hold a copy of the not-yet-written blocks in their RAM (or flash)
until the power is restored, or they supply power to the hard disks
until all the blocks are written.

Similar things may happen with other parity- or checksum-based mechanisms,
like RAID 3, 6, or some (nowadays mostly extinct) proprietary solutions.


Best regards

Markus Schaber




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Simon Slavin

On 3 Mar 2014, at 8:18am, Markus Schaber  wrote:

> Another way to bust your data is to rely on RAID 5 or 6 or similar, at least
> if the software does not take special care.
> 
> With those mechanisms, updating a block always results in at least 2 disk 
> writes: the data block and the checksum block. There's a small time window
> where only one of those blocks has physically reached the disk. Now, when the
> power fails during said time window, and the third disk fails, its content
> will be restored using the new data block and the old checksum (or vice
> versa), leaving your data garbled.

What the heck?  Is this a particular implementation of RAID or a conceptual 
problem with how RAID is designed to work?  It sounds like a bug in one 
particular model rather than a general problem with how RAID works.

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Markus Schaber
Hi,

sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
> On 3 Mar 2014, at 3:41am, romtek  wrote:
> 
[...]
> 
> Here's a SQLite engineer writing about the same thing: section 3.1 of
> 
> 
> 
> Your disk hardware, its firmware driver, the OS's storage driver, the OS's
> file system and the OS file API all get a chance to pretend they're doing
> 'sync()' but actually just return 'done it'.  And if even one of them lies,
> synchronisation appears to happen instantly and your software runs faster.  A
> virtualising system is another chance to do processing faster by lying about
> synchronisation.  And unless something crashes or you have a power failure
> nobody will ever find out.

Another way to bust your data is to rely on RAID 5 or 6 or similar, at least
if the software does not take special care.

With those mechanisms, updating a block always results in at least 2 disk 
writes: the data block and the checksum block. There's a small time window
where only one of those blocks has physically reached the disk. Now, when the
power fails during said time window, and the third disk fails, its content
will be restored using the new data block and the old checksum (or vice
versa), leaving your data garbled.




Best regards

Markus Schaber




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread Simon Slavin

On 3 Mar 2014, at 3:41am, romtek  wrote:

> Thanks, Simon. Interestingly, for this server, disk operations aren't
> particularly fast. One SQLite write op takes about 4 times longer than on a
> HostGator server.

That supports the idea that storage is simulated (or 'virtualised') to a high 
degree.

> I wonder if what I/you described also means that this file system isn't
> likely to support file locks needed for SQLite to control access to the DB
> file to prevent data corruption.

I think that's likely.  Virtualisation is a pig for ACID: it introduces yet 
another gap between processing and physical changes in your storage which will 
be read after restart.

For those playing along at home, doing transactions properly depends on 
synchronisation.  Something that comes up here repeatedly is that synchronising 
takes a long time, and since people buy kit that quotes faster figures, things 
at all levels lie about doing synchronisation.  This leads to articles like the 
following:



"Certain OS/Hardware configurations still fake fsync delivering great 
performance at the cost of being non ACID"

Here's a SQLite engineer writing about the same thing: section 3.1 of



Your disk hardware, its firmware driver, the OS's storage driver, the OS's file 
system and the OS file API all get a chance to pretend they're doing 'sync()' 
but actually just return 'done it'.  And if even one of them lies, 
synchronisation appears to happen instantly and your software runs faster.  A 
virtualising system is another chance to do processing faster by lying about 
synchronisation.  And unless something crashes or you have a power failure 
nobody will ever find out.
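
A crude probe for this, sketched in Python (the file name is arbitrary; the
~120/s ceiling assumes a single 7200 rpm disk, which can complete at most one
honest synced overwrite per revolution, so this is a heuristic, not a proof):

  import os, time

  N = 200
  fd = os.open("fsync_probe.bin", os.O_WRONLY | os.O_CREAT, 0o600)
  start = time.perf_counter()
  for _ in range(N):
      os.pwrite(fd, b"x" * 512, 0)   # overwrite the same 512-byte block
      os.fsync(fd)                   # ask for it to reach stable storage
  elapsed = time.perf_counter() - start
  os.close(fd)
  os.unlink("fsync_probe.bin")

  # Far more than ~120 synced writes/s on a spinning disk suggests some
  # layer is acknowledging the sync without actually performing it.
  print(f"{N / elapsed:.0f} synced writes/second")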

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread romtek
Thanks, Simon. Interestingly, for this server, disk operations aren't
particularly fast. One SQLite write op takes about 4 times longer than on a
HostGator server.

I wonder if what I/you described also means that this file system isn't
likely to support file locks needed for SQLite to control access to the DB
file to prevent data corruption.


On Sun, Mar 2, 2014 at 9:18 PM, Simon Slavin  wrote:

>
> On 3 Mar 2014, at 2:14am, romtek  wrote:
>
> > On one of my hosting servers (this one is a VPS), a bunch of write
> > operations take practically the same amount of time when they are
> > performed individually as when they are performed as one explicit
> > transaction. I've varied the number of ops up to 200 -- with similar
> > results. Why is that? What could it be about the file system or disk
> > drive that could cause this?
>
> I'm betting it's running on newer hardware more suited to virtual
> machines.  One of the problems with virtual computers is that their disk
> storage is often virtualised to a very high degree.  For instance, what
> appears to the computer to be disk storage may be entirely held on SSD, or
> on a fast internal disk, and flushed to a huge but slower disk just once a
> minute.  Or once every five minutes.  Or once an hour.  This is an
> efficient way to simulate 20 to 200 virtual machines on what is one lump of
> hardware.
>
> A result of this is that disk operations are very fast.  However, any
> 'sync()' operations do nothing at all because nobody cares what happens if
> an imaginary computer crashes.  Since most of the time involved in ending a
> transaction is waiting for synchronisation, this produces the results you
> note: syncing once takes the same time as syncing 200 times, because
> neither of them is doing much.  And a result of that is that if the
> computer crashes, you lose the last minute/minutes/hour of processing and
> the sync() state of database operations is suspect.
>
> Go read their terms and find out what they guarantee to do if a virtual
> machine crashes.  You'll probably find that they'll get a virtual computer
> running again very quickly but don't make promises about how recent the
> image they restore will be.
>
> Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread Simon Slavin

On 3 Mar 2014, at 2:14am, romtek  wrote:

> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individually as when they are performed as one explicit transaction. I've
> varied the number of ops up to 200 -- with similar results. Why is that?
> What could it be about the file system or disk drive that could cause this?

I'm betting it's running on newer hardware more suited to virtual machines.  
One of the problems with virtual computers is that their disk storage is often 
virtualised to a very high degree.  For instance, what appears to the computer 
to be disk storage may be entirely held on SSD, or on a fast internal disk, and 
flushed to a huge but slower disk just once a minute.  Or once every five 
minutes.  Or once an hour.  This is an efficient way to simulate 20 to 200 
virtual machines on what is one lump of hardware.

A result of this is that disk operations are very fast.  However, any 'sync()' 
operations do nothing at all because nobody cares what happens if an imaginary 
computer crashes.  Since most of the time involved in ending a transaction is 
waiting for synchronisation, this produces the results you note: syncing once 
takes the same time as syncing 200 times, because neither of them is doing 
much.  And a result of that is that if the computer crashes, you lose the last 
minute/minutes/hour of processing and the sync() state of database operations 
is suspect.
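
For anyone who wants to reproduce the comparison, here is a minimal sketch 
using Python's sqlite3 module (the file name and row count are arbitrary; on 
honest storage the batched run should win by a wide margin, while 
near-identical times point at sync() being absorbed somewhere in the stack):

  import sqlite3, time

  def bench(batched):
      con = sqlite3.connect("bench.db", isolation_level=None)  # autocommit
      con.execute("PRAGMA synchronous=FULL")
      con.execute("DROP TABLE IF EXISTS t")
      con.execute("CREATE TABLE t (x INTEGER)")
      start = time.perf_counter()
      if batched:
          con.execute("BEGIN")   # one transaction, one sync at COMMIT
      for i in range(200):
          con.execute("INSERT INTO t VALUES (?)", (i,))
      if batched:
          con.execute("COMMIT")
      elapsed = time.perf_counter() - start
      con.close()
      return elapsed

  print(f"individual: {bench(False):.2f} s, batched: {bench(True):.2f} s")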

Go read their terms and find out what they guarantee to do if a virtual machine 
crashes.  You'll probably find that they'll get a virtual computer running 
again very quickly but don't make promises about how recent the image they 
restore will be.

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-02 Thread romtek
In case this gives somebody a clue, the server in question is on
http://vps.net/.


On Sun, Mar 2, 2014 at 8:14 PM, romtek  wrote:

> Hi,
>
> On one of my hosting servers (this one is a VPS), a bunch of write
> operations take practically the same amount of time when they are performed
> individually as when they are performed as one explicit transaction. I've
> varied the number of ops up to 200 -- with similar results. Why is that?
> What could it be about the file system or disk drive that could cause this?
>
> P.S. On my other servers (shared hosting on HostGator), batched writes take
> MUCH less time than individual write ops, as expected.
>