[sqlite] ATTACH DATABASE statement speed

2015-08-24 Thread Simon Slavin

On 24 Aug 2015, at 2:48am, Simon Slavin  wrote:

> My suspicion here is that there's a design fault in ZFS.

To correct myself here, what I meant to write was that there was a fault in the 
implementation of ZFS that Paolo is using, not in the basic design of ZFS 
itself.

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-24 Thread Simon Slavin

On 24 Aug 2015, at 2:32am, Roger Binns  wrote:

> On 08/19/2015 05:56 PM, Paolo Bolzoni wrote:
>> I left the pragma quick_check running during the night and it finished
>> in 2 hours and 46 minutes, so it is about 8 times slower than on
>> ext4. ZFS is an advanced filesystem with plenty of features, but this
>> speed difference is too much, I think.
> 
> I use btrfs, which like zfs is a copy-on-write filesystem.  It is
> possible for files to get very fragmented, which can result in
> dismal performance, even on an SSD and even for reads.  Random small
> writes especially aggravate this.  btrfs has an autodefrag mount option
> that addresses this in the background, and SQLite is specifically
> mentioned there, since all the browsers use it behind the scenes, as do
> many email clients.

Paolo is using Linux which does not do read-ahead optimization like some 
versions of Windows.  Therefore if he really is using an SSD then fragmentation 
is not an issue.

The other thing that makes me think fragmentation is a red herring is the 
extent to which ZFS is slower.  I have seen fragmentation make something take 
twice as long.  I might even believe, under some weird situation, it makes 
something take 4 times as long.  But not 6 or 8.

My suspicion here is that there's a design fault in ZFS.  I can't call it a bug 
under the rules of this list, since it is giving the right result, just 
ridiculously slowly.  But whatever it is I'm betting that it's not a problem in 
the code for SQLite, it's a problem in the code for ZFS.  And it'll take a ZFS 
expert to solve it, probably on a ZFS mailing list, not this one.

The one possible exception would be if the SQLite database page size and the 
ZFS sector size are causing problems with one another.  But here too the best 
diagnosis would be done by a ZFS expert.  And anyway, /nothing/ should cause a 
slow-down of a factor of 8.
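
Both values are easy to check.  As a starting point, a minimal sketch (ZFS 
exposes its allocation unit as the dataset's recordsize property; the dataset 
name tank/data and the file name the.db are only placeholders):

  sqlite3 the.db 'PRAGMA page_size;'   # SQLite page size in bytes
  zfs get recordsize tank/data         # ZFS record size of the dataset holding the db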

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-23 Thread Roger Binns

On 08/23/2015 06:48 PM, Simon Slavin wrote:
> Paolo is using Linux which does not do read-ahead optimization like
> some versions of Windows.  Therefore if he really is using an SSD
> then fragmentation is not an issue.

You are confusing things.  The data structures used in a copy-on-write
filesystem (as both btrfs and zfs are) are what get fragmented.  This
is not like a traditional filesystem that updates existing data
structures in place.  It is more analogous to garbage collection.

I promise you that it really does become an issue, even with an SSD.
When a file is spread across ten thousand fragments, latency and
throughput suffer.

> I have seen fragmentation make something take twice as long.

You are talking about traditional fragmentation, not the degenerate
state that copy-on-write filesystems can reach.

> My suspicion here is that there's a design fault in ZFS.

Unless zfs does automatic defragging, it will have exactly the same
problems as btrfs.  This is inherent in how the data structures are
laid out and in the fact that nothing is modified in place.

Roger


[sqlite] ATTACH DATABASE statement speed

2015-08-23 Thread Roger Binns

On 08/19/2015 05:56 PM, Paolo Bolzoni wrote:
> I left the pragma quick_check running during the night and it finished
> in 2 hours and 46 minutes, so it is about 8 times slower than on
> ext4. ZFS is an advanced filesystem with plenty of features, but this
> speed difference is too much, I think.

I use btrfs, which like zfs is a copy-on-write filesystem.  It is
possible for files to get very fragmented, which can result in
dismal performance, even on an SSD and even for reads.  Random small
writes especially aggravate this.  btrfs has an autodefrag mount option
that addresses this in the background, and SQLite is specifically
mentioned there, since all the browsers use it behind the scenes, as do
many email clients.

  https://btrfs.wiki.kernel.org/index.php/Gotchas  (2nd last section)
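
Autodefrag is a mount option, so a hedged way to try it on an existing mount
(the mount point /data is a placeholder, and whether it can be toggled by a
plain remount may depend on the kernel) is:

  # enable background defragmentation on an already-mounted btrfs volume
  mount -o remount,autodefrag /data
  # or persistently, via the options column of /etc/fstab:
  #   <device>  /data  btrfs  defaults,autodefrag  0 0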

The filefrag command may be helpful if implemented for zfs and will
tell you if fragmentation is a problem.
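
For example (the path is a placeholder; on zfs this relies on the FIEMAP
ioctl, so it may simply report that the operation is unsupported):

  # reports how many extents the file occupies; a very large count
  # means heavy fragmentation
  filefrag /path/to/the.db
  # add -v to list each extent individually
  filefrag -v /path/to/the.db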

Virtual machine images are another problematic file type with similar
read/write patterns to SQLite.

Copy on write filesystems don't modify existing (meta)data, but rather
write new versions that point to the existing data for bits that
aren't changed.  Repeat this many times and the chains of pointers get
very long, which is the fragmentation.

Roger


[sqlite] ATTACH DATABASE statement speed

2015-08-20 Thread Paolo Bolzoni
On Wed, Aug 19, 2015 at 6:53 PM, Simon Slavin  wrote:
> On 19 Aug 2015, at 10:46am, Paolo Bolzoni  
> wrote:

> Just by itself, the above information is significant to the SQLite team.  
> Perhaps when you have had a chance to confirm it a new thread can be started 
> called 'SQLite database on ZFS is very slow'.  This should pull any ZFS 
> experts out of the woodwork.  Unfortunately I am not one of them.

I left the pragma quick_check running during the night and it finished in
2 hours and 46 minutes, so it is about 8 times slower than on ext4.
ZFS is an advanced filesystem with plenty of features, but this speed
difference is too much, I think.

However, tests on a spinning-platter disk will need to wait until the end of the month.


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Paolo Bolzoni
On Wed, Aug 19, 2015 at 11:44 AM, Simon Slavin  wrote:

> Hmm.  Would it be possible to format an external drive in ZFS and try the 
> operations on files stored on that ?

As you might have guessed from the timezone I am not at home at the moment,
so I do not have spare external disks.
However, I do have an expendable 16GB USB stick, so I tried on that.

First I formatted it using zfs and ran PRAGMA quick_check; I
killed it after 40 minutes.
Then I formatted it using ext4 and ran the same PRAGMA quick_check; it
finished in about 25 minutes.

Now, I am trying again with zfs. But I think it already shows
something is indeed wrong.


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Paolo Bolzoni
I see. Thanks, nameless person known as sqlite-mail (npkasm for short),
what you say makes sense.
However, it does not explain why the pragma checks are so slow.

Anyhow, npkasm, I will keep it in mind for the future. Good point indeed.

On Wed, Aug 19, 2015 at 3:59 PM, sqlite-mail  
wrote:
> Hello !
>
> The problem with foreign keys most of the time is not the referenced
> table/field (normally a primary key, which needs no extra index) but the
> dependent table/field when it does not have a proper index: any time you
> update/delete a record on the referenced table, a linear scan is performed on
> all dependent tables, and that can be very time consuming depending on the
> number of records in them.
>
>   I've got this problem on a heavily foreign-key-constrained database and it
> took me a bit to realize that !
>
> Cheers !
>>  @nameless person known as sqlite-mail,
>> Yes, I do have foreign keys. But each relates to a primary key; there
>> are no explicit indexes on these primary keys, but they should not be
>> needed because primary keys are indexed automatically.
>> Or are they?
>>
>>
>>


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Steffen Mangold
>
> is there any target date for when the preRelease branch becomes an actual 
> release?
> 
> It is a really hard showstopper for our development at the moment. We checked 
> everything for compatibility before merging your current trunk to Visual 
> Studio 2015, and we forgot about the SQLite design tool. :(
>

Sorry! Wrong thread...


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Steffen Mangold
Hi,

is there any target date for when the preRelease branch becomes an actual 
release?

It is a really hard showstopper for our development at the moment. We checked 
everything for compatibility before merging your current trunk to Visual Studio 
2015, and we forgot about the SQLite design tool. :(

Regards
Steffen Mangold


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Paolo Bolzoni
@Simon
I tried zpool scrub on both my disks and it reported nothing; I also
tried running stress[1] on the disk and no error appeared in the log
or in stress itself.
However, copying the sqlite db to an external disk connected via USB3
and formatted with NTFS makes the pragma quick_check finish in little
more than 20 seconds and pragma integrity_check in 5 minutes.

So I think it is not a hardware problem, but ZFS messes up somehow.
Is there any known disagreement between ZFS and sqlite? In fact I have
the feeling that my system works fine for everything apart from
sqlite.
I read in the man page that recordsize could be relevant.
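
If it is the recordsize, a hedged experiment would be to compare the two
values and try matching them; the dataset name tank/home and file name the.db
are placeholders, and a changed recordsize only applies to newly written
blocks, so the database has to be rewritten afterwards:

  zfs get recordsize tank/home          # ZFS default is 128K
  sqlite3 the.db 'PRAGMA page_size;'    # SQLite page size (1024 bytes by default here)
  sudo zfs set recordsize=8K tank/home
  sqlite3 the.db 'PRAGMA page_size=8192; VACUUM;'   # rebuild the db with 8K pages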

Otherwise I have to reinstall the system. But it is of course time consuming.

[1]
http://people.seas.harvard.edu/~apw/stress/


@nameless person known as sqlite-mail,
Yes, I do have foreign keys. But each relates to a primary key; there
are no explicit indexes on these primary keys, but they should not be
needed because primary keys are indexed automatically.
Or are they?


@Marcus Grimm,
It actually seems to have no effect on my disk. The test goes fast for a
while and slows down afterwards.


On Tue, Aug 18, 2015 at 9:12 PM, Marcus Grimm  
wrote:
> Just another guess:
> Have you tried to increase the page cache drastically ?
> I can remember that "PRAGMA quick_check" is pretty slow
> for bigger DBs without an increased page cache.
> Maybe something like:
> PRAGMA cache_size=50;
> PRAGMA quick_check;
>
> Marcus
>
>
> On 18.08.2015 at 12:38, Paolo Bolzoni wrote:
>>
>> It really seems something strange happens at filesystem level.
>>
>> This is a simple copy of slightly less than 1gb. It needs 9 seconds
>> including sync.
>> % date && sudo rsync -Pr italy-latest.osm.pbf / && sync && date
>> Tue Aug 18 19:22:23 JST 2015
>> sending incremental file list
>> italy-latest.osm.pbf
>>  946,976,283 100%  123.88MB/s0:00:07 (xfr#1, to-chk=0/1)
>> Tue Aug 18 19:22:32 JST 2015
>>
>>
>> However, when I start sqlite3 db 'PRAGMA quick_check;' the IO looks
>> normal for a while.
>> (I hope gmail don't mess up with the formatting...)
>>
>> 60, 90, 80 MB/s is kinda expected:
>> 08/18/2015 07:27:38 PM
>> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
>> encplate    0.00   0.00  820.00 13.00 62.11  0.26   153.34     1.87  2.27    1.14   73.46  1.20 99.80
>>             0.00   0.00 1214.50  0.00 94.58  0.00   159.49     0.96  0.78    0.78    0.00  0.78 95.20
>>             0.00   0.00 1008.50 22.00 78.09  0.41   155.99     1.50  1.46    0.96   24.16  0.93 95.80
>>
>> but after some seconds it drops terribly to less than 10MB/s
>> 08/18/2015 07:29:04 PM
>> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
>> encplate    0.00   0.00  124.00  3.50  9.88  0.12   160.72     1.67 12.99   11.21   76.14  7.65 97.50
>>             0.00   0.00   69.00 18.00  5.68  0.29   140.55     1.81 20.92   14.15   46.86 11.38 99.00
>>             0.00   0.00   86.00  0.00  7.05  0.00   167.91     1.04 12.03   12.03    0.00 11.24 96.70
>>
>> And so, going to 10MB per second it can easily require few hours...
>>
>>
>> I am out of ideas, but thanks for all the support.
>>
>>
>>
>> On Tue, Aug 18, 2015 at 7:26 PM, Simon Slavin 
>> wrote:
>>>
>>>
>>> On 18 Aug 2015, at 7:30am, Paolo Bolzoni 
>>> wrote:
>>>
 Any other idea of what can I try? Perhaps my filesystem is
 misconfigured?
>>>
>>>
>>> The long time you quote is not standard for SQLite and I don't think
>>> anyone can help you solve it by knowing picky details of SQLite.  I'm even
>>> surprised that it changed with your -O0 compilation since this suggests
>>> features of your compiler I didn't know about.
>>>
>>> It's possible one of the developer team can help but they're reading this
>>> and can pitch in if they think so.
>>>
>>> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Simon Slavin

On 19 Aug 2015, at 10:46am, Paolo Bolzoni  
wrote:

> As you might have guessed from the timezone I am not at home at the moment,
> so I do not have spare external disks.
> However, I do have an expendable 16GB USB stick, so I tried on that.
> 
> First I formatted it using zfs and ran PRAGMA quick_check; I
> killed it after 40 minutes.
> Then I formatted it using ext4 and ran the same PRAGMA quick_check; it
> finished in about 25 minutes.
> 
> Now, I am trying again with zfs. But I think it already shows
> something is indeed wrong.

Just by itself, the above information is significant to the SQLite team.  
Perhaps when you have had a chance to confirm it a new thread can be started 
called 'SQLite database on ZFS is very slow'.  This should pull any ZFS experts 
out of the woodwork.  Unfortunately I am not one of them.

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread sqlite-mail
Hello !  

The problem with foreign keys most of the time is not the referenced
table/field (normally a primary key, which needs no extra index) but the
dependent table/field when it does not have a proper index: any time you
update/delete a record on the referenced table, a linear scan is performed on
all dependent tables, and that can be very time consuming depending on the
number of records in them.

I've got this problem on a heavily foreign-key-constrained database and it
took me a bit to realize that !
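
Note that primary keys are indeed indexed automatically (or are the rowid
itself), but the foreign-key column in the dependent table is not; that index
has to be created by hand. A minimal sketch with made-up table names:

  -- "ways" is the referenced table, "way_nodes" the dependent one
  CREATE TABLE ways(way_id INTEGER PRIMARY KEY, tags TEXT);
  CREATE TABLE way_nodes(way_id INTEGER REFERENCES ways(way_id),
                         node_id INTEGER);

  -- without this index, every UPDATE/DELETE on "ways" scans all of "way_nodes"
  CREATE INDEX way_nodes_way_id ON way_nodes(way_id);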

Cheers !  
>  @nameless person known as sqlite-mail,
> Yes, I do have foreign keys. But each relates to a primary key; there
> are no explicit indexes on these primary keys, but they should not be
> needed because primary keys are indexed automatically.
> Or are they?
> 
>
>


[sqlite] ATTACH DATABASE statement speed

2015-08-19 Thread Simon Slavin

On 19 Aug 2015, at 3:27am, Paolo Bolzoni  
wrote:

> copying the sqlite db to an external disk connected via USB3
> and formatted with NTFS makes the pragma quick_check finish in little
> more than 20 seconds and pragma integrity_check in 5 minutes.

Those times are completely typical for SQLite.  Those are the sort of times I'd 
expect to see.

> So I think it is not a hardware problem, but ZFS messes up somehow.
> Is there any known disagreement between ZFS and sqlite?

Over the years SQLite has revealed bugs in various versions of ZFS.  However, 
I'm not aware of any problems with up-to-date versions of ZFS.

> In fact I have
> the feeling that my system works fine for everything apart from
> sqlite.
> I read in the man page that recordsize could be relevant.

You might be able to improve your times by 50% with judicious choices of 
tweaks.  But your original times strongly suggest disk problems of some kind.  
The current theory seems to be a ZFS formatting fault, but I know nothing about 
ZFS in real use so I don't know what to do about checking that.

Hmm.  Would it be possible to format an external drive in ZFS and try the 
operations on files stored on that ?

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Paolo Bolzoni
It really seems something strange happens at filesystem level.

This is a simple copy of slightly less than 1gb. It needs 9 seconds
including sync.
% date && sudo rsync -Pr italy-latest.osm.pbf / && sync && date
Tue Aug 18 19:22:23 JST 2015
sending incremental file list
italy-latest.osm.pbf
946,976,283 100%  123.88MB/s0:00:07 (xfr#1, to-chk=0/1)
Tue Aug 18 19:22:32 JST 2015


However, when I start sqlite3 db 'PRAGMA quick_check;' the IO looks
normal for a while.
(I hope gmail doesn't mess up the formatting...)

60, 90, 80 MB/s is kinda expected:
08/18/2015 07:27:38 PM
Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
encplate    0.00   0.00  820.00 13.00 62.11  0.26   153.34     1.87  2.27    1.14   73.46  1.20 99.80
            0.00   0.00 1214.50  0.00 94.58  0.00   159.49     0.96  0.78    0.78    0.00  0.78 95.20
            0.00   0.00 1008.50 22.00 78.09  0.41   155.99     1.50  1.46    0.96   24.16  0.93 95.80

but after some seconds it drops terribly to less than 10MB/s
08/18/2015 07:29:04 PM
Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
encplate    0.00   0.00  124.00  3.50  9.88  0.12   160.72     1.67 12.99   11.21   76.14  7.65 97.50
            0.00   0.00   69.00 18.00  5.68  0.29   140.55     1.81 20.92   14.15   46.86 11.38 99.00
            0.00   0.00   86.00  0.00  7.05  0.00   167.91     1.04 12.03   12.03    0.00 11.24 96.70

And so, dropping to 10MB per second, it can easily require a few hours...


I am out of ideas, but thanks for all the support.



On Tue, Aug 18, 2015 at 7:26 PM, Simon Slavin  wrote:
>
> On 18 Aug 2015, at 7:30am, Paolo Bolzoni  
> wrote:
>
>> Any other idea of what can I try? Perhaps my filesystem is misconfigured?
>
> The long time you quote is not standard for SQLite and I don't think anyone 
> can help you solve it by knowing picky details of SQLite.  I'm even surprised 
> that it changed with your -O0 compilation since this suggests features of 
> your compiler I didn't know about.
>
> It's possible one of the developer team can help but they're reading this and 
> can pitch in if they think so.
>
> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Paolo Bolzoni
It seems my system's CPUs overheat regularly and so the system throttles
the CPU speed.
That is definitely not good; I need to disassemble my laptop, clean
everything, and probably replace the CPU thermal paste.

However, this does not explain anything. sqlite3 appears IO bound, not
CPU bound: in fact the program seems to use 5% of the CPU, tops.

Any other idea of what can I try? Perhaps my filesystem is misconfigured?

On Tue, Aug 18, 2015 at 3:02 PM, Paolo Bolzoni
 wrote:
> I think the ATTACH slowness was a misunderstanding caused by the
> optimization, as it does not happen anymore now that I compile with -O0.
> The long time was actually spent deleting the table from the input db;
> this also explains why sqlite was creating the journal file for the
> operation.
>
> I am aware it sounds implausible, but I really have no idea why the
> standard error message was not appearing before.
>
> Yes, I am using Linux
>
> % uname -a
> Linux slyrogue 4.1.4-1-ARCH #1 SMP PREEMPT Mon Aug 3 21:30:37 UTC 2015
> x86_64 GNU/Linux
>
>
> About hardware failures, I do see some
> mce: [Hardware Error]: Machine check events logged
> lines in dmesg. I will try to investigate more.
>
> On Tue, Aug 18, 2015 at 12:28 PM, Simon Slavin  
> wrote:
>>
>> On 18 Aug 2015, at 4:14am, Paolo Bolzoni  
>> wrote:
>>
>>> In the input and output database I had a table of the same name and using:
>>>
>>> DROP TABLE IF EXISTS WaysNodes;
>>>
>>> sqlite3 was actually deleting the table of the input db, this was
>>> unexpected as I thought that
>>> without any prefix it deleted from the main database.
>>
>> So did I.  I'm glad you have a working solution but there is still something 
>> wrong.  Not only is your text above correct but it shouldn't take 13 hours 
>> to do an integrity check on a 13 Gig database with simple indexes.
>>
>> I'm still suspecting some kind of hardware problem.  I'll be interested to 
>> know what happens if you ATTACH your copy of the input database, on the 
>> other disk, rather than the original.  Does it still take a very long time ?
>>
>> Hmm.  What OS are you using again ?  Oh, you used iostat.  Linux.  So not 
>> the Windows pre-read caching bug.  Okay.
>>
>> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Paolo Bolzoni
I think the ATTACH slowness was a misunderstanding caused by the
optimization, as it does not happen anymore now that I compile with -O0.
The long time was actually spent deleting the table from the input db;
this also explains why sqlite was creating the journal file for the
operation.

I am aware it sounds implausible, but I really have no idea why the
standard error message was not appearing before.

Yes, I am using Linux

% uname -a
Linux slyrogue 4.1.4-1-ARCH #1 SMP PREEMPT Mon Aug 3 21:30:37 UTC 2015
x86_64 GNU/Linux


About hardware failures, I do see some
mce: [Hardware Error]: Machine check events logged
lines in dmesg. I will try to investigate more.

On Tue, Aug 18, 2015 at 12:28 PM, Simon Slavin  wrote:
>
> On 18 Aug 2015, at 4:14am, Paolo Bolzoni  
> wrote:
>
>> In the input and output database I had a table of the same name and using:
>>
>> DROP TABLE IF EXISTS WaysNodes;
>>
>> sqlite3 was actually deleting the table of the input db, this was
>> unexpected as I thought that
>> without any prefix it deleted from the main database.
>
> So did I.  I'm glad you have a working solution but there is still something 
> wrong.  Not only is your text above correct but it shouldn't take 13 hours to 
> do an integrity check on a 13 Gig database with simple indexes.
>
> I'm still suspecting some kind of hardware problem.  I'll be interested to 
> know what happens if you ATTACH your copy of the input database, on the other 
> disk, rather than the original.  Does it still take a very long time ?
>
> Hmm.  What OS are you using again ?  Oh, you used iostat.  Linux.  So not the 
> Windows pre-read caching bug.  Okay.
>
> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Marcus Grimm
Just another guess:
Have you tried to increase the page cache drastically ?
I can remember that "PRAGMA quick_check" is pretty slow
for bigger DBs without an increased page cache.
Maybe something like:
PRAGMA cache_size=50;
PRAGMA quick_check;
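
Note that a positive cache_size argument is counted in pages while a negative
one is counted in KiB, so for a cache of roughly 2 GB the sketch would be:

PRAGMA cache_size = -2000000;   -- negative value: size in KiB, about 2 GB
PRAGMA quick_check;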

Marcus

On 18.08.2015 at 12:38, Paolo Bolzoni wrote:
> It really seems something strange happens at filesystem level.
>
> This is a simple copy of slightly less than 1gb. It needs 9 seconds
> including sync.
> % date && sudo rsync -Pr italy-latest.osm.pbf / && sync && date
> Tue Aug 18 19:22:23 JST 2015
> sending incremental file list
> italy-latest.osm.pbf
>  946,976,283 100%  123.88MB/s0:00:07 (xfr#1, to-chk=0/1)
> Tue Aug 18 19:22:32 JST 2015
>
>
> However, when I start sqlite3 db 'PRAGMA quick_check;' the IO looks
> normal for a while.
> (I hope gmail don't mess up with the formatting...)
>
> 60, 90, 80 MB/s is kinda expected:
> 08/18/2015 07:27:38 PM
> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> encplate    0.00   0.00  820.00 13.00 62.11  0.26   153.34     1.87  2.27    1.14   73.46  1.20 99.80
>             0.00   0.00 1214.50  0.00 94.58  0.00   159.49     0.96  0.78    0.78    0.00  0.78 95.20
>             0.00   0.00 1008.50 22.00 78.09  0.41   155.99     1.50  1.46    0.96   24.16  0.93 95.80
>
> but after some seconds it drops terribly to less than 10MB/s
> 08/18/2015 07:29:04 PM
> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> encplate    0.00   0.00  124.00  3.50  9.88  0.12   160.72     1.67 12.99   11.21   76.14  7.65 97.50
>             0.00   0.00   69.00 18.00  5.68  0.29   140.55     1.81 20.92   14.15   46.86 11.38 99.00
>             0.00   0.00   86.00  0.00  7.05  0.00   167.91     1.04 12.03   12.03    0.00 11.24 96.70
>
> And so, going to 10MB per second it can easily require few hours...
>
>
> I am out of ideas, but thanks for all the support.
>
>
>
> On Tue, Aug 18, 2015 at 7:26 PM, Simon Slavin  wrote:
>>
>> On 18 Aug 2015, at 7:30am, Paolo Bolzoni  
>> wrote:
>>
>>> Any other idea of what can I try? Perhaps my filesystem is misconfigured?
>>
>> The long time you quote is not standard for SQLite and I don't think anyone 
>> can help you solve it by knowing picky details of SQLite.  I'm even 
>> surprised that it changed with your -O0 compilation since this suggests 
>> features of your compiler I didn't know about.
>>
>> It's possible one of the developer team can help but they're reading this 
>> and can pitch in if they think so.
>>
>> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread sqlite-mail
Hello !  

Do you have foreign keys on your tables ? And if so, do you have indexes on
them ?

A database with foreign keys and no indexes can run very slowly for mass
insert/update/delete.

Cheers !  

>  Tue Aug 18 2015 12:38:51 CEST from "Paolo Bolzoni"
>  Subject: Re: [sqlite] ATTACH DATABASE statement speed
>
>  It really seems something strange happens at filesystem level.
> 
> This is a simple copy of slightly less than 1gb. It needs 9 seconds
> including sync.
> % date && sudo rsync -Pr italy-latest.osm.pbf / && sync && date
> Tue Aug 18 19:22:23 JST 2015
> sending incremental file list
> italy-latest.osm.pbf
> 946,976,283 100% 123.88MB/s 0:00:07 (xfr#1, to-chk=0/1)
> Tue Aug 18 19:22:32 JST 2015
> 
> 
> However, when I start sqlite3 db 'PRAGMA quick_check;' the IO looks
> normal for a while.
> (I hope gmail don't mess up with the formatting...)
> 
> 60, 90, 80 MB/s is kinda expected:
> 08/18/2015 07:27:38 PM
> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> encplate    0.00   0.00  820.00 13.00 62.11  0.26   153.34     1.87  2.27    1.14   73.46  1.20 99.80
>             0.00   0.00 1214.50  0.00 94.58  0.00   159.49     0.96  0.78    0.78    0.00  0.78 95.20
>             0.00   0.00 1008.50 22.00 78.09  0.41   155.99     1.50  1.46    0.96   24.16  0.93 95.80
>
> but after some seconds it drops terribly to less than 10MB/s
> 08/18/2015 07:29:04 PM
> Device:   rrqm/s wrqm/s     r/s   w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> encplate    0.00   0.00  124.00  3.50  9.88  0.12   160.72     1.67 12.99   11.21   76.14  7.65 97.50
>             0.00   0.00   69.00 18.00  5.68  0.29   140.55     1.81 20.92   14.15   46.86 11.38 99.00
>             0.00   0.00   86.00  0.00  7.05  0.00   167.91     1.04 12.03   12.03    0.00 11.24 96.70
> 
> And so, going to 10MB per second it can easily require few hours...
> 
> 
> I am out of ideas, but thanks for all the support.
> 
> 
> 
> On Tue, Aug 18, 2015 at 7:26 PM, Simon Slavin  wrote:
> 
>  
>>On 18 Aug 2015, at 7:30am, Paolo Bolzoni 
>>wrote:
>> 
>>  
>>>Any other idea of what can I try? Perhaps my filesystem is misconfigured?
>>> 

>>  The long time you quote is not standard for SQLite and I don't think
>>anyone can help you solve it by knowing picky details of SQLite. I'm even
>>surprised that it changed with your -O0 compilation since this suggests
>>features of your compiler I didn't know about.
>> 
>> It's possible one of the developer team can help but they're reading this
>>and can pitch in if they think so.
>> 
>> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Paolo Bolzoni
I think I got it. As is often the case when something is really weird, it
was not what I expected.

In the input and output database I had a table with the same name, and using:

DROP TABLE IF EXISTS WaysNodes;

sqlite3 was actually deleting the table of the input db. This was
unexpected, as I thought that without any prefix it deleted from the
main database.
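
For the record, an unqualified table name is resolved against every attached
schema, not just main, so when main has no table of that name the statement
can fall through to the attached db. Qualifying the schema makes the target
explicit:

DROP TABLE IF EXISTS main.WaysNodes;   -- only ever touches the output database
DROP TABLE IF EXISTS indb.WaysNodes;   -- explicit, if the attached input really is meant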

Besides, I think the compiler optimization changed a bit the order in
which I was seeing the standard error messages, so it appeared the time
was being spent in the ATTACH rather than in cleaning and preparing the
output db.

However, when I applied Simon's suggestion of vacuuming after deleting
everything but before re-creating, and attached the input db afterwards,
the problem apparently disappeared. I also compiled without
optimizations; it does not really matter in a program comprised mainly
of sql statements anyway.

About PRAGMA integrity_check, it returned OK for the db, but after
about 13 hours.

I also tried to copy the db in another disk, copy back and compare
with cmp. All fine.


Cheers,
Paolo

On Mon, Aug 17, 2015 at 9:26 PM, Simon Slavin  wrote:
>
> On 17 Aug 2015, at 9:22am, Paolo Bolzoni  
> wrote:
>
>> The pragma integrity_check is still running... Maybe my disk sucks for
>> some reason?
>
> I wonder whether the hard disk is faulty or the file is on a bad sector.
>
> If the other tests show nothing, can you duplicate the input database file ?  
> Do something that forces the OS to read the whole thing.  If you have a good 
> idea of how long reading and writing 13GB should take (ten minutes ?), this 
> may take an obviously unreasonable time.
>
> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Simon Slavin

On 18 Aug 2015, at 11:38am, Paolo Bolzoni  
wrote:

> but after some seconds it drops terribly to less than 10MB/s

This, along with some information from your previous posts, all goes to suggest 
you have a hardware problem of some kind.  My guess is that your hard disk is 
becoming faulty, either in general or with failures on certain sectors.  
Whatever it is it's not related to SQLite.

Do you have an external drive available ?  Could you move the database to that 
drive and do the same thing and see if you get a similar slowdown ?

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Simon Slavin

On 18 Aug 2015, at 7:30am, Paolo Bolzoni  
wrote:

> Any other idea of what can I try? Perhaps my filesystem is misconfigured?

The long time you quote is not standard for SQLite and I don't think anyone can 
help you solve it by knowing picky details of SQLite.  I'm even surprised that 
it changed with your -O0 compilation since this suggests features of your 
compiler I didn't know about.

It's possible one of the developer team can help but they're reading this and 
can pitch in if they think so.

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Simon Slavin

On 18 Aug 2015, at 4:14am, Paolo Bolzoni  
wrote:

> In the input and output database I had a table of the same name and using:
> 
> DROP TABLE IF EXISTS WaysNodes;
> 
> sqlite3 was actually deleting the table of the input db, this was
> unexpected as I thought that
> without any prefix it deleted from the main database.

So did I.  I'm glad you have a working solution but there is still something 
wrong.  Not only is your text above correct but it shouldn't take 13 hours to 
do an integrity check on a 13 Gig database with simple indexes.

I'm still suspecting some kind of hardware problem.  I'll be interested to know 
what happens if you ATTACH your copy of the input database, on the other disk, 
rather than the original.  Does it still take a very long time ?

Hmm.  What OS are you using again ?  Oh, you used iostat.  Linux.  So not the 
Windows pre-read caching bug.  Okay.

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Paolo Bolzoni
The pragma integrity_check is still running... Maybe my disk sucks for
some reason?

The output of iostat looks normal to me though..

$ iostat -dmN  encplate 1 5
Linux 4.1.4-1-ARCH (slyrogue)   08/17/2015  _x86_64_(8 CPU)

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
encplate        151.63         6.93         4.35     225847     141971

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
encplate        111.00         7.80         0.07          7          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
encplate        158.00        10.29         0.47         10          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
encplate        110.00         8.43         0.07          8          0

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
encplate        131.00        11.36         0.00         11          0


On Mon, Aug 17, 2015 at 4:28 PM, Simon Slavin  wrote:
>
> On 17 Aug 2015, at 8:17am, Simon Slavin  wrote:
>
>> Also, out of interest, can you run "PRAGMA integrity_check" on all the 
>> database files involved ?
>
> Also out of interest, instead of just ATTACHing the input database,
>
> 1) Close the output database
> 2) Reopen the output database
> 3) ATTACH the input database
>
> Does it take the same amount of time ?  If so, which step takes the time ?
>
> Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Paolo Bolzoni
Dear list,

I have a program that materializes a subset of a database into a second
database for future use.

To do so I create the database object on the OUTPUT database, create
tables, create the indexes, and vacuum it just in case I reused an
old file.

After that, I attach the INPUT db and copy the useful rows.

The input db is about 13GB, so not really large, however the step on
this sql statement (where ? is of course bound to the db file name)

ATTACH DATABASE ? AS indb;

requires several minutes (or more, it does not really seem to finish)!
Besides, sqlite3 has already created a 5GB journal file.


I am confused: why can an ATTACH DATABASE statement be so slow?

I am not sure if useful, but sqlite3 --version returns:
3.8.11.1 2015-07-29 20:00:57 cf538e2783e468bbc25e7cb2a9ee64d3e0e80b2f

Your faithfully,
Paolo


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Rowan Worth
On 17 August 2015 at 14:52, Paolo Bolzoni 
wrote:

> After that, I attach the INPUT db and copy the useful rows.
>
> The input db is about 13GB, so not really large, however the step on
> this sql statement (where ? is of course bound to the db file name)
>
> ATTACH DATABASE ? AS indb;
>
> requires several minutes (or more, it does not really seem to finish)!
> Besides, sqlite3 has already created a 5GB journal file.
>

Are you sure it's the process trying to attach the DB that created the 5GB
journal? ie. is it possible the 5GB journal is a hot journal leftover from
something else, and needs to be rolled back? Not sure if that happens
during ATTACH or not...
-Rowan


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Simon Slavin

On 17 Aug 2015, at 9:22am, Paolo Bolzoni  
wrote:

> The pragma integrity_check is still running... Maybe my disk sucks for
> some reason?

I wonder whether the hard disk is faulty or the file is on a bad sector.  

If the other tests show nothing, can you duplicate the input database file ?  
Do something that forces the OS to read the whole thing.  If you have a good 
idea of how long reading and writing 13GB should take (ten minutes ?), this may 
take an obviously unreasonable time.
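
One hedged way to force that read from a shell (the file name is a
placeholder) is:

  # read every byte of the file and time it; a repeat run may be served
  # from cache, so the first run is the honest one
  time cat input.db > /dev/null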

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Simon Slavin

On 17 Aug 2015, at 8:17am, Simon Slavin  wrote:

> Also, out of interest, can you run "PRAGMA integrity_check" on all the 
> database files involved ?

Also out of interest, instead of just ATTACHing the input database,

1) Close the output database
2) Reopen the output database
3) ATTACH the input database

Does it take the same amount of time ?  If so, which step takes the time ?

Simon.


[sqlite] ATTACH DATABASE statement speed

2015-08-17 Thread Simon Slavin

On 17 Aug 2015, at 7:52am, Paolo Bolzoni  
wrote:

> I have a program that materializes a subset of a database into a second
> database for future use.
> 
> To do so I create the database object on the OUTPUT database, create
> tables, create the indexes, and vacuum it just in case I reused an
> old file.

This will be faster, with almost the same effect, if you

1) delete tables you're going to recreate
2) VACUUM
3) create the new tables and indexes
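
In SQL terms, a minimal sketch (the table and column names are only
illustrative):

  DROP TABLE IF EXISTS WaysNodes;                       -- 1) drop the old copy
  VACUUM;                                               -- 2) compact the emptied file
  CREATE TABLE WaysNodes(way_id INTEGER,                -- 3) recreate table and index
                         node_id INTEGER);
  CREATE INDEX WaysNodes_way_id ON WaysNodes(way_id);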

> After I attach the INPUT db and I copy the useful lines.
> 
> The input db is about 13GB, so not really large, however the step on
> this sql statement (where ? is of course bound to the db file name)
> 
> ATTACH DATABASE ? AS indb;
> 
> requires several minutes (or more, it does not really seem to finish)!
> Besides sqlite3 already created a 5GB journal file.
> 
> 
> I am confused, why an attach database statement can be so slow?

No idea.  It shouldn't happen.  I can only think it's trying to clear the cache 
and read some of the newly attached database into the cache.  Try opening the 
output database then doing the ATTACH in the SQLite shell tool.  Does it take 
the same amount of time ?
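
A minimal way to time that from the shell (the file names are placeholders):

  $ sqlite3 output.db
  sqlite> .timer on
  sqlite> ATTACH DATABASE 'input.db' AS indb;

With .timer on, the shell prints the elapsed time after the statement completes.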

Also, out of interest, can you run "PRAGMA integrity_check" on all the database 
files involved ?
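
Since PRAGMA statements accept a schema prefix, the attached database can be
checked from the same session, for example:

  PRAGMA main.integrity_check;   -- the output database
  PRAGMA indb.integrity_check;   -- the attached input database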

Simon.