Re: [sqlite] Vacuum into

2019-02-08 Thread David Raymond
Hmm. So after a normal vacuum, "Non-sequential pages" is basically 1 for 
everything, tables and indexes alike. On a read-only "vacuum into" it's 
anywhere from 22% to 99.5%, usually being more towards 99%.

Numbers for "Table X without any indices" seem identical apart from the 
Non-sequential pages being close to 99%. So the bloat seems to be all in the 
indexes.

Brain now fried. Can provide any additional requested info next week.


-Original Message-
From: David Raymond 
Sent: Friday, February 08, 2019 2:18 PM
To: SQLite mailing list
Subject: RE: [sqlite] Vacuum into

But now, here's the weird part, ladies and gentlemen.

I started this on one drive, vacuum into completed, but then I had to do real 
work on that drive, so copied the file over to an unused drive to do the 
previously reported timing without it being biased by other things going on. 
But I saw something weird, which I just ran again to confirm.

"vacuum into" created a larger file when the original file was read only than 
it did when the original file was writable.

Have run this now twice with the same output file size, so my bad eyes are at 
least being consistent in what they're seeing. Am on Windows 7 here. When the 
original db file was read only (Windows properties, not any flag to the CLI) 
the resulting file from "vacuum into" was 6% bigger. When I undid the read only 
checkbox and ran the same thing again, the resulting file was the same size as 
the original.

Original file size:  20,467,359,744
Vacuum into size when original is writable:  20,467,359,744
Vacuum into size when original is read only: 21,760,516,096


Timing also fell between the "vacuum into" and regular vacuum times for the 
writable file. Well, wall clock time anyway; "user" time was way up.

sqlite> vacuum into 'vac_into_from_ro_file.sqlite';
Run Time: real 1107.448 user 853.185069 sys 87.048558


Gonna run sqlite3_analyzer.exe on them now to see what's up. But closing in on 
the end of the work day/week here.


-Original Message-
From: David Raymond 
Sent: Friday, February 08, 2019 1:53 PM
To: SQLite mailing list
Subject: RE: [sqlite] Vacuum into

Non-scientific "let's just try it" results

Short version:
Original file had been vacuumed already as the last thing that had happened to 
it.
File size: 20,467,359,744

sqlite> vacuum into 'vac_into.sqlite';
Run Time: real 589.577 user 222.941029 sys 57.829571

sqlite> vacuum;
Run Time: real 1429.063 user 236.325915 sys 199.322478

Synchronous was off, journal mode was normal rollback journal. Was using the 
computer for normal work while doing it, but nothing else was using the 
physical drive these were on. Resulting files were the exact same size as the 
originals (it had been previously vacuumed as mentioned)


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Richard Hipp
Sent: Friday, February 08, 2019 11:29 AM
To: SQLite mailing list
Subject: Re: [sqlite] Vacuum into

On 2/8/19, David Raymond  wrote:
> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...

Think you will be hard-pressed to get a 2x performance increase.
Faster, yes.  2x faster, I'm doubtful.  But perhaps I'm wrong.  Please
try it and see and report back.

>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Was there an announcement of 3.27?

2019-02-08 Thread Dominique Pellé
David Raymond wrote:

> SQLite version 3.27.1 is now available on the SQLite website:
>
>   https://sqlite.org/
>   https://sqlite.org/download.html
>   https://sqlite.org/releaselog/3_27_1.html

Release notes https://sqlite.org/releaselog/3_27_1.html say:

=== BEGIN QUOTE ===
Added the remove_diacritics=2 option to FTS3 and FTS5.
=== END QUOTE ===

I wonder what this does.  The FTS3 and FTS5 docs were not updated;
they still only document remove_diacritics=0 and 1.
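For what it's worth, the option is accepted as a unicode61 tokenizer argument in 3.27.1. A quick FTS5 sketch (the precise difference between =2 and =1 is exactly what the updated docs should clarify, so take the value here as illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# remove_diacritics is an argument to the unicode61 tokenizer; the new
# value 2 is accepted by SQLite 3.27.1 (semantics pending updated docs).
con.execute("""CREATE VIRTUAL TABLE docs
               USING fts5(body, tokenize = 'unicode61 remove_diacritics 2')""")
con.execute("INSERT INTO docs VALUES ('visit the café')")
# Diacritics are stripped at both index and query time, so an unaccented
# query term matches the accented text.
hits = con.execute("SELECT body FROM docs WHERE docs MATCH 'cafe'").fetchall()
```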

I'll also take the opportunity to report a few typos in
https://sqlite.org/fts5.html:

- the second character replaced with an *asterix* (-> asterisk)
- fts5 extension function made as part *of of* (repeated word "of")
-  *an the* (-> the) auxiliary data is set to NULL

Regards
Dominique


[sqlite] (no subject)

2019-02-08 Thread Jayram Singh



Re: [sqlite] Vacuum into

2019-02-08 Thread David Raymond
But now, here's the weird part, ladies and gentlemen.

I started this on one drive, vacuum into completed, but then I had to do real 
work on that drive, so copied the file over to an unused drive to do the 
previously reported timing without it being biased by other things going on. 
But I saw something weird, which I just ran again to confirm.

"vacuum into" created a larger file when the original file was read only than 
it did when the original file was writable.

Have run this now twice with the same output file size, so my bad eyes are at 
least being consistent in what they're seeing. Am on Windows 7 here. When the 
original db file was read only (Windows properties, not any flag to the CLI) 
the resulting file from "vacuum into" was 6% bigger. When I undid the read only 
checkbox and ran the same thing again, the resulting file was the same size as 
the original.

Original file size:  20,467,359,744
Vacuum into size when original is writable:  20,467,359,744
Vacuum into size when original is read only: 21,760,516,096


Timing also fell between the "vacuum into" and regular vacuum times for the 
writable file. Well, wall clock time anyway; "user" time was way up.

sqlite> vacuum into 'vac_into_from_ro_file.sqlite';
Run Time: real 1107.448 user 853.185069 sys 87.048558


Gonna run sqlite3_analyzer.exe on them now to see what's up. But closing in on 
the end of the work day/week here.


-Original Message-
From: David Raymond 
Sent: Friday, February 08, 2019 1:53 PM
To: SQLite mailing list
Subject: RE: [sqlite] Vacuum into

Non-scientific "let's just try it" results

Short version:
Original file had been vacuumed already as the last thing that had happened to 
it.
File size: 20,467,359,744

sqlite> vacuum into 'vac_into.sqlite';
Run Time: real 589.577 user 222.941029 sys 57.829571

sqlite> vacuum;
Run Time: real 1429.063 user 236.325915 sys 199.322478

Synchronous was off, journal mode was normal rollback journal. Was using the 
computer for normal work while doing it, but nothing else was using the 
physical drive these were on. Resulting files were the exact same size as the 
originals (it had been previously vacuumed as mentioned)


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Richard Hipp
Sent: Friday, February 08, 2019 11:29 AM
To: SQLite mailing list
Subject: Re: [sqlite] Vacuum into

On 2/8/19, David Raymond  wrote:
> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...

Think you will be hard-pressed to get a 2x performance increase.
Faster, yes.  2x faster, I'm doubtful.  But perhaps I'm wrong.  Please
try it and see and report back.

>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Vacuum into

2019-02-08 Thread Richard Hipp
On 2/8/19, David Raymond  wrote:
> Non-scientific "let's just try it" results
>
> Short version:
> Original file had been vacuumed already as the last thing that had happened
> to it.
> File size: 20,467,359,744
>
> sqlite> vacuum into 'vac_into.sqlite';
> Run Time: real 589.577 user 222.941029 sys 57.829571
>
> sqlite> vacuum;
> Run Time: real 1429.063 user 236.325915 sys 199.322478
>
> Synchronous was off, journal mode was normal rollback journal.

Fair enough.  I wasn't thinking about the overhead of journaling when
a normal VACUUM copies the new database back over top of itself.  It
is having to do two complete copies of the database, not just one.  So
you may well get more than 2x better performance.
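The comparison is easy to reproduce: both statements run on their own connection, timed end to end. A rough sketch (wall-clock only, hypothetical paths):

```python
import sqlite3
import time

def time_sql(path, sql):
    # Time a single statement on a fresh connection (rough, wall-clock only).
    con = sqlite3.connect(path)
    t0 = time.perf_counter()
    con.execute(sql)
    con.close()
    return time.perf_counter() - t0

# Hypothetical usage mirroring the measurements reported above:
# t_into = time_sql("big.db", "VACUUM INTO 'vac_into.sqlite'")
# t_full = time_sql("big.db", "VACUUM")
```

A regular VACUUM writes the rebuilt database into a temporary file and then copies it back over the original under journal protection, so it pays for two full copies where VACUUM INTO pays for one.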

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Was there an announcement of 3.27?

2019-02-08 Thread Richard Hipp
On 2/8/19, Simon Slavin  wrote:
> On 8 Feb 2019, at 6:25pm, David Raymond  wrote:
>
>>  https://sqlite.org/releaselog/3_27_1.html
>
> "Add options "--expanded", "--normalized", "--plain", "--profile", "--row",
> "--stmt", and "--close" to the ".trace" command."
>
> Is there a discussion of these anywhere ?

Not that I know of, apart from the ".help" text in the CLI.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Vacuum into

2019-02-08 Thread David Raymond
Non-scientific "let's just try it" results

Short version:
Original file had been vacuumed already as the last thing that had happened to 
it.
File size: 20,467,359,744

sqlite> vacuum into 'vac_into.sqlite';
Run Time: real 589.577 user 222.941029 sys 57.829571

sqlite> vacuum;
Run Time: real 1429.063 user 236.325915 sys 199.322478

Synchronous was off, journal mode was normal rollback journal. Was using the 
computer for normal work while doing it, but nothing else was using the 
physical drive these were on. Resulting files were the exact same size as the 
originals (it had been previously vacuumed as mentioned)


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Richard Hipp
Sent: Friday, February 08, 2019 11:29 AM
To: SQLite mailing list
Subject: Re: [sqlite] Vacuum into

On 2/8/19, David Raymond  wrote:
> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...

Think you will be hard-pressed to get a 2x performance increase.
Faster, yes.  2x faster, I'm doubtful.  But perhaps I'm wrong.  Please
try it and see and report back.

>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Was there an announcement of 3.27?

2019-02-08 Thread Simon Slavin
On 8 Feb 2019, at 6:25pm, David Raymond  wrote:

>  https://sqlite.org/releaselog/3_27_1.html

"Add options "--expanded", "--normalized", "--plain", "--profile", "--row", 
"--stmt", and "--close" to the ".trace" command."

Is there a discussion of these anywhere ?

Simon.


Re: [sqlite] Was there an announcement of 3.27?

2019-02-08 Thread David Raymond
As mentioned, the release announcements go to a separate mailing list, so people 
can get those without being deluged by traffic from this list. Pasting the 
announcement here.


-Original Message-
From: sqlite-announce [mailto:sqlite-announce-boun...@sqlite.org] On Behalf Of 
Richard Hipp
Sent: Friday, February 08, 2019 10:20 AM
To: sqlite-announce
Subject: [sqlite-announce] Version 3.27.1

SQLite version 3.27.1 is now available on the SQLite website:

  https://sqlite.org/
  https://sqlite.org/download.html
  https://sqlite.org/releaselog/3_27_1.html

The only big enhancement in this release is the addition of the VACUUM
INTO command, which allows a database to be backed up atomically such
that the backup is minimal in size and contains no forensic traces of
deleted content. There are also enhancements to the command-line
interface and to some of the APIs such as sqlite3_deserialize(), and
general improvements to the robustness of SQLite when it is reading
and writing maliciously corrupted database files.

There was a 3.27.0 release yesterday. But a bug report came in just
after we had tagged the 3.27.0 release and before we had uploaded the
code and made the release announcement.  The bug report was for a
query optimizer problem in 3.20.0 and was an older bug, completely
unrelated to recent changes.  Rather than announce the 3.27.0 release,
then turn around and announce a 3.27.1 patch release the next day, we 
just omitted the 3.27.0 release announcement, and thereby (hopefully)
spared people the trouble of having to upgrade twice. It would have
been better, of course, if the bug report had come to our attention
prior to tagging the 3.27.0 release so that we could have fixed the
problem in 3.27.0. But sometimes bug reports arrive at inopportune
moments.

Some of the Windows build products on the download page are still on
version 3.27.0.  They will be updated to version 3.27.1 within 24
hours.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-announce mailing list
sqlite-annou...@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-announce



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Christian Schmitz
Sent: Friday, February 08, 2019 12:47 PM
To: SQLite mailing list
Subject: Re: [sqlite] Was there an announcement of 3.27?



> Am 08.02.2019 um 18:43 schrieb Jens Alfke :
> 
> I see 3.27 was released yesterday (and quickly followed up with 3.27.1.) I


The announcement was on the sqlite-annou...@mailinglists.sqlite.org list.

3.27 was ready to go, but a bug was found, so 3.27.1 was announced today so 
that everyone could skip 3.27.

Sincerely
Christian

-- 
Read our blog about news on our plugins:

http://www.mbsplugins.de/




Re: [sqlite] ISO8601 vs Numeric Timestamp for Date Storage

2019-02-08 Thread Ben Asher
Thanks all! Super helpful.

Ben

On Wed, Feb 6, 2019 at 11:55 AM Ben Asher  wrote:

> Hi there! We're having a debate at my company about date storage in
> SQLite. SQLite has builtin support for ISO8601 in its date functions, so
> some folks have started storing dates as ISO8601 SQLite-compatible date
> strings. Are there pitfalls to storing dates this way compared to a unix
> timestamp? I'm curious to know if anyone has experience and would highly
> recommend sticking to one or the other for a particular reason. I'd also be
> grateful if anyone could point me to any articles exploring this subject.
>
> Thanks!
>
> Ben
>


-- 
Ben
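For reference, SQLite's built-in date functions accept both representations and convert between them, so neither choice locks you out of the built-ins. A quick sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# ISO8601 text -> unix epoch, and back, using only built-in date functions.
epoch, = con.execute(
    "SELECT CAST(strftime('%s', '2019-02-08 15:45:00') AS INTEGER)").fetchone()
iso, = con.execute(
    "SELECT datetime(?, 'unixepoch')", (epoch,)).fetchone()
```

The usual trade-off: ISO8601 text is human-readable and sorts correctly as a string, while an integer epoch is smaller and cheaper to compare.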


Re: [sqlite] Was there an announcement of 3.27?

2019-02-08 Thread Christian Schmitz


> Am 08.02.2019 um 18:43 schrieb Jens Alfke :
> 
> I see 3.27 was released yesterday (and quickly followed up with 3.27.1.) I


The announcement was on the sqlite-annou...@mailinglists.sqlite.org list.

3.27 was ready to go, but a bug was found, so 3.27.1 was announced today so 
that everyone could skip 3.27.

Sincerely
Christian

-- 
Read our blog about news on our plugins:

http://www.mbsplugins.de/




[sqlite] Was there an announcement of 3.27?

2019-02-08 Thread Jens Alfke
I see 3.27 was released yesterday (and quickly followed up with 3.27.1.) I only 
discovered this because the post asking about VACUUM INTO gave me a clue that 
this might be a newly-released feature, so I went to the release-history page. 
We’ve been awaiting this release because it fixes a bad query bug we reported.

Was there an email posted to this list announcing the release? I didn’t see 
one, but it’s possible my spam filter went haywire.

—Jens


Re: [sqlite] GROUP BY and ICU collation

2019-02-08 Thread Michael Herger

Remember that fancy collations don't just look at 1 character at a time, they look 
at the whole thing, and can do surprising stuff based on that. In this case the 
order of preference for the collation looks like "when it's part of a larger 
word, then treating 'S' and 'Š' the same is more important than separating them. But 
when it's just 1 character then they're different."


Uh... oh... that might explain things... Then I'll have to re-think 
things considerably.


Thank you all for your input!

Michael




So 'S' might be before 'Š', but in words it might go

'Sam'
'Šam'
'Skunk'
'Škunk'
'Sudden'
'Šudden'

rather than a simple character-at-a-time order of

'Sam'
'Skunk'
'Sudden'
'Šam'
'Škunk'
'Šudden'


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Michael Herger
Sent: Friday, February 08, 2019 4:13 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] GROUP BY and ICU collation

Hi Keith,

thanks for your response (which partly goes beyond my understanding, but
I'm learning :-)).


But you are not using the same "expression" for selecting, sorting, and 
grouping.  That is, you need to specify:

SELECT expression, count(distinct id)
  FROM artists
GROUP BY expression
ORDER BY expression;

where expression is the expression that you want to use

SELECT substr(name collate de_DE, 1, 1), count(distinct id)
  FROM artists
GROUP BY substr(name collate de_DE, 1, 1)
ORDER BY substr(name collate de_DE, 1, 1);

If you do not do so then you cannot expect the "GROUP BY" to be using the same ordering 
as the "ORDER BY" nor the select to be returning the same value that was used to do the 
grouping and sorting.


Ok, tried that:

SELECT SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
FROM contributors
GROUP BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
ORDER BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)

Resulted in:

"A"
"C"
"D"
"P"
"R"
"S"
"Š"
"T"
"W"

Whereas this:

SELECT contributors.name, contributors.namesort COLLATE de_DE
FROM contributors
ORDER BY contributors.namesort COLLATE de_DE

...resulted in this:

...
["Roddy Woomble & Band", "RODDY WOOMBLE BAND"],
["Samuel Yirga", "SAMUEL YIRGA"],
["Stephin Merritt", "STEPHIN MERRITT"],
["Šuma Čovjek", "ŠUMA ČOVJEK"],
["Syriana", "SYRIANA"],
["Tom Griesgraber", "TOM GRIESGRABER"],
...

So despite my using the same expression for the GROUP as for the ORDER,
the grouping seems to ignore the collation, whereas the sorting alone
would not.

I'm using the ICU extension (otherwise the sorting wouldn't work either,
right?). Could that extension be responsible for this odd behaviour?

I've put a small sample .db file in my dropbox:
https://www.dropbox.com/s/w4h6n3cyamt6yc0/library.db?dl=0




(This substitutes the collation NOCASE for the de_DE since I no have a de_DE 
collation:

sqlite> select substr(name collate nocase, 1, 1), count(distinct id)
 ...>   FROM artists
 ...> group by substr(name collate nocase, 1, 1)
 ...> order by substr(name collate nocase, 1, 1);
QUERY PLAN
|--SCAN TABLE artists (~1048576 rows)
`--USE TEMP B-TREE FOR GROUP BY
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 57000  Start at 57
1 Noop   1 4 000
2 SorterOpen 3 3 0 k(1,NOCASE)00
3 Integer0 5 000  r[5]=0; clear abort 
flag
4 Null   0 8 800  r[8..8]=NULL
5 Gosub  7 52000
6 OpenRead   0 3 0 2  00  root=3 iDb=0; artists
7 ColumnsUsed0 0 0 3  00
8 Explain8 0 0 SCAN TABLE artists (~1048576 rows)  00
9 Noop   0 0 000  Begin WHERE-loop0: 
artists
10Rewind 0 20000
11  Noop   0 0 000  Begin WHERE-core
12  Column 0 1 13   00  r[13]=artists.name
13  Function0  6 1310substr(3)  03  
r[10]=func(r[13..15])
14  Column 0 1 11   00  r[11]=artists.name
15  Column 0 0 12   00  r[12]=artists.id
16  MakeRecord 103 16   00  
r[16]=mkrec(r[10..12])
17  SorterInsert   3 16000  key=r[16]
18  Noop   0 0 000  End WHERE-core
19Next   0 11001
20Noop   0 0 000  End WHERE-loop0: 
artists
21OpenPseudo 4 16300  3 columns in r[16]
22SorterSort 3 56000  GROUP BY sort
23

Re: [sqlite] Problem to delete all data with a LIKE

2019-02-08 Thread Simon Slavin
On 8 Feb 2019, at 1:49pm, Ludovic Gasc Lemaire  wrote:

> Thanks for your tip, this command has found problems, see below.
> It should explain this strange behavior with DELETE ?

Yes.  The corruption you have found in your database can explain lots of weird 
and non-standard behaviour.  In fact, even if you can get commands to work now, 
you may lose data later.  Do not proceed with this database.

You may be able to rescue some or all of the data from this database.  To do 
this use the SQLite CLI tool to dump the database as SQL commands, then create 
a new database file by executing those commands.  If you're not familiar with 
'.dump' and '.read', you can find more details here:





Simon.
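The .dump / .read rescue Simon describes can also be sketched in Python with `iterdump()` (file names here are placeholders):

```python
import sqlite3

def rescue(damaged_path, fresh_path):
    # Equivalent of the CLI's .dump followed by .read: serialize whatever
    # can still be read as SQL statements, then replay them into a
    # brand-new database file.
    src = sqlite3.connect(damaged_path)
    dst = sqlite3.connect(fresh_path)
    try:
        dst.executescript("\n".join(src.iterdump()))
    finally:
        src.close()
        dst.close()
```

With serious corruption the dump may stop partway through, so check how much of the data actually made it across.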


Re: [sqlite] Vacuum into

2019-02-08 Thread Chris Locke
If you renamed file1.db to file1.bak, opened file1.bak, ran vacuum into
file1.db, and then closed file1.bak, you'd have a pre-vacuum backup (just in
case...) and it streamlines the process somewhat.
Obviously, you'd have to rename the file back again if the vacuum failed
(out of disk space, etc.).

Just a thought.


Chris

On Fri, Feb 8, 2019 at 4:22 PM David Raymond 
wrote:

> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...
>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)


Re: [sqlite] GROUP BY and ICU collation

2019-02-08 Thread Keith Medcalf

Must be whatever the ICU collating sequence does.  It apparently sorts into an 
order you like, but does not sort the characters as being "the same".  Unless 
they are "the same" they will not be in the same group.

The unifuzz "unaccented" collation does sort the two characters as "the same" 
(note that it is just happenstance what character you got for "S"):

sqlite> select substr(name collate unaccented,1,1) from contributors group by 
substr(name collate unaccented,1,1);
QUERY PLAN
|--SCAN TABLE contributors USING COVERING INDEX contributorsNameIndex (~48 rows)
`--USE TEMP B-TREE FOR GROUP BY
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 47000  Start at 47
1 SorterOpen 1 2 0 k(1,UNACCENTED)  00
2 Integer0 3 000  r[3]=0; clear abort 
flag
3 Null   0 6 600  r[6..6]=NULL
4 Gosub  5 43000
5 OpenRead   2 5 0 k(2,,) 00  root=5 iDb=0; 
contributorsNameIndex
6 ColumnsUsed2 0 0 1  00
7 Explain7 0 0 SCAN TABLE contributors USING COVERING 
INDEX contributorsNameIndex (~48 rows)  00
8 Noop   0 0 000  Begin WHERE-loop0: 
contributors
9 Rewind 2 188 0  00
10  Noop   0 0 000  Begin WHERE-core
11  Column 2 0 10   00  
r[10]=contributors.name
12  Function0  6 108 substr(3)  03  r[8]=func(r[10..12])
13  Column 2 0 900  
r[9]=contributors.name
14  MakeRecord 8 2 13   00  r[13]=mkrec(r[8..9])
15  SorterInsert   1 13000  key=r[13]
16  Noop   0 0 000  End WHERE-core
17Next   2 10001
18Noop   0 0 000  End WHERE-loop0: 
contributors
19OpenPseudo 3 13200  2 columns in r[13]
20SorterSort 1 46000  GROUP BY sort
21  SorterData 1 13300  r[13]=data
22  Column 3 0 700  r[7]=
23  Compare6 7 1 k(1,UNACCENTED)  00  r[6] <-> r[7]
24  Jump   252925   00
25  Move   7 6 100  r[6]=r[7]
26  Gosub  4 37000  output one row
27  IfPos  3 46000  if r[3]>0 then 
r[3]-=0, goto 46; check abort flag
28  Gosub  5 43000  reset accumulator
29  If 2 31000
30  Column 3 1 100  r[1]=
31  Integer1 2 000  r[2]=1; indicate 
data in accumulator
32SorterNext 1 21000
33Gosub  4 37000  output final row
34Goto   0 46000
35Integer1 3 000  r[3]=1; set abort flag
36Return 4 0 000
37IfPos  2 39000  if r[2]>0 then 
r[2]-=0, goto 39; Groupby result generator entry point
38Return 4 0 000
39Copy   1 15000  r[15]=r[1]
40Function0  6 1514substr(3)  03  r[14]=func(r[15..17])
41ResultRow  141 000  output=r[14]
42Return 4 0 000  end groupby result 
generator
43Null   0 1 100  r[1..1]=NULL
44Integer0 2 000  r[2]=0; indicate 
accumulator empty
45Return 5 0 000
46Halt   0 0 000
47Transaction0 0 518   0  01  usesStmtJournal=0
48Integer1 11000  r[11]=1
49Integer1 12000  r[12]=1
50Integer1 16000  r[16]=1
51Integer1 17000  r[17]=1
52Goto   0 1 000
A
B
C
D
E
F
G
I
J
K
L
M
P
R
S
T
W
sqlite>
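The same behaviour can be sketched with a hand-rolled "unaccented" collation in Python's sqlite3 module (a stand-in for the unifuzz one; the column and sample names are illustrative):

```python
import sqlite3
import unicodedata

def unaccented(a, b):
    # Compare case-insensitively after stripping combining marks, so that
    # 'S' and 'Š' really are "the same" for both ORDER BY and GROUP BY.
    def key(s):
        decomposed = unicodedata.normalize("NFD", s)
        return "".join(c for c in decomposed
                       if not unicodedata.combining(c)).upper()
    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)

con = sqlite3.connect(":memory:")
con.create_collation("unaccented", unaccented)
con.execute("CREATE TABLE contributors (name TEXT)")
con.executemany("INSERT INTO contributors VALUES (?)",
                [("Samuel Yirga",), ("Šuma Čovjek",), ("Syriana",)])
groups = con.execute(
    "SELECT substr(name COLLATE unaccented, 1, 1) FROM contributors"
    " GROUP BY substr(name COLLATE unaccented, 1, 1)").fetchall()
# All three names land in a single group; whether it reports 'S' or 'Š'
# is happenstance, as noted above.
```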

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Michael Herger
>Sent: 

Re: [sqlite] Vacuum into

2019-02-08 Thread Richard Hipp
On 2/8/19, David Raymond  wrote:
> So to make sure I'm understanding it ok, with the new vacuum into command,
> if I'm the only user of a file, then the sequence...
>
> open file1
> vacuum into file2
> close file1
> delete file1
> rename file2 to file1
>
> ...is going to be potentially more than twice as fast as the old...

Think you will be hard-pressed to get a 2x performance increase.
Faster, yes.  2x faster, I'm doubtful.  But perhaps I'm wrong.  Please
try it and see and report back.

>
> open file1
> vacuum
>
> ...as it saves the whole re-write of the original file, along with all the
> rollback journal or wal writes that entails. Correct?
>
> (With of course the whole "make sure it actually finished and didn't just
> die" caveats before doing the delete and rename)


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Committing changes to the database without releasing a writer lock

2019-02-08 Thread Simon Slavin
On 7 Feb 2019, at 9:53pm, Theodore Dubois  wrote:

> I'd like to essentially commit changes to disk in the middle of the 
> transaction, resulting in a transaction that is atomic with respect to other 
> database connections but is two atomic transactions with respect to the 
> filesystem.

Would SAVEPOINT work for you ?



The thing you're actually asking for doesn't happen in SQL.  The state of the 
database on the disk (including the journal file as well as the database file) 
is meant to reflect what other connections are seeing.  I don't think we can do 
exactly what you asked for.

Simon.
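For completeness, a sketch of the SAVEPOINT mechanics. Note this gives nested rollback points *inside* one transaction; it does not make the first half durable on disk, which is what was actually asked for:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE t (x)")
con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
con.execute("SAVEPOINT midpoint")       # nested point inside the transaction
con.execute("INSERT INTO t VALUES (2)")
con.execute("ROLLBACK TO midpoint")     # undoes only the work after it
con.execute("COMMIT")                   # the first insert survives
rows = con.execute("SELECT x FROM t").fetchall()
```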


[sqlite] Vacuum into

2019-02-08 Thread David Raymond
So to make sure I'm understanding it ok, with the new vacuum into command, if 
I'm the only user of a file, then the sequence...

open file1
vacuum into file2
close file1
delete file1
rename file2 to file1

...is going to be potentially more than twice as fast as the old...

open file1
vacuum

...as it saves the whole re-write of the original file, along with all the 
rollback journal or wal writes that entails. Correct?

(With of course the whole "make sure it actually finished and didn't just die" 
caveats before doing the delete and rename)
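The sequence above can be sketched in Python (a hypothetical helper; VACUUM INTO requires SQLite 3.27 or later, and this is only safe if you are the sole user of the file):

```python
import os
import sqlite3

def vacuum_replace(path):
    """Compact a database with VACUUM INTO, then swap the copy into place.

    Sketch of the open/vacuum-into/close/delete/rename sequence above;
    the swap happens only after VACUUM INTO has finished successfully.
    """
    tmp = path + ".vacuumed"
    if os.path.exists(tmp):
        os.remove(tmp)            # VACUUM INTO refuses to overwrite a file
    con = sqlite3.connect(path)
    try:
        con.execute(f"VACUUM INTO '{tmp}'")  # writes a compacted copy
    finally:
        con.close()
    os.replace(tmp, path)         # covers both the delete and the rename
```

If VACUUM INTO raises, the original file is untouched and only the partial `.vacuumed` copy needs cleaning up, which addresses the "make sure it actually finished" caveat.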


Re: [sqlite] GROUP BY and ICU collation

2019-02-08 Thread Richard Damon
On 2/8/19 9:11 AM, David Raymond wrote:
> Remember that fancy collations don't just look at 1 character at a time, they 
> look at the whole thing, and can do surprising stuff based on that. In this 
> case the order of preference for the collation looks like "when it's part of 
> a larger word, then treating 'S' and 'Š' the same is more important than 
> separating them. But when it's just 1 character then they're different."
>
> So 'S' might be before 'Š', but in words it might go
>
> 'Sam'
> 'Šam'
> 'Skunk'
> 'Škunk'
> 'Sudden'
> 'Šudden'
>
> rather than a simple character-at-a-time order of
>
> 'Sam'
> 'Skunk'
> 'Sudden'
> 'Šam'
> 'Škunk'
> 'Šudden'
>
Actually, the way those collations work is that on the first pass, S and Š
compare equal, but if two words compare equal on the first pass, then
effectively a second pass takes place, and on that second pass S and Š
compare with an order (I thought I remembered there even being some cases
that needed a third pass).
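A toy illustration of that multi-pass comparison (a hypothetical one-entry weight table; real ICU tailorings are far richer): the primary weight groups the letters, and the raw string breaks primary ties.

```python
# Primary pass: 'Š' weighs the same as 'S'; secondary pass: the raw string
# breaks primary ties, so 'Sam' sorts just before 'Šam', and so on.
def two_level_key(word):
    primary = word.replace("Š", "S").replace("š", "s")
    return (primary, word)

words = ["Šudden", "Sam", "Škunk", "Sudden", "Šam", "Skunk"]
ordered = sorted(words, key=two_level_key)
# -> ['Sam', 'Šam', 'Skunk', 'Škunk', 'Sudden', 'Šudden']
```

This reproduces the interleaved ordering shown in the quoted example above.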

-- 
Richard Damon



Re: [sqlite] GROUP BY and ICU collation

2019-02-08 Thread David Raymond
Remember that fancy collations don't just look at 1 character at a time, they 
look at the whole thing, and can do surprising stuff based on that. In this 
case the order of preference for the collation looks like "when it's part of a 
larger word, then treating 'S' and 'Š' the same is more important than 
separating them. But when it's just 1 character then they're different."

So 'S' might be before 'Š', but in words it might go

'Sam'
'Šam'
'Skunk'
'Škunk'
'Sudden'
'Šudden'

rather than a simple character-at-a-time order of

'Sam'
'Skunk'
'Sudden'
'Šam'
'Škunk'
'Šudden'


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Michael Herger
Sent: Friday, February 08, 2019 4:13 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] GROUP BY and ICU collation

Hi Keith,

thanks for your response (which partly goes beyond my understanding, but 
I'm learning :-)).

> But you are not using the same "expression" for selecting, sorting, and 
> grouping.  That is, you need to specify:
> 
>SELECT expression, count(distinct id)
>  FROM artists
> GROUP BY expression
> ORDER BY expression;
> 
> where expression is the expression that you want to use
> 
>SELECT substr(name collate de_DE, 1, 1), count(distinct id)
>  FROM artists
> GROUP BY substr(name collate de_DE, 1, 1)
> ORDER BY substr(name collate de_DE, 1, 1);
> 
> If you do not do so then you cannot expect the "GROUP BY" to be using the 
> same ordering as the "ORDER BY" nor the select to be returning the same value 
> that was used to do the grouping and sorting.

Ok, tried that:

SELECT SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
FROM contributors
GROUP BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
ORDER BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)

Resulted in:

"A"
"C"
"D"
"P"
"R"
"S"
"Š"
"T"
"W"

Whereas this:

SELECT contributors.name, contributors.namesort COLLATE de_DE
FROM contributors
ORDER BY contributors.namesort COLLATE de_DE

...resulted in this:

...
   ["Roddy Woomble & Band", "RODDY WOOMBLE BAND"],
   ["Samuel Yirga", "SAMUEL YIRGA"],
   ["Stephin Merritt", "STEPHIN MERRITT"],
   ["Šuma Čovjek", "ŠUMA ČOVJEK"],
   ["Syriana", "SYRIANA"],
   ["Tom Griesgraber", "TOM GRIESGRABER"],
...

So despite my using the same expression for the GROUP as for the ORDER, 
the grouping seems to ignore the collation, whereas the sorting alone 
would not.

I'm using the ICU extension (otherwise the sorting wouldn't work either, 
right?). Could that extension be responsible for this odd behaviour?

I've put a small sample .db file in my dropbox:
https://www.dropbox.com/s/w4h6n3cyamt6yc0/library.db?dl=0
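
For comparison, here is a small stdlib-sqlite3 sketch that groups and orders
consistently. It uses the built-in NOCASE collation (standing in for ICU's
de_DE, which needs the extension loaded), and attaches COLLATE to the whole
substr() result rather than to its argument; the table and data are invented
for the demo:

```python
import sqlite3

# Same-expression GROUP BY / ORDER BY demo using the built-in NOCASE
# collation.  COLLATE is applied to the substr() result itself, so both
# clauses unambiguously compare with NOCASE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artists(id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO artists(name) VALUES (?)",
                 [("sam",), ("Tom",), ("tim",), ("Zed",)])
rows = conn.execute("""
    SELECT substr(name, 1, 1), count(DISTINCT id)
      FROM artists
     GROUP BY substr(name, 1, 1) COLLATE NOCASE
     ORDER BY substr(name, 1, 1) COLLATE NOCASE
""").fetchall()
print([count for _, count in rows])  # 'T' and 't' fall into one group
```
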


> 
> (This substitutes the collation NOCASE for the de_DE since I do not have a 
> de_DE collation:
> 
> sqlite> select substr(name collate nocase, 1, 1), count(distinct id)
> ...>   FROM artists
> ...> group by substr(name collate nocase, 1, 1)
> ...> order by substr(name collate nocase, 1, 1);
> QUERY PLAN
> |--SCAN TABLE artists (~1048576 rows)
> `--USE TEMP B-TREE FOR GROUP BY
> addr  opcode         p1    p2    p3    p4             p5  comment
> ----  -------------  ----  ----  ----  -------------  --  -------------
> 0 Init   0 57000  Start at 57
> 1 Noop   1 4 000
> 2 SorterOpen 3 3 0 k(1,NOCASE)00
> 3 Integer0 5 000  r[5]=0; clear abort 
> flag
> 4 Null   0 8 800  r[8..8]=NULL
> 5 Gosub  7 52000
> 6 OpenRead   0 3 0 2  00  root=3 iDb=0; 
> artists
> 7 ColumnsUsed0 0 0 3  00
> 8 Explain8 0 0 SCAN TABLE artists (~1048576 rows)  00
> 9 Noop   0 0 000  Begin WHERE-loop0: 
> artists
> 10Rewind 0 20000
> 11  Noop   0 0 000  Begin WHERE-core
> 12  Column 0 1 13   00  r[13]=artists.name
> 13  Function0  6 1310substr(3)  03  
> r[10]=func(r[13..15])
> 14  Column 0 1 11   00  r[11]=artists.name
> 15  Column 0 0 12   00  r[12]=artists.id
> 16  MakeRecord 103 16   00  
> r[16]=mkrec(r[10..12])
> 17  SorterInsert   3 16000  key=r[16]
> 18  Noop   0 0 000  End WHERE-core
> 19Next   0 11001
> 20Noop   0 0 000  End WHERE-loop0: 
> artists
> 21OpenPseudo 4 16300  3 columns in r[16]
> 22SorterSort 3 56000  GROUP BY sort
> 23  SorterData 3 16 

Re: [sqlite] Problem to delete all data with a LIKE

2019-02-08 Thread Ludovic Gasc Lemaire
Hi Simon,

Thanks for your tip; that command found problems, see below.
That should explain the strange behavior with DELETE, right?

I have found this procedure on the Web to fix the file:
https://www.2doapp.com/kb/article.php?id=743=38

Is that the correct way, or does another way exist?

Kind regards.
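
[For what it's worth, the classic dump-and-reload repair can be sketched with
Python's stdlib sqlite3. Filenames are illustrative, and a healthy demo
database stands in for the damaged one here; it salvages whatever the reader
can still traverse, with no guarantee of recovering everything:]

```python
import os
import sqlite3
import tempfile

# Sketch of the dump-and-reload repair: replay whatever SQL the reader can
# still produce into a fresh file.  A healthy demo database stands in for
# the damaged one; filenames are illustrative.
d = tempfile.mkdtemp()
old_path = os.path.join(d, "damaged.db")
new_path = os.path.join(d, "repaired.db")

old = sqlite3.connect(old_path)
old.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")
old.execute("INSERT INTO t(v) VALUES ('hello')")
old.commit()

new = sqlite3.connect(new_path)
new.executescript("\n".join(old.iterdump()))  # replay the dump
print(new.execute("PRAGMA integrity_check").fetchone()[0])
```
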

*** in database main ***
On tree page 290 cell 13: Rowid 21980476 out of order
On tree page 290 cell 6: Rowid 21980466 out of order
On tree page 290 cell 5: Rowid 21980467 out of order
On tree page 290 cell 4: Rowid 21980468 out of order
On tree page 290 cell 3: Rowid 21980468 out of order
On tree page 290 cell 2: Rowid 21980468 out of order
On tree page 290 cell 1: Rowid 21980469 out of order
On tree page 290 cell 0: Rowid 21980474 out of order
On tree page 289 cell 15: Rowid 21980442 out of order
On tree page 289 cell 14: Rowid 21980445 out of order
On tree page 289 cell 13: Rowid 21980448 out of order
On tree page 289 cell 12: Rowid 21980450 out of order
On tree page 289 cell 11: Rowid 21980450 out of order
On tree page 289 cell 10: Rowid 21980450 out of order
On tree page 289 cell 7: Rowid 21980448 out of order
On tree page 289 cell 6: Rowid 21980449 out of order
On tree page 288 cell 23: Rowid 21980426 out of order
On tree page 288 cell 22: Rowid 21980426 out of order
On tree page 288 cell 21: Rowid 21980426 out of order
On tree page 288 cell 20: Rowid 21980429 out of order
On tree page 288 cell 19: Rowid 21980429 out of order
On tree page 288 cell 18: Rowid 21980429 out of order
On tree page 288 cell 17: Rowid 21980429 out of order
On tree page 288 cell 16: Rowid 21980429 out of order
On tree page 288 cell 15: Rowid 21980432 out of order
On tree page 288 cell 14: Rowid 21980434 out of order
On tree page 288 cell 7: Rowid 21980415 out of order
On tree page 288 cell 6: Rowid 21980418 out of order
On tree page 287 cell 20: Rowid 21980410 out of order
On tree page 287 cell 7: Rowid 21980401 out of order
On tree page 287 cell 6: Rowid 21980402 out of order
On tree page 287 cell 5: Rowid 21980408 out of order
On tree page 286 cell 28: Rowid 21980386 out of order
On tree page 286 cell 27: Rowid 21980386 out of order
On tree page 286 cell 26: Rowid 21980388 out of order
On tree page 286 cell 25: Rowid 21980388 out of order
On tree page 286 cell 24: Rowid 21980393 out of order
On tree page 286 cell 23: Rowid 21980394 out of order
On tree page 286 cell 22: Rowid 21980396 out of order
On tree page 286 cell 21: Rowid 21980396 out of order
On tree page 286 cell 20: Rowid 21980400 out of order
On tree page 286 cell 19: Rowid 21980410 out of order
On tree page 286 cell 16: Rowid 21980408 out of order
On tree page 286 cell 15: Rowid 21980410 out of order
On tree page 286 cell 14: Rowid 21980410 out of order
On tree page 286 cell 13: Rowid 21980410 out of order
On tree page 286 cell 12: Rowid 21980410 out of order
On tree page 286 cell 11: Rowid 21980410 out of order
On tree page 286 cell 10: Rowid 21980410 out of order
On tree page 286 cell 9: Rowid 21980410 out of order
On tree page 286 cell 8: Rowid 21980410 out of order
On tree page 286 cell 7: Rowid 21980410 out of order
On tree page 286 cell 6: Rowid 21980411 out of order
On tree page 286 cell 5: Rowid 21980411 out of order
On tree page 286 cell 4: Rowid 21980411 out of order
On tree page 286 cell 3: Rowid 21980411 out of order
On tree page 286 cell 2: Rowid 21980411 out of order
On tree page 286 cell 1: Rowid 21980411 out of order
On tree page 286 cell 0: Rowid 21980411 out of order
On tree page 2 cell 281: Rowid 21980411 out of order
On tree page 285 cell 28: Rowid 21980411 out of order
On tree page 285 cell 27: Rowid 21980411 out of order
On tree page 285 cell 26: Rowid 21980411 out of order
On tree page 285 cell 25: Rowid 21980416 out of order
On tree page 285 cell 23: Rowid 21980411 out of order
On tree page 285 cell 22: Rowid 21980411 out of order
On tree page 285 cell 21: Rowid 21980411 out of order
On tree page 285 cell 20: Rowid 21980411 out of order
On tree page 285 cell 19: Rowid 21980411 out of order
On tree page 285 cell 18: Rowid 21980411 out of order
On tree page 285 cell 17: Rowid 21980412 out of order
On tree page 285 cell 16: Rowid 21980413 out of order
On tree page 285 cell 15: Rowid 21980414 out of order
On tree page 285 cell 14: Rowid 21980415 out of order
On tree page 285 cell 13: Rowid 21980415 out of order
On tree page 285 cell 12: Rowid 21980416 out of order
On tree page 285 cell 11: Rowid 21980416 out of order
On tree page 285 cell 10: Rowid 21980416 out of order
On tree page 285 cell 9: Rowid 21980416 out of order
On tree page 285 cell 8: Rowid 21980417 out of order
On tree page 285 cell 7: Rowid 21980418 out of order
On tree page 285 cell 6: Rowid 21980425 out of order
On tree page 285 cell 4: Rowid 21980418 out of order
On tree page 285 cell 3: Rowid 21980418 out of order
On tree page 285 cell 2: Rowid 21980418 out of order
On tree page 285 cell 1: Rowid 21980418 out of order
On tree page 285 cell 0: Rowid 21980418 out of order
On tree 

[sqlite] Committing changes to the database without releasing a writer lock

2019-02-08 Thread Theodore Dubois
I'd like to essentially commit changes to disk in the middle of a 
transaction, resulting in something that is atomic with respect to other 
database connections but appears as two atomic transactions with respect to 
the filesystem.

When I first found the locking_mode pragma, my understanding was that it 
disabled the releasing of locks on transaction commit or rollback. So I came up 
with this:

pragma locking_mode=exclusive;
begin;
-- transaction part 1
commit;
pragma locking_mode=normal; -- exclusive lock will be released when next 
transaction completes
begin;
-- transaction part 2
commit;
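
That pragma sequence can be exercised from Python's stdlib sqlite3 in
rollback-journal (non-WAL) mode; the file path and table are illustrative:

```python
import os
import sqlite3
import tempfile

# Exercising the locking_mode pragma sequence from the post, in
# rollback-journal mode.  The file path is illustrative.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)   # manual transactions
mode = conn.execute("PRAGMA locking_mode=exclusive").fetchone()[0]

conn.execute("BEGIN")
conn.execute("CREATE TABLE t(x)")         # first write takes the lock
conn.execute("INSERT INTO t VALUES (1)")  # transaction part 1
conn.execute("COMMIT")                    # committed; exclusive lock retained

conn.execute("PRAGMA locking_mode=normal")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (2)")  # transaction part 2
conn.execute("COMMIT")                    # lock released after this commit

print(mode, conn.execute("SELECT count(*) FROM t").fetchone()[0])
```
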

The problem is that this doesn't work in WAL mode. My understanding according 
to pragma lock_status is that readers in WAL mode never release their shared 
locks, so acquiring an exclusive lock like this would require all readers to be 
closed. It seems like any connection in WAL mode acquires a shared lock, and 
there's some other lock being used to synchronize writers that pragma 
lock_status isn't showing me. So in order to do this I'd need some way to 
control that other write synchronization lock.

Is there a sane way to do this?

~Theodore



Re: [sqlite] GROUP BY and ICU collation

2019-02-08 Thread Michael Herger

Hi Keith,

thanks for your response (which partly goes beyond my understanding, but 
I'm learning :-)).



But you are not using the same "expression" for selecting, sorting, and 
grouping.  That is, you need to specify:

   SELECT expression, count(distinct id)
 FROM artists
GROUP BY expression
ORDER BY expression;

where expression is the expression that you want to use

   SELECT substr(name collate de_DE, 1, 1), count(distinct id)
 FROM artists
GROUP BY substr(name collate de_DE, 1, 1)
ORDER BY substr(name collate de_DE, 1, 1);

If you do not do so then you cannot expect the "GROUP BY" to be using the same ordering 
as the "ORDER BY" nor the select to be returning the same value that was used to do the 
grouping and sorting.


Ok, tried that:

SELECT SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
FROM contributors
GROUP BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)
ORDER BY SUBSTR(contributors.namesort COLLATE de_DE , 1, 1)

Resulted in:

"A"
"C"
"D"
"P"
"R"
"S"
"Š"
"T"
"W"

Whereas this:

SELECT contributors.name, contributors.namesort COLLATE de_DE
FROM contributors
ORDER BY contributors.namesort COLLATE de_DE

...resulted in this:

...
  ["Roddy Woomble & Band", "RODDY WOOMBLE BAND"],
  ["Samuel Yirga", "SAMUEL YIRGA"],
  ["Stephin Merritt", "STEPHIN MERRITT"],
  ["Šuma Čovjek", "ŠUMA ČOVJEK"],
  ["Syriana", "SYRIANA"],
  ["Tom Griesgraber", "TOM GRIESGRABER"],
...

So despite my using the same expression for the GROUP as for the ORDER, 
the grouping seems to ignore the collation, whereas the sorting alone 
would not.


I'm using the ICU extension (otherwise the sorting wouldn't work either, 
right?). Could that extension be responsible for this odd behaviour?


I've put a small sample .db file in my dropbox:
https://www.dropbox.com/s/w4h6n3cyamt6yc0/library.db?dl=0




(This substitutes the collation NOCASE for the de_DE since I do not have a 
de_DE collation:

sqlite> select substr(name collate nocase, 1, 1), count(distinct id)
...>   FROM artists
...> group by substr(name collate nocase, 1, 1)
...> order by substr(name collate nocase, 1, 1);
QUERY PLAN
|--SCAN TABLE artists (~1048576 rows)
`--USE TEMP B-TREE FOR GROUP BY
addr  opcode         p1    p2    p3    p4             p5  comment
----  -------------  ----  ----  ----  -------------  --  -------------
0 Init   0 57000  Start at 57
1 Noop   1 4 000
2 SorterOpen 3 3 0 k(1,NOCASE)00
3 Integer0 5 000  r[5]=0; clear abort 
flag
4 Null   0 8 800  r[8..8]=NULL
5 Gosub  7 52000
6 OpenRead   0 3 0 2  00  root=3 iDb=0; artists
7 ColumnsUsed0 0 0 3  00
8 Explain8 0 0 SCAN TABLE artists (~1048576 rows)  00
9 Noop   0 0 000  Begin WHERE-loop0: 
artists
10Rewind 0 20000
11  Noop   0 0 000  Begin WHERE-core
12  Column 0 1 13   00  r[13]=artists.name
13  Function0  6 1310substr(3)  03  
r[10]=func(r[13..15])
14  Column 0 1 11   00  r[11]=artists.name
15  Column 0 0 12   00  r[12]=artists.id
16  MakeRecord 103 16   00  
r[16]=mkrec(r[10..12])
17  SorterInsert   3 16000  key=r[16]
18  Noop   0 0 000  End WHERE-core
19Next   0 11001
20Noop   0 0 000  End WHERE-loop0: 
artists
21OpenPseudo 4 16300  3 columns in r[16]
22SorterSort 3 56000  GROUP BY sort
23  SorterData 3 16400  r[16]=data
24  Column 4 0 900  r[9]=
25  Compare8 9 1 k(1,NOCASE)00  r[8] <-> r[9]
26  Jump   273127   00
27  Move   9 8 100  r[8]=r[9]
28  Gosub  6 44000  output one row
29  IfPos  5 56000  if r[5]>0 then 
r[5]-=0, goto 56; check abort flag
30  Gosub  7 52000  reset accumulator
31  Column 4 2 17   00  r[17]=
32  Found  2 36171  00  key=r[17]
33  MakeRecord 171 18   00  r[18]=mkrec(r[17])
34  IdxInsert  2 18171  10  key=r[18]
35  AggStep0 172 count(1)   01  accum=r[2] 
step(r[17])
36  If