[sqlite] whish list for 2016

2016-01-05 Thread gunnar
Hi Scott,

It is not such a big deal for us, but since I saw the subject "whish 
list" I thought I'd add something that I was surprised isn't 
there in sqlite (since it is so complete in many other ways), but that I 
use on a regular basis.
I agree that SELECT * in production code is bad, but I do use it on the 
command line client now and then.
Regarding views, I don't know how sqlite handles them but I know that 
they can drastically slow down queries in MySQL for example.

gr.,
Gunnar


On 01/05/2016 03:56 PM, Scott Hess wrote:
> Maybe one option would be to add a layer to affect that explicitly, so that
> instead of the problem being that the existing rows can't be reordered
> without re-writing the entire table, the problem is to just change the
> schema to indicate where the columns should appear in "SELECT *" statements.
>
> Basically, you might have:
>   CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT);
> then say:
>   ALTER TABLE x ADD COLUMN k TEXT UNIQUE AFTER id;
> which converts the table to:
>   CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT, k TEXT UNIQUE AFTER id);
>
> This could be extended to delete columns by instead annotating the schema
> to indicate that a column is hidden or ignored.  Perhaps the annotations
> could be constrained to only the end of the statement (so that series of
> placements and deletions can be comprehended).  All that would be left
> would be renaming a column, which is already plausible (just rename the
> column in the schema).
>
> You could already implement this by renaming the table and replacing it
> with a view plus a trigger.  But specifying that would be somewhat involved
> and brittle.
>
> WRT handling things like view, trigger, and foreign key references, I
> think it would be reasonable to just expose that manually.  Replace the views
> and triggers in the same transaction.  Provide a means to replace column
> definitions so that foreign key references can be handled in a similar
> fashion.
>
> [Personally my position is that "SELECT * FROM x" is not appropriate for
> production code.  Otherwise you're implicitly relying on implementation
> details.]
>
> -scott
>
>
> On Tue, Jan 5, 2016 at 5:23 AM, gunnar  wrote:
>
>> Indeed those two possibilities! And perhaps also to MODIFY a column. We
>> alter tables all the time :)
>>
>> The possibility to specify at which place an added column should appear
>> when I do a "SELECT * ..." would be a nice extra, in MySQL it is done like:
>> ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | FIRST]
>>
>>
>>
>>
>> Gr.,
>> Gunnar
>>
>>
>>
>>
>> On 12/24/2015 05:14 PM, Simon Slavin wrote:
>>
>>> On 24 Dec 2015, at 3:12pm, John McKown 
>>> wrote:
>>>
>>>> I'm curious as to why. Doing so would, most likely, require rewriting the
>>>> entire table. If you want a SELECT * to get the columns in a particular
>>>> order, just create a VIEW with the columns in the order in which you want
>>>> them.
>>>>
>>> I think Gunnar just wants forms such as
>>>
>>> ALTER TABLE ... ADD COLUMN ...
>>> ALTER TABLE ... DROP COLUMN ...
>>>
>>> to aid with making small changes to the schema.  These are available in
>>> most SQL engines but the way SQLite3 is written makes it difficult or
>>> inefficient to implement them.
>>>
>>> I have a database where one table takes up more than 30 Gigabytes of
>>> space.  While developing the software I needed several times to change a
>>> column definition and since SQLite lacks these facilities I had to move 30
>>> Gig of data around every time I did it.  Annoying.  But it's not normally
>>> that much of a problem for me.
>>>
>>> Simon.
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users at mailinglists.sqlite.org
>>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>



[sqlite] The database disk image is malformed

2016-01-05 Thread Andrew Stewart
Hi,
I am getting the following error on a database.  The database 
was 78 GB in size when this started.  I am adding data to it via INSERT statements. 
 I believe this is the second time that I have seen it.  The process has been 
running for about two weeks, adding data constantly.  The database structure 
is simple.  Following is the CREATE TABLE statement:
CREATE TABLE dataStreamRecord (
    fwParameterID INTEGER NOT NULL,
    dateTime INTEGER NOT NULL,
    data INTEGER NOT NULL,
    UNIQUE (fwParameterID, dateTime)
);
I am wondering what could cause this.  The database file 
does have the compression attribute turned on, and the file's time/date is 
still getting updated.


Andrew Stewart
Software Designer

Argus Control Systems Ltd.
Tel: (604) 536-9100 ext. 108

astewart at arguscontrols.com

#101 - 18445 53rd Avenue | Surrey BC | CANADA | V3S 7A4
(604) 538-3531 | (800) 667-2090 | Fax (604) 538-4728
www.arguscontrols.com



[sqlite] whish list for 2016

2016-01-05 Thread gunnar
Indeed those two possibilities! And perhaps also to MODIFY a column. We 
alter tables all the time :)

The possibility to specify at which place an added column should appear 
when I do a "SELECT * ..." would be a nice extra, in MySQL it is done 
like: ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | 
FIRST]




Gr.,
Gunnar



On 12/24/2015 05:14 PM, Simon Slavin wrote:
> On 24 Dec 2015, at 3:12pm, John McKown  
> wrote:
>
>> I'm curious as to why. Doing so would, most likely, require rewriting the
>> entire table. If you want a SELECT * to get the columns in a particular
>> order, just create a VIEW with the columns in the order in which you want
>> them.
> I think Gunnar just wants forms such as
>
> ALTER TABLE ... ADD COLUMN ...
> ALTER TABLE ... DROP COLUMN ...
>
> to aid with making small changes to the schema.  These are available in most 
> SQL engines but the way SQLite3 is written makes it difficult or inefficient 
> to implement them.
>
> I have a database where one table takes up more than 30 Gigabytes of space.  
> While developing the software I needed several times to change a column 
> definition and since SQLite lacks these facilities I had to move 30 Gig of 
> data around every time I did it.  Annoying.  But it's not normally that much 
> of a problem for me.
>
> Simon.
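Until SQLite grows such syntax, the table rewrite Simon describes follows the copy-and-rename recipe from the ALTER TABLE documentation. Below is a minimal sketch in Python's sqlite3 module with made-up table and column names (here dropping column "b"); a real migration would also re-create any indexes, views, and triggers, and guard foreign keys as the docs describe:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(a INTEGER, b TEXT, c TEXT);
INSERT INTO t VALUES (1, 'drop me', 'keep');
""")

# The copy-and-rename recipe: build the desired shape, copy the
# surviving columns, then swap the new table in under the old name.
conn.executescript("""
BEGIN;
CREATE TABLE t_new(a INTEGER, c TEXT);
INSERT INTO t_new SELECT a, c FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
""")

cols = [r[1] for r in conn.execute("PRAGMA table_info(t)")]
# cols is now ['a', 'c'] and the data survived the rewrite
```

This is exactly the "move 30 Gig of data around" cost Simon complains about: the INSERT ... SELECT rewrites every row.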



[sqlite] 'order by' doesn't work with 'group_concat()'

2016-01-05 Thread Simon Davies
On 5 January 2016 at 00:14, Yuri  wrote:
> Please consider this example:
> [...]
> The 'order by' clause doesn't work, because if it did the result would have
> been:
> 1|x,y
> 2|x,y
>
> sqlite3-3.9.2
>
> Yuri

See http://www.sqlite.org/lang_aggfunc.html last sentence

"
group_concat(X)
group_concat(X,Y)
The group_concat() function returns a string which is the
concatenation of all non-NULL values of X. If parameter Y is present
then it is used as the separator between instances of X. A comma (",")
is used as the separator if Y is omitted. The order of the
concatenated elements is arbitrary.
"

Regards,
Simon
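The usual workaround is to impose the order in a subquery before the rows reach group_concat(). A minimal sketch via Python's sqlite3 module (table and column names invented); note this leans on the planner preserving the subquery's order, which current versions do but the documentation does not guarantee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE b(oid INTEGER NOT NULL, chr TEXT);
INSERT INTO b VALUES (1, 'y'), (1, 'x');
""")

# Sort in an inner query, then aggregate the already-ordered rows.
row = conn.execute(
    "SELECT group_concat(c) FROM"
    " (SELECT chr AS c FROM b WHERE oid = 1 ORDER BY chr)"
).fetchone()
# row[0] comes out as 'x,y' on current versions, but since the docs call
# the concatenation order arbitrary, treat this as an implementation detail.
```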


[sqlite] The database disk image is malformed

2016-01-05 Thread Stephen Chrzanowski
Alright, accepted.  NFS isn't exactly the proper terminology, however, RFS
(Remote File System) might be a better fit for that particular question.
Basically, as soon as another OS handles the files directly, you're looking
at a possible collision of requests against a SQLite database, which is why
I asked.

On Tue, Jan 5, 2016 at 11:09 AM, Keith Medcalf  wrote:

>
> > All of what Richard asked and said, plus, my question of where the file
> > stored in relation to the application? (Network attached storage?)
>
> Be careful with your terminology.  The Attachment technology is pretty
> much irrelevant.  It is the location of the Filesystem which is important.
>
> Network Attached Storage means that the "connection" between the local
> device adapter and the medium uses a "Network Cable" instead of, say, a
> SCSI/SAS cable, and that block transport uses some kind of network
> encapsulation (FC, IP, etc).  Presentation is as a non-shared block device
> -- it is indistinguishable from a locally attached device by the Operating
> System. (In theory any multipoint connection, such as parallel SCSI or GPIB
> is "Network Attached" -- that would include SAS as well I suppose, though
> it is only networked up to the demux).
>
> Network Filesystems are usually problematic and are an entirely different
> beast (despite the propensity of the marketroids to call everything
> connected to a network "Network Attached Storage" thus creating
> considerable confusion amongst those that do not know the difference).
>
>
>
>
>
>


[sqlite] The database disk image is malformed

2016-01-05 Thread Stephen Chrzanowski
All of what Richard asked and said, plus, my question of where the file
stored in relation to the application? (Network attached storage?)

On Tue, Jan 5, 2016 at 10:22 AM, Richard Hipp  wrote:

> On 1/5/16, Andrew Stewart  wrote:
> > Hi,
> > I am getting the following error on a database.  The
> > database is 78GB large when this started.  I am adding data to it via
> insert
> > statements.  I believe that this is the second time that I have seen
> this.
> > It has been running for about 2 weeks adding data to it constantly.  The
> > database structure is simple.  Following is the code for the create
> table:
> > CREATE TABLE dataStreamRecord (
> > fwParameterID INTEGER NOT NULL,
> > dateTime INTEGER NOT NULL,
> > data INTEGER NOT NULL,
> > UNIQUE (
> > fwParameterID,
> > dateTime
> > )
> > );
>
> FWIW, a more efficient schema might be:
>
> CREATE TABLE dataStreamRecord (
>fwParameterId INT,
>dateTime INT,
>data INT NOT NULL,
>PRIMARY KEY(fwParameterId,dateTime)
>) WITHOUT ROWID;
>
>
>
> > I am wondering what could cause this.  The database does
> > have the compression attribute turned on.  The database time/date is
> still
> > getting adjusted.
>
> "compression attribute"?  SQLite doesn't have any such thing.  Is this
> a feature of your filesystem?
>
> Have you run "PRAGMA quick_check" on the faulty database to gather
> more information about the problem?
>
> Have you enabled error logging as described at
> (https://www.sqlite.org/errlog.html)?
>
> --
> D. Richard Hipp
> drh at sqlite.org
>


[sqlite] The database disk image is malformed

2016-01-05 Thread Richard Hipp
On 1/5/16, Andrew Stewart  wrote:
> Hi,
> I am getting the following error on a database.  The
> database is 78GB large when this started.  I am adding data to it via insert
> statements.  I believe that this is the second time that I have seen this.
> It has been running for about 2 weeks adding data to it constantly.  The
> database structure is simple.  Following is the code for the create table:
> CREATE TABLE dataStreamRecord (
> fwParameterID INTEGER NOT NULL,
> dateTime INTEGER NOT NULL,
> data INTEGER NOT NULL,
> UNIQUE (
> fwParameterID,
> dateTime
> )
> );

FWIW, a more efficient schema might be:

CREATE TABLE dataStreamRecord (
   fwParameterId INT,
   dateTime INT,
   data INT NOT NULL,
   PRIMARY KEY(fwParameterId,dateTime)
   ) WITHOUT ROWID;
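A rough way to see why the WITHOUT ROWID form is more efficient: the UNIQUE version stores each row once in the table b-tree and the (fwParameterID, dateTime) pair again in a separate index b-tree, while the WITHOUT ROWID version keeps everything in a single b-tree keyed on the pair. A sketch with invented sample data, comparing database pages used:

```python
import sqlite3

def page_count(schema: str) -> int:
    """Load 10,000 sample rows under the given schema and report DB pages."""
    conn = sqlite3.connect(":memory:")
    conn.execute(schema)
    rows = [(i % 100, i, i * 2) for i in range(10_000)]  # (param, time, data)
    conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn.execute("PRAGMA page_count").fetchone()[0]

# Original shape: rowid table plus a separate UNIQUE index.
unique_pages = page_count(
    "CREATE TABLE t(p INTEGER NOT NULL, d INTEGER NOT NULL,"
    " v INTEGER NOT NULL, UNIQUE(p, d))")

# Suggested shape: one clustered b-tree, no redundant index.
clustered_pages = page_count(
    "CREATE TABLE t(p INT, d INT, v INT NOT NULL,"
    " PRIMARY KEY(p, d)) WITHOUT ROWID")
# clustered_pages comes out smaller than unique_pages
```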



> I am wondering what could cause this.  The database does
> have the compression attribute turned on.  The database time/date is still
> getting adjusted.

"compression attribute"?  SQLite doesn't have any such thing.  Is this
a feature of your filesystem?

Have you run "PRAGMA quick_check" on the faulty database to gather
more information about the problem?

Have you enabled error logging as described at
(https://www.sqlite.org/errlog.html)?
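For reference, both checks are one-liners from any binding; here via Python's sqlite3 module against a throwaway in-memory database (point the connection at the suspect file instead). quick_check skips some of integrity_check's work, so it is the cheaper first probe on a 78 GB file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # substitute the path to the suspect DB
conn.execute("CREATE TABLE t(x)")

# A healthy database returns a single 'ok' row from either pragma;
# anything else is a list of the problems found.
quick = conn.execute("PRAGMA quick_check").fetchall()
full = conn.execute("PRAGMA integrity_check").fetchall()
```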

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] whish list for 2016

2016-01-05 Thread Keith Medcalf


> On Tuesday, 5 January, 2016 08:33, gunnar  said:

> Regarding views, I don't know how sqlite handles them but I know that
> they can drastically slow down queries in MySQL for example.

While this is true in many cases, using a view rather than a select on the 
underlying table should not incur any overhead provided that the view is a 
"simple projection".  If the view generates a non-trivial projection such that 
those conditions must be maintained (materialized) in the query plan with 
respect to the view visitation, then the view will adversely affect 
performance.  Simple projections should be flattened by the optimizer as if the 
original query had been phrased directly against the underlying tables.

So, if there are no grouping, ordering, or outer join operations in the view, 
it will probably not affect the plan.  If there are, then the view will itself 
become a "subselect" which must be visited as if it were a table.  WHERE 
conditions on simple projections and equijoins can simply be pushed out into 
the overall constraints.  

EXPLAIN QUERY PLAN will make this apparent.
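A quick way to check the flattening described above, via Python's sqlite3 module (table, index, and view names are invented): the simple-projection view searches the base table's index directly, while the grouped view shows up as its own subquery step in the plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(a INTEGER, b TEXT);
CREATE INDEX t_a ON t(a);
CREATE VIEW v_flat  AS SELECT a, b FROM t;                         -- simple projection
CREATE VIEW v_group AS SELECT a, count(*) AS n FROM t GROUP BY a;  -- must materialize
""")

flat_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM v_flat WHERE a = 1").fetchall()
group_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM v_group WHERE a = 1").fetchall()
# flat_plan is a single index search on t; group_plan carries extra steps
# for evaluating the view. Exact wording varies between SQLite versions.
```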







[sqlite] 'order by' doesn't work with 'group_concat()'

2016-01-05 Thread Keith Medcalf

NB:  This format of the select only works coincidentally.  Presently, the 
SQLite query planner "believes" that the order by is significant to the result 
and preserves it at some expense.  This may not always be the case.  Thus using 
this method to obtain the desired result is relying on an "implementation 
detail".

select a.id,
   (select group_concat(c)
  from (select chr as c
  from b
 where oid = a.id
  order by chr)
   )
  from a;
--EQP-- 0,0,0,SCAN TABLE a
--EQP-- 0,0,0,EXECUTE CORRELATED SCALAR SUBQUERY 1
--EQP-- 2,0,0,SCAN TABLE b
--EQP-- 2,0,0,USE TEMP B-TREE FOR ORDER BY
--EQP-- 1,0,0,SCAN SUBQUERY 2
1|x,y
2|x,y

Using a covering index, however, is more likely to always work (as long as the 
solution method remains recursive row-wise descent):

create index ob on b (oid, chr);

select a.id,
   (select group_concat(chr)
  from b
 where oid = a.id
   )
  from a;
--EQP-- 0,0,0,SCAN TABLE a
--EQP-- 0,0,0,EXECUTE CORRELATED SCALAR SUBQUERY 1
--EQP-- 1,0,0,SEARCH TABLE b USING COVERING INDEX ob (oid=?)
1|x,y
2|x,y






[sqlite] 'order by' doesn't work with 'group_concat()'

2016-01-05 Thread Keith Medcalf

Your example is fundamentally flawed because you are not changing the order of 
the data fed *into* group_concat.  Although the documentation (which I am sure 
someone will point you to) indicates that group_concat ordering is arbitrary, 
it is not (it cannot be).  

It concatenates the data fed into it, in the order that it is presented.  This 
may appear "arbitrary" to some, but it is not.  Any sufficiently advanced 
technology appears magical to the primitive observer.  It is not possible for 
the order to be arbitrary when the data is visited in row-wise fashion -- it is 
only possible to be of limited understanding.

Consider your subselect:

> (select group_concat(chr)
>  from b
>  where oid = a.id
>  group by oid
>  order by chr
> )

The "group by" is meaningless, since the correlated subquery already only 
projects the rows in a single group (determined by the where clause).
The "order by" is meaningless, since it applies to the order of the projected 
results, which in this case is only one row (that is, it is applied AFTER 
group_concat, not before).  Your statement, after the meaningless cruft is 
removed, is equivalent to:

select a.id,
   (select group_concat(chr)
  from b
 where oid = a.id
   )
  from a;

which should make the results you are seeing more understandable.  The table b 
is a b-tree indexed by the rowid.  The table is scanned, and each of the rows 
that passes the "where" clause is fed to the group_concat function.  The 
results are ordered by the rowid which is the default visitation order of the 
rows in the table b.  Barring programmatic diddling with the rowid, this is the 
order in which the data was inserted into table b.

If you want to change the visitation order, you can either create an index with 
the correct visitation order (index ob, below) or you can "feed the data" into 
group concat in the order you wish it to be concatenated.  There is no magic 
here.

>sqlite
SQLite version 3.10.0 2016-01-04 23:43:47
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .read gc.sql
create table a(id integer not null, primary key(id));
create table b(oid integer not null, chr char null);
insert into a values(1);
insert into a values(2);
insert into b values(1,'y');
insert into b values(1,'x');
insert into b values(2,'x');
insert into b values(2,'y');

select a.id,
   (select group_concat(c)
  from (select chr as c
  from b
 where oid = a.id
  order by chr)
   )
  from a;
1|x,y
2|x,y

create index ob on b (oid, chr);

select a.id,
   (select group_concat(chr)
  from b
 where oid = a.id
   )
  from a;
1|x,y
2|x,y


> -----Original Message-----
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Yuri
> Sent: Monday, 4 January, 2016 17:14
> To: General Discussion of SQLite Database
> Subject: [sqlite] 'order by' doesn't work with 'group_concat()'
> 
> Please consider this example:
> 
> ---begin---
> #!/bin/sh
> 
> DB=sq.sqlite
> rm -f $DB
> 
> sql() {
>echo "$1" | sqlite3 $DB
> }
> 
> sql "create table a(id integer not null, primary key(id));"
> sql "create table b(oid integer not null, chr char null);"
> sql "insert into a values(1);"
> sql "insert into a values(2);"
> sql "insert into b values(1,'y');"
> sql "insert into b values(1,'x');"
> sql "insert into b values(2,'x');"
> sql "insert into b values(2,'y');"
> 
> sql "select
> a.id,
> (select group_concat(chr)
>  from b
>  where oid = a.id
>  group by oid
>  order by chr
> )
>   from
> a;"
> ---end---
> 
> It returns this dataset:
> 1|y,x
> 2|x,y
> 
> The 'order by' clause doesn't work, because if it did the result would
> have been:
> 1|x,y
> 2|x,y
> 
> sqlite3-3.9.2
> 
> Yuri





[sqlite] whish list for 2016

2016-01-05 Thread Scott Hess
Maybe one option would be to add a layer to affect that explicitly, so that
instead of the problem being that the existing rows can't be reordered
without re-writing the entire table, the problem is to just change the
schema to indicate where the columns should appear in "SELECT *" statements.

Basically, you might have:
  CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT);
then say:
  ALTER TABLE x ADD COLUMN k TEXT UNIQUE AFTER id;
which converts the table to:
  CREATE TABLE x (id INTEGER PRIMARY KEY, v TEXT, k TEXT UNIQUE AFTER id);

This could be extended to delete columns by instead annotating the schema
to indicate that a column is hidden or ignored.  Perhaps the annotations
could be constrained to only the end of the statement (so that series of
placements and deletions can be comprehended).  All that would be left
would be renaming a column, which is already plausible (just rename the
column in the schema).

You could already implement this by renaming the table and replacing it
with a view plus a trigger.  But specifying that would be somewhat involved
and brittle.
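The rename-plus-view-plus-trigger workaround can be sketched as follows (all names invented; a complete version would also need INSTEAD OF UPDATE and DELETE triggers, which is exactly the "involved and brittle" part):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Real storage, with the new column k appended at the physical end.
CREATE TABLE x_real (id INTEGER PRIMARY KEY, v TEXT, k TEXT UNIQUE);

-- Present the columns in the desired order "id, k, v" under the old name.
CREATE VIEW x AS SELECT id, k, v FROM x_real;

-- Forward writes on the view to the real table.
CREATE TRIGGER x_insert INSTEAD OF INSERT ON x
BEGIN
    INSERT INTO x_real(id, v, k) VALUES (new.id, new.v, new.k);
END;
""")

conn.execute("INSERT INTO x VALUES (1, 'key1', 'val1')")
row = conn.execute("SELECT * FROM x").fetchone()
# SELECT * now yields the reordered columns: (1, 'key1', 'val1')
```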

WRT handling things like view and trigger and foreign key references, I
think that would be reasonable to just expose manually.   Replace the views
and triggers in the same transaction.  Provide a means to replace column
definitions so that foreign key references can be handled in a similar
fashion.

[Personally my position is that "SELECT * FROM x" is not appropriate for
production code.  Otherwise you're implicitly relying on implementation
details.]

-scott


On Tue, Jan 5, 2016 at 5:23 AM, gunnar  wrote:

> Indeed those two possibilities! And perhaps also to MODIFY a column. We
> alter tables all the time :)
>
> The possibility to specify at which place an added column should appear
> when I do a "SELECT * ..." would be a nice extra, in MySQL it is done like:
> ALTER TABLE table_name ADD COLUMN col_name1 [_AFTER_ col_name2 | FIRST]
>
>
>
>
> Gr.,
> Gunnar
>
>
>
>
> On 12/24/2015 05:14 PM, Simon Slavin wrote:
>
>> On 24 Dec 2015, at 3:12pm, John McKown 
>> wrote:
>>
>>> I'm curious as to why. Doing so would, most likely, require rewriting the
>>> entire table. If you want a SELECT * to get the columns in a particular
>>> order, just create a VIEW with the columns in the order in which you want
>>> them.
>>>
>> I think Gunnar just wants forms such as
>>
>> ALTER TABLE ... ADD COLUMN ...
>> ALTER TABLE ... DROP COLUMN ...
>>
>> to aid with making small changes to the schema.  These are available in
>> most SQL engines but the way SQLite3 is written makes it difficult or
>> inefficient to implement them.
>>
>> I have a database where one table takes up more than 30 Gigabytes of
>> space.  While developing the software I needed several times to change a
>> column definition and since SQLite lacks these facilities I had to move 30
>> Gig of data around every time I did it.  Annoying.  But it's not normally
>> that much of a problem for me.
>>
>> Simon.
>>
>
>


[sqlite] whish list for 2016

2016-01-05 Thread R Smith


On 2016/01/04 6:11 PM, Bernardo Sulzbach wrote:
> On Mon, Jan 4, 2016 at 1:50 PM, Stephen Chrzanowski  
> wrote:
>> I did the test to compare SSD vs 5200rpm and 7200rpm drives.  I should
>> throw this at my 7200rpm hybrid at this app and see what happens.
>>
> Could you please share any performance data you still have? Thank you.
>

I decided to cook up a test for this since I happen to have one dev 
machine that is rather new with 3 different storage media in.

The following script was used to produce a severely fragmented database 
file around 20GB in size (while only about 7GB of actual data remained 
inside). It takes about 20 mins to run on a normal drive:


DROP TABLE IF EXISTS vacTest;

DROP TABLE IF EXISTS vacCopy;

CREATE TABLE vacTest(
   i INTEGER NOT NULL PRIMARY KEY,
   a REAL,
   b NUMERIC,
   c TEXT
);

CREATE TABLE vacCopy(
   i INTEGER NOT NULL PRIMARY KEY,
   a REAL,
   b NUMERIC,
   c TEXT
);

WITH VI(x, r) AS (
 SELECT 0, 245
   UNION ALL
 SELECT x+1, ((random()/9223372036854775806.0)*2)+245 FROM 
VI WHERE x<1
)
INSERT INTO vacTest (i,a,b,c) SELECT x, r, datetime(r), printf('Some 
Text aimed at consuming a few bytes S%6d',substr(r,2,6))
   FROM VI;

INSERT INTO vacCopy SELECT * FROM vacTest;

DELETE FROM vacTest
  WHERE (i / 5) = (i / 5.0);

DELETE FROM vacCopy
  WHERE (i / 6) = (i / 6.0);

WITH VI(x) AS (
 SELECT 0
   UNION ALL
 SELECT x+1 FROM VI WHERE x<1000
)
INSERT INTO vacTest (a,b,c) SELECT 1.0, '2000-01-01 00:00:00', 
printf('Some Text aimed at consuming a few bytes S%6d',x)
   FROM VI;

WITH VI(x) AS (
 SELECT 0
   UNION ALL
 SELECT x+1 FROM VI WHERE x<1000
)
INSERT INTO vacCopy (a,b,c) SELECT 1.0, '2000-01-01 00:00:00', 
printf('Some Text aimed at consuming a few bytes S%6d',x)
   FROM VI;

DELETE FROM vacTest
  WHERE instr(i,'4') > 0;

DELETE FROM vacCopy
  WHERE instr(i,'5') > 0;
=

It was run inside sqlite3.exe using 3.9.2 (the standard pre-compiled CLI 
as downloadable from the downloads page on sqlite.org) with 4K page size 
and WAL mode. OS is Windows 10 64 bit.
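For anyone reproducing the setup, the two settings apply per-database and must be issued before the first write (page_size is fixed once content exists; WAL needs an on-disk file, since in-memory databases fall back to journal_mode=memory). A sketch using a throwaway temp file:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "vactest.db")
conn = sqlite3.connect(path)

conn.execute("PRAGMA page_size = 4096")            # must precede first write
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]

conn.execute("CREATE TABLE t(x)")                  # page size is fixed from here on
size = conn.execute("PRAGMA page_size").fetchone()[0]
```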

The file produced was then copied to 4 different drives as follows:

Test 1 - F: - External Western Digital 3TB 7500 rpm drive via USB3.0, 
Max Read - Write rated @ 120MB/s
Test 2 - D: - Internal Western Digital Velociraptor 600GB 10,000 rpm via 
SATA 6GB/s rated @ 150 MB/s
Test 3 - E: - Internal Western Digital Black 2TB 7500 rpm (WD2002FAEX) 
via SATA 6GB/s rated @ 164 MB/s
Test 4 - C: - Internal Kingston HyperX Predator 480GB SSD via m.2 PCIe 
interface rated @ 1400MB/s (read) - 1000 MB/s (write)

Following is the output from the VACUUM command (with timers enabled) 
using the sqlite3 CLI and connecting to the same copied file on the 4 
drives:

F:\[SCIENCE]>sqlite3 vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 1309.109 user 203.921875 sys 114.984375
sqlite> .q

F:\[SCIENCE]>sqlite3 D:\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 824.223 user 140.953125 sys 87.921875
sqlite> .q

F:\[SCIENCE]>sqlite3 E:\ADrive\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 788.750 user 131.921875 sys 81.593750
sqlite> .q

F:\[SCIENCE]>sqlite3 c:\AMD\vacTest.sqlitedb
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> .timer ON
sqlite> VACUUM;
Run Time: real 214.056 user 92.843750 sys 56.640625
sqlite> .q


No surprises really, the VACUUM operation is clearly IO bound.

In all four cases the processor (i7 5930K Hexacore 12-thread @ 3.5GHz) 
Never breaks the 4% total usage, and no single thread rose above 40% 
until right before the VACUUM operation concludes, at which point the 
one thread spikes up to 95% for around 2-6 seconds seemingly in step 
with each drive's general performance. (This is my perception, I had no 
means of measuring it exactly).

File Size goes down from ~20GB to around ~6.8GB, so the Vacuuming has a 
purpose in this case, but it would be hard to find this level of 
fragmentation in a normal-use real-World DB.

In all tests, the TEMP folder was set to the D drive which is the second 
slowest drive, it showed a lot of usage. I started out using the usual 
6GB Ramdrive for TEMP, but it was too small, so redirected it to D.  I 
tried the last test again with the TEMP set to the fast SSD, figures 
were only about 16% better (214 vs. 181), but I imagine using a ramdrive 
or at least a second SSD would show a significant improvement.
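Redirecting TEMP per machine is an environment concern (SQLITE_TMPDIR on Unix, the Windows temp path on Windows), but a connection can also opt out of temp files entirely with PRAGMA temp_store. A sketch; note that for a 20 GB VACUUM, in-memory temp storage is usually impractical, so this only illustrates the knob:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 0 = compile-time default, 1 = file, 2 = memory. This governs where
# temporary b-trees (e.g. for VACUUM, ORDER BY spills) are kept.
conn.execute("PRAGMA temp_store = MEMORY")
setting = conn.execute("PRAGMA temp_store").fetchone()[0]
# setting is now 2 (memory)
```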

I initially added tests for "SELECT * FROM vacTest WHERE xxx;" but they 
seemed to be near instantaneous before and after the vacuum so I dropped 
it - I realized afterward it is probably thanks to the Primary Key I 
used in the WHERE clause, and may have