[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Eduardo Morras
On Wed, 20 Apr 2016 22:21:04 +0200
Cecil Westerhof  wrote:

> 2016-04-20 18:27 GMT+02:00 R Smith :
> 
> > If you are happy to log in to the server, we could suggest a few
> > great tools - but please note that there is nothing wrong with
> > MySQL as far as web development goes. There is no need to change to
> > SQLite, what is MySQL doing wrong? A major consideration is that
> > MySQL offers user log-ins per DB (so you can give access to clients
> > per project for instance).
> >
> >
> > SQLite is awesome for local storage, but not every job requires
> > such a hammer.
> 
> 
> He told me he liked the idea of SQLite, but had to use MySQL,
> because he needed to give his client a web interface. So if there is
> a web interface (and there is), then he is not forced to use MySQL.
> And of course he decides, not me. ;-)

Firefox has an extension to work with sqlite databases. Install it, open a
sqlite3 db file and work. It uses sqlite3 3.9? (don't remember now).

> -- 
> Cecil Westerhof


---   ---
Eduardo Morras 


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 22:35 GMT+02:00 Cecil Westerhof :

> 2016-04-20 22:21 GMT+02:00 Scott Robison :
>
>> On Wed, Apr 20, 2016 at 2:00 PM, Cecil Westerhof 
>> wrote:
>>
>> > How stupid that I did not think about Google Drive. :'-( Here it is:
>> > https://drive.google.com/file/d/0BzW5q7uL-6z0SDdya2REaFNFUVE
>> >
>> > I am very curious.
>> >
>>
>> I downloaded this and opened it with sqlite3.exe shell version 3.11.0.
>>
>> D:\>\bin\sqlite3.exe checkUUID.sqlite
>> SQLite version 3.11.0 2016-02-15 17:29:24
>> Enter ".help" for usage hints.
>> sqlite> .schema
>> sqlite>
>>
>> It's a big enough file, but there is no schema present.
>>
>
> That is strange, because I get:
> sqlite3 checkUUID.sqlite
> SQLite version 3.8.7.1 2014-10-29 13:59:56
> Enter ".help" for usage hints.
> sqlite> .schema
> CREATE TABLE testUniqueUUID (
> UUID blob,
>
> PRIMARY KEY(UUID)
> CHECK(TYPEOF(UUID) = 'blob'   AND
>   LENGTH(UUID) = 16   AND
>   SUBSTR(HEX(UUID), 13, 1) == '4' AND
>   SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
> )
> );
>
>
> I am going to investigate.
>

Interesting: when I unzip the .gz file, I have the same problem. Something
went wrong. I will try to correct it.



-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 22:21 GMT+02:00 Scott Robison :

> On Wed, Apr 20, 2016 at 2:00 PM, Cecil Westerhof 
> wrote:
>
> > How stupid that I did not think about Google Drive. :'-( Here it is:
> > https://drive.google.com/file/d/0BzW5q7uL-6z0SDdya2REaFNFUVE
> >
> > I am very curious.
> >
>
> I downloaded this and opened it with sqlite3.exe shell version 3.11.0.
>
> D:\>\bin\sqlite3.exe checkUUID.sqlite
> SQLite version 3.11.0 2016-02-15 17:29:24
> Enter ".help" for usage hints.
> sqlite> .schema
> sqlite>
>
> It's a big enough file, but there is no schema present.
>

That is strange, because I get:
sqlite3 checkUUID.sqlite
SQLite version 3.8.7.1 2014-10-29 13:59:56
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE testUniqueUUID (
UUID blob,

PRIMARY KEY(UUID)
CHECK(TYPEOF(UUID) = 'blob'   AND
  LENGTH(UUID) = 16   AND
  SUBSTR(HEX(UUID), 13, 1) == '4' AND
  SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
)
);

I am going to investigate.


[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread Cecil Westerhof
2016-04-20 19:27 GMT+02:00 R Smith :

>
>
> On 2016/04/20 6:21 PM, Cecil Westerhof wrote:
>
>> 2016-04-20 18:07 GMT+02:00 R Smith :
>>
>>
>>> On 2016/04/20 6:04 PM, Cecil Westerhof wrote:
>>>
>>> I am thinking about creating a table where a certain field will be a
 description, or a key to the description. Would it be possible to have a
 constraint on this field that it is an id, that this id points to an
 existing record in a description table?


 https://www.sqlite.org/foreignkeys.html
>>>
>>
>> If I read that correctly then a field has to be NULL or point to a record
>> in the other table. I want it to point to a record if it is an INT and when
>> it is TEXT it is just the text. (And it should only be allowed to be an
>> INT or TEXT.)
>>
>
> That is a query function, not a Schema/field function. For instance, you
> could do this:
>
> Select A.ID, A.Name, COALESCE(B.Description, A.ForeignID)
> FROM myTable1 AS A
> LEFT JOIN myForeignTable AS B ON B.ID = A.ForeignID
>
> This way, if there exists a valid entry in the "myForeignTable" with ID =
> ForeignID in the primary table, then it displays that item's B.Description
> field. If not, then it displays the primary table's A.ForeignID field which
> may then be a description or whatever else you like it to be.
>
> You can also achieve this by adding user-defined functions via the API, or
> simply creating Virtual tables that do the same, but simply making the
> query do the work is best.
>  Perhaps even a view like this:
>
> CREATE VIEW fV1 AS SELECT A.ForeignID, COALESCE(B.Description,
> A.ForeignID)  AS Description
>   FROM myTable1 AS A
> LEFT JOIN myForeignTable AS B ON B.ID = A.ForeignID;
>
>
> Now whenever you do a query you can simply join in the view on ForeignID
> and refer its Description field which would contain the good value.
>
> While this all will work nicely, as a matter of clarity, form and
> efficiency I'd go with Simon's suggestion: use 2 columns - you will thank
> yourself later.
>

I think you and Simon are right. I only need to make a check that exactly
one of the two is NULL. (There should be a description, but not more than
one.) But that is not too hard.
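
For what it is worth, a minimal sketch of such a CHECK (table and column
names are invented here for illustration, not taken from the thread):

CREATE TABLE item (
    ID              INTEGER PRIMARY KEY,
    DescriptionID   INTEGER REFERENCES description(ID),
    DescriptionText TEXT,
    -- exactly one of the two description columns may be non-NULL
    CHECK ((DescriptionID IS NULL) + (DescriptionText IS NULL) = 1)
);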

-- 
Cecil Westerhof


[sqlite] sqlite3cpp, Yet Another C++ Wrapper of SQLite

2016-04-20 Thread Acer Yang
Dear SQLite users,

If you are interested in using SQLite databases in a modern C++ way,
sqlite3cpp ( https://github.com/yangacer/sqlite3cpp ) hopes to be your
favorite choice.

The sqlite3cpp provides following features:

1. Query with range for-loop and typed parameter binding
2. Create SQL scalar function with C++ lambda function
3. Create SQL aggregate with functor

Please check the above link for further details.

Best wishes,
Acer


[sqlite] SQLite training/meet-up opportunities in Eindhoven and Berlin

2016-04-20 Thread Cecil Westerhof
2016-04-20 19:21 GMT+02:00 Richard Hipp :

> There will be an opportunity to meet some of the developers of SQLite
> in Eindhoven on 2016-06-24 and in Berlin on 2016-06-27.  This can be a
> free public lecture on SQLite or related topics, or just an informal
> get-together over coffee.
>

I would be interested in the public lecture in Eindhoven.

-- 
Cecil Westerhof


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Cecil Westerhof
2016-04-20 18:27 GMT+02:00 R Smith :

> If you are happy to log in to the server, we could suggest a few great
> tools - but please note that there is nothing wrong with MySQL as far as
> web development goes. There is no need to change to SQLite, what is MySQL
> doing wrong? A major consideration is that MySQL offers user log-ins per DB
> (so you can give access to clients per project for instance).
>
>
> SQLite is awesome for local storage, but not every job requires such a
> hammer.


He told me he liked the idea of SQLite, but had to use MySQL, because he
needed to give his client a web interface. So if there is a web interface
(and there is), then he is not forced to use MySQL. And of course he
decides, not me. ;-)

-- 
Cecil Westerhof


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Cecil Westerhof
2016-04-20 18:16 GMT+02:00 Martin Kucej :

> I use phpLiteAdmin. It has served me well.
>
> https://www.phpliteadmin.org/


Thanks. I will share it with my colleague.

-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 18:11 GMT+02:00 R Smith :

>
>
> On 2016/04/20 5:56 PM, Cecil Westerhof wrote:
>
>> 2016-04-20 16:07 GMT+02:00 R Smith :
>>
>>
>>> On 2016/04/20 3:31 PM, Cecil Westerhof wrote:
>>>
>> It is still 411 MB. When I am home I will try it on another system as well
>> to see if the timing is different there. Is there an upload site you
>> recommend?
>>>
>>
> Dropbox usually works fine, so does M$'s OneDrive, or Google's
> GoogleDrive, or anything else you can see on here:
>

How stupid that I did not think about Google Drive. :'-( Here it is:
https://drive.google.com/file/d/0BzW5q7uL-6z0SDdya2REaFNFUVE

I am very curious.

-- 
Cecil Westerhof


[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread R Smith


On 2016/04/20 6:21 PM, Cecil Westerhof wrote:
> 2016-04-20 18:07 GMT+02:00 R Smith :
>
>>
>> On 2016/04/20 6:04 PM, Cecil Westerhof wrote:
>>
>>> I am thinking about creating a table where a certain field will be a
>>> description, or a key to the description. Would it be possible to have a
>>> constraint on this field that it is an id, that this id points to an
>>> existing record in a description table?
>>>
>>>
>> https://www.sqlite.org/foreignkeys.html
>
> If I read that correctly then a field has to be NULL or point to a record
> in the other table. I want it to point to a record if it is an INT and when
> it is TEXT it is just the text. (And it should only be allowed to be an INT
> or TEXT.)

That is a query function, not a Schema/field function. For instance, you 
could do this:

Select A.ID, A.Name, COALESCE(B.Description, A.ForeignID)
FROM myTable1 AS A
LEFT JOIN myForeignTable AS B ON B.ID = A.ForeignID

This way, if there exists a valid entry in the "myForeignTable" with ID 
= ForeignID in the primary table, then it displays that item's 
B.Description field. If not, then it displays the primary table's 
A.ForeignID field which may then be a description or whatever else you 
like it to be.

You can also achieve this by adding user-defined functions via the API, 
or simply creating Virtual tables that do the same, but simply making 
the query do the work is best.
  Perhaps even a view like this:

CREATE VIEW fV1 AS SELECT A.ForeignID, COALESCE(B.Description, 
A.ForeignID)  AS Description
   FROM myTable1 AS A
LEFT JOIN myForeignTable AS B ON B.ID = A.ForeignID;


Now whenever you do a query you can simply join in the view on ForeignID 
and refer its Description field which would contain the good value.

While this all will work nicely, as a matter of clarity, form and
efficiency I'd go with Simon's suggestion: use 2 columns - you will
thank yourself later.

Cheers,
Ryan



[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Keith Medcalf

For me, source_id = 2016-04-18 15:46:14 eba27d4d17a76884292667d570d542e580ee3e77
Windows 10 1511 Pro, i7-3632QM 2.4Ghz, 16 GB Ram, 1 TB 850 Pro SSD

with secure_delete=0

either

drop table testuniqueuuid;

or the sum of both

delete from testuniqueuuid;
drop table testuniqueuuid;

takes under 2 seconds total real time,


with secure_delete=1

either

drop table testuniqueuuid;

or the sum of both 

delete from testuniqueuuid;
drop table testuniqueuuid;

takes just over 10 seconds.

In the cases with separate delete from and drop, the majority of time is spent 
in delete from (where it presumably overwrites and frees the rows and index) 
and the drop takes just milliseconds.  With only the drop, the total time is 
more or less the same.  If the delete from and drop are run in a single 
transaction, the total time for the transaction is closer to the time for just 
the drop of the populated tables, but only by a few milliseconds.

Nothing seems out of sorts to me, at least with the database file.
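
For anyone who wants to repeat the comparison, a session along these lines
(assuming the checkUUID.sqlite file posted earlier in the thread) should
reproduce the timings; this is a sketch, not the exact commands used above:

sqlite3 checkUUID.sqlite
sqlite> .timer on
sqlite> PRAGMA secure_delete = 0;    -- or = 1, to compare both settings
sqlite> DELETE FROM testuniqueuuid;  -- optional: time the delete on its own
sqlite> DROP TABLE testuniqueuuid;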


> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
> bounces at mailinglists.sqlite.org] On Behalf Of Cecil Westerhof
> Sent: Wednesday, 20 April, 2016 17:55
> To: SQLite mailing list
> Subject: Re: [sqlite] Is it possible that dropping a big table takes very
> long
> 
> 2016-04-20 22:40 GMT+02:00 Cecil Westerhof :
> 
> > 2016-04-20 22:35 GMT+02:00 Cecil Westerhof :
> >
> >> 2016-04-20 22:21 GMT+02:00 Scott Robison :
> >>
> >>> On Wed, Apr 20, 2016 at 2:00 PM, Cecil Westerhof
>  >>> >
> >>> wrote:
> >>>
> >>> > How stupid that I did not think about Google Drive. :'-( Here it is:
> >>> > https://drive.google.com/file/d/0BzW5q7uL-6z0SDdya2REaFNFUVE
> >>> >
> >>> > I am very curious.
> >>> >
> >>>
> >>> I downloaded this and opened it with sqlite3.exe shell version 3.11.0.
> >>>
> >>> D:\>\bin\sqlite3.exe checkUUID.sqlite
> >>> SQLite version 3.11.0 2016-02-15 17:29:24
> >>> Enter ".help" for usage hints.
> >>> sqlite> .schema
> >>> sqlite>
> >>>
> >>> It's a big enough file, but there is no schema present.
> >>>
> >>
> >> That is strange, because I get:
> >> sqlite3 checkUUID.sqlite
> >> SQLite version 3.8.7.1 2014-10-29 13:59:56
> >> Enter ".help" for usage hints.
> >> sqlite> .schema
> >> CREATE TABLE testUniqueUUID (
> >> UUID blob,
> >>
> >> PRIMARY KEY(UUID)
> >> CHECK(TYPEOF(UUID) = 'blob'   AND
> >>   LENGTH(UUID) = 16   AND
> >>   SUBSTR(HEX(UUID), 13, 1) == '4' AND
> >>   SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
> >> )
> >> );
> >> 
> >>
> >> I am going to investigate.
> >>
> >
> > Interesting: when I unzip the .gz file, I have the same problem.
> > Something went wrong. I will try to correct it.
> >
> 
> This one I downloaded, unpacked and tried. It worked. So it should be
> correct now:
> https://drive.google.com/file/d/0BzW5q7uL-6z0UjFhRWZWSENnZW8/
> 
> Made with zip.
> 
> --
> Cecil Westerhof





[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread R Smith
There is ample PHP support, and even Python and the like (see the APSW 
suite by Roger Binns for instance).
PHP of course also supports MSSQL and Postgres and the like; there is no
need to be "stuck with MySQL".

As for phpMyAdmin - that's a different story - the strength of
phpMyAdmin is of course its browser-based interface and how it
integrates with the likes of CPanel - but there is a host of tools for
all kinds of DBs with even better functionality (such as MySQL
Workbench), though I don't know of one which does a similar browser
interface - so for the rest you'd need session access to the actual
server.  (Maybe someone else knows of such a tool for the other DBs).

If you are happy to log in to the server, we could suggest a few great 
tools - but please note that there is nothing wrong with MySQL as far as 
web development goes. There is no need to change to SQLite, what is 
MySQL doing wrong? A major consideration is that MySQL offers user 
log-ins per DB (so you can give access to clients per project for instance).


SQLite is awesome for local storage, but not every job requires such a 
hammer.



On 2016/04/20 6:12 PM, Cecil Westerhof wrote:
> I am blowing the horn for SQLite. ;-) A colleague of mine likes it in
> principle, but he provides web hosting with a database. He has to provide
> his clients with a web-based interface for maintaining their database.
> Because of this he thinks he is "condemned" to MySQL.
> Is this true, or would there be a plausible way to work with SQLite?
>



[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread Cecil Westerhof
2016-04-20 18:07 GMT+02:00 R Smith :

>
>
> On 2016/04/20 6:04 PM, Cecil Westerhof wrote:
>
>> I am thinking about creating a table where a certain field will be a
>> description, or a key to the description. Would it be possible to have a
>> constraint on this field that it is an id, that this id points to an
>> existing record in a description table?
>>
>>
> https://www.sqlite.org/foreignkeys.html


If I read that correctly then a field has to be NULL or point to a record
in the other table. I want it to point to a record if it is an INT and when
it is TEXT it is just the text. (And it should only be allowed to be an INT
or TEXT.)

 --
Cecil Westerhof


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Gert Van Assche
I'm using https://www.phpliteadmin.org/ for this. Great tool and cheap:
free!


2016-04-20 18:12 GMT+02:00 Cecil Westerhof :

> I am blowing the horn for SQLite. ;-) A colleague of mine likes it in
> principle, but he provides web hosting with a database. He has to provide
> his clients with a web-based interface for maintaining their database.
> Because of this he thinks he is "condemned" to MySQL.
> Is this true, or would there be a plausible way to work with SQLite?
>
> --
> Cecil Westerhof
>


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Cecil Westerhof
I am blowing the horn for SQLite. ;-) A colleague of mine likes it in
principle, but he provides web hosting with a database. He has to provide
his clients with a web-based interface for maintaining their database.
Because of this he thinks he is "condemned" to MySQL.
Is this true, or would there be a plausible way to work with SQLite?

-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread R Smith


On 2016/04/20 5:56 PM, Cecil Westerhof wrote:
> 2016-04-20 16:07 GMT+02:00 R Smith :
>
>>
>> On 2016/04/20 3:31 PM, Cecil Westerhof wrote:
>>
>> It is still 411 MB. When I am home I will try it on another system as well
>> to see if the timing is different there. Is there an upload site you
>> recommend?

Dropbox usually works fine, so does M$'s OneDrive, or Google's 
GoogleDrive, or anything else you can see on here:

http://lmgtfy.com/?q=file+upload+sites

>> By the way: it could be that Java is the culprit.
>>

It might well be, but without that file, we are all just guessing.


[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread R Smith


On 2016/04/20 6:04 PM, Cecil Westerhof wrote:
> I am thinking about creating a table where a certain field will be a
> description, or a key to the description. Would it be possible to have a
> constraint on this field that it is an id, that this id points to an
> existing record in a description table?
>

https://www.sqlite.org/foreignkeys.html


[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread Cecil Westerhof
I am thinking about creating a table where a certain field will be a
description, or a key to the description. Would it be possible to have a
constraint on this field that it is an id, that this id points to an
existing record in a description table?

-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 16:07 GMT+02:00 R Smith :

>
>
> On 2016/04/20 3:31 PM, Cecil Westerhof wrote:
>
>> 2016-04-20 12:35 GMT+02:00 R Smith :
>>
>>
>>> On 2016/04/20 10:50 AM, Cecil Westerhof wrote:
>>>
>>> The Devs do read the list, and often post, and they will be very
>>> interested in what you have discovered if it is not a system anomaly on
>>> your side. (Perhaps even if it is). Can you post the DB file somewhere
>>> and
>>> the steps to reproduce?
>>>
>> It is 512 MB, thus that is a bit difficult. But would a Java program to
>> generate it suffice?
>>
>> The steps to reproduce is to run the script I posted. That supposes you
>> run
>> Linux, but it should be easy to adapt.
>>
>
> The problem is, our running the same script (as have been posted) produces
> a DB that takes a mere few seconds to drop the table, as expected.
> Something in your setup is producing DB files that, with your setup, takes
> ages to drop and perhaps better to delete and then drop, but we cannot
> reproduce those results, so if you can post the DB somewhere, on some
> server or file upload site, the devs could examine it and conclude whether
> your system produces a file which somehow takes ages to drop with possible
> difference in time for drop and delete or just drop, - OR - that your file
> takes little time to drop here with delete or not.
>
> Once the result is known the devs will know that either:
> - The produced DB file is not the problem, nor is the SQLite code, your
> setup is just weird (perhaps then further examine/test with your compile
> options), OR
> - The produced DB does have a problem with dropping vs. delete-then-drop,
> so how can it happen?... and thus time to look at what can be done to make
> it better.
>
> But without the file, all the above is a guess.
>
> Did you gzip it? Is it still 512MB?
> Most file upload sites can handle half a gig easily.


It is still 411 MB. When I am home I will try it on another system as well to
see if the timing is different there. Is there an upload site you
recommend?

By the way: it could be that Java is the culprit.

-- 
Cecil Westerhof


[sqlite] Field FOREIGN KEY when it is a pointer

2016-04-20 Thread Simon Slavin

On 20 Apr 2016, at 5:21pm, Cecil Westerhof  wrote:

> I want it to point to a record if it is an INT and when
> it is TEXT it is just the text.

If you could do this in SQL it would be horrible.

Use two fields.  One for the INT, the other for the TEXT.  The one you're not 
using for that row is set to NULL.

Simon.


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread J Decker
If you upload a file to google drive and later try to replace it with
a different version google drive often corrupts it.  Always delete and
upload a new version.

On Wed, Apr 20, 2016 at 4:04 PM, jungle Boogie  
wrote:
> On 20 April 2016 at 14:55, Cecil Westerhof  wrote:
>> This one I downloaded, unpacked and tried. It worked. So it should be
>> correct now:
>> https://drive.google.com/file/d/0BzW5q7uL-6z0UjFhRWZWSENnZW8/
>
>
> Windows 8.1 4 gigs of RAM, dell latitude with i5 process from a few
> years ago, lots of tabs and apps open, here's some results.
>
> SQLite version 3.12.1 2016-04-08 15:09:49
>
> sqlite> drop table testuniqueuuid;
> Run Time: real 9.763 user 1.015625 sys 2.031250
>
> sqlite> delete from testuniqueuuid;
> Run Time: real 19.149 user 1.156250 sys 2.109375
>
> I didn't bother with secure deletes in my case.
>
> This whole thread sounds like a good use case for flame graphs:
> http://brendangregg.com/flamegraphs
>
>
> --
> ---
> inum: 883510009027723
> sip: jungleboogie at sip2sip.info
> xmpp: jungle-boogie at jit.si


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Dominique Devienne
On Wed, Apr 20, 2016 at 3:31 PM, Cecil Westerhof 
wrote:

> >>> Do the developers read this list, or should I post a bug report?
>

They do, and it's actually the preferred way to discuss
issues/bugs/performances, etc...

Dr. Hipp even wrote not long ago that "bugs" entered there would be
aggressively removed, if I recall correctly, unless discussed here first.

SQLite is not a classical OSS project. It is public domain, but has no
"outside" developers. No pull requests here. And the bug system is mostly
reserved for SQLite devs/authors, AFAIK. --DD


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Scott Robison
On Wed, Apr 20, 2016 at 3:55 PM, Cecil Westerhof 
wrote:

>
> This one I downloaded, unpacked and tried. It worked. So it should be
> correct now:
> https://drive.google.com/file/d/0BzW5q7uL-6z0UjFhRWZWSENnZW8/
>
> Made with zip.
>

Windows 10 1511 with Feb 2016 updates, x64
i7-6700K 4.00GHz
32 GiB RAM
3 TB 7200 RPM SATA HD
SQLite version 3.11.0 2016-02-15 17:29:24

secure delete on, drop only, Run Time: real 44.058 user 2.406250 sys
11.453125

secure delete on, delete all, Run Time: real 57.858 user 2.343750 sys
13.203125
secure delete on, drop after delete all, Run Time: real 0.125 user 0.00
sys 0.015625

secure delete off, drop only, Run Time: real 6.070 user 0.546875 sys
1.171875

secure delete off, delete all, Run Time: real 6.024 user 0.453125 sys
1.25
secure delete off, drop after delete all, Run Time: real 0.125 user
0.00 sys 0.015625

Nothing seems out of sorts here. Note that I'm running two idle VMs, a
remote desktop instance, Hyper-V manager, two idle command prompts, 9
chrome browser tabs open (each tab being an independent process), and an
idle bittorrent client. Given the CPU & RAM and the idle nature of most
processes, it's not a heavy load.

-- 
Scott Robison


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread R Smith


On 2016/04/20 3:31 PM, Cecil Westerhof wrote:
> 2016-04-20 12:35 GMT+02:00 R Smith :
>
>>
>> On 2016/04/20 10:50 AM, Cecil Westerhof wrote:
>>
>> The Devs do read the list, and often post, and they will be very
>> interested in what you have discovered if it is not a system anomaly on
>> your side. (Perhaps even if it is). Can you post the DB file somewhere and
>> the steps to reproduce?
>>
> It is 512 MB, thus that is a bit difficult. But would a Java program to
> generate it suffice?
>
> The steps to reproduce is to run the script I posted. That supposes you run
> Linux, but it should be easy to adapt.

The problem is, our running the same script (as has been posted)
produces a DB that takes a mere few seconds to drop the table, as
expected. Something in your setup is producing DB files that, with your
setup, take ages to drop and are perhaps better to delete and then drop,
but we cannot reproduce those results. So if you can post the DB
somewhere, on some server or file upload site, the devs could examine it
and conclude whether your system produces a file which somehow takes
ages to drop here (with a possible difference in time between
delete-then-drop and just drop) - OR - that your file takes little time
to drop here, with delete or not.

Once the result is known the devs will know that either:
- The produced DB file is not the problem, nor is the SQLite code, your 
setup is just weird (perhaps then further examine/test with your 
compile  options), OR
- The produced DB does have a problem with dropping vs. 
delete-then-drop, so how can it happen?... and thus time to look at what 
can be done to make it better.

But without the file, all the above is a guess.

Did you gzip it? Is it still 512MB?
Most file upload sites can handle half a gig easily.




[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread jungle Boogie
On 20 April 2016 at 14:55, Cecil Westerhof  wrote:
> This one I downloaded, unpacked and tried. It worked. So it should be
> correct now:
> https://drive.google.com/file/d/0BzW5q7uL-6z0UjFhRWZWSENnZW8/


Windows 8.1, 4 gigs of RAM, Dell Latitude with an i5 processor from a few
years ago, lots of tabs and apps open; here are some results.

SQLite version 3.12.1 2016-04-08 15:09:49

sqlite> drop table testuniqueuuid;
Run Time: real 9.763 user 1.015625 sys 2.031250

sqlite> delete from testuniqueuuid;
Run Time: real 19.149 user 1.156250 sys 2.109375

I didn't bother with secure deletes in my case.

This whole thread sounds like a good use case for flame graphs:
http://brendangregg.com/flamegraphs


-- 
---
inum: 883510009027723
sip: jungleboogie at sip2sip.info
xmpp: jungle-boogie at jit.si


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 12:35 GMT+02:00 R Smith :

>
>
> On 2016/04/20 10:50 AM, Cecil Westerhof wrote:
>
>> 2016-04-20 10:44 GMT+02:00 Dominique Devienne :
>>
>> On Wed, Apr 20, 2016 at 10:36 AM, Cecil Westerhof >> >
>>> wrote:
>>>
>>> I am baffled. Still DELETE before DROP is a lot more efficient. And it
 looks that it is not bothered when other programs are running (most of
 the time). I would think that a DROP should take the least time:

 I agree. That's weird. Needs investigating indeed.
>>>
>>> Do the developers read this list, or should I post a bug report?
>>
>
> The Devs do read the list, and often post, and they will be very
> interested in what you have discovered if it is not a system anomaly on
> your side. (Perhaps even if it is). Can you post the DB file somewhere and
> the steps to reproduce?
>

It is 512 MB, thus that is a bit difficult. But would a Java program to
generate it suffice?

The steps to reproduce are to run the script I posted. That supposes you run
Linux, but it should be easy to adapt.



> Also, there is no need to file a bug report for this since it is not a
> bug. A bug is typically considered to be a bug if the answer produced from
> a query is wrong. Since everything happens as expected, you are dealing
> with an efficiency regression at best - still of interest, but not a bug.
> (The distinction matters because bugs gets the highest priority for
> attention, and sometimes cause unscheduled releases to fix - something an
> optimization would never cause).
>

Well, I would expect a DROP to be more efficient than a DELETE and DROP. ;-)
But I agree that wrong results are much more important.

-- 
Cecil Westerhof


[sqlite] storing coordinate pairs efficiently?

2016-04-20 Thread Wolfgang Enzinger
OK, here are a few test results ... 350,000 points with no additional data
... timings with all data in cache and 17 records selected:

#1
CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx");
CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy");
file size 24,200 KB, query time ~6.3 ms

#2
CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx","gy");
CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy","gx");
file size 27,360 KB, query time ~3.3 ms

#3
CREATE VIRTUAL TABLE "pdata_Shape_Index" USING rtree("IndexedObjectId"
INTEGER,"MinGX" DOUBLE,"MaxGX" DOUBLE,"MinGY" DOUBLE,"MaxGY" DOUBLE);
file size 32,064 KB, query time ~1.3 ms

Now that is something to balance ... I'll probably go with the rtree here
although it may look a bit like a misuse. :-)
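
For reference, a sketch of the window query this rtree layout implies (same
search window as earlier in the thread), assuming each point is stored with
MinGX = MaxGX and MinGY = MaxGY:

SELECT p.*
  FROM pdata_Shape_Index AS idx
  JOIN pdata AS p ON p.ID = idx.IndexedObjectId
 WHERE idx.MaxGX >= 21 AND idx.MinGX <= 22
   AND idx.MaxGY >= 20 AND idx.MinGY <= 39;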

Wolfgang



[sqlite] Multiple in-memory database table query

2016-04-20 Thread Dominique Devienne
On Wed, Apr 20, 2016 at 2:49 PM, Jay Kreibich  wrote:

> > To attach several memory DBs, it's more like
> >
> > ATTACH DATABASE "file:mem1?mode=memory" as db1;
> > ATTACH DATABASE "file:mem2?mode=memory" as db2;
> > etc... --DD
>
> There is no requirement to use the URI style file type, you can just
> re-open the special value ":memory:".
>
> $ ./sqlite3
> SQLite version 3.8.4.2 2014-03-26 18:51:19
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> ATTACH DATABASE ":memory:" AS db2;
> sqlite> ATTACH DATABASE ":memory:" AS db3;
> sqlite> .databases
> seq  name  file
> ---  ----  ----------
> 0    main
> 2    db2
> 3    db3


[DD] Thanks. I mistakenly assumed ":memory:" was a "singleton" memory DB
for that particular connection.
[DD] Thanks to your example, and a little testing on my own, I now realize
each one is an independent
[DD] memory DB, and not just different "aliases" to just one memory DB.

[DD] And I was also confused by something we did a few weeks ago, which is
to have several named
[DD] memory DBs, and connecting several times (as independent connections)
to one of those mem DB's by name.
[DD] I believe this use case is only possible with URI style file type. All
in-process of course. Am I wrong on that too? --DD


[sqlite] storing coordinate pairs efficiently?

2016-04-20 Thread Simon Slavin

On 20 Apr 2016, at 2:16pm, Wolfgang Enzinger  wrote:

> CREATE INDEX "pdata_spatial_index" ON "pdata"("gx","gy");
> 
> [snip]
> 
> However, EQP also shows that it is only used with the value of gx, not gy:

It would be used if you searched for one x value and a range of y values.  If 
you do have a range in x then the index is not sorted in a convenient order to 
then search for a range in y.

> OTOH, it doesn't feel right to have no index at all on "gy", so this
> fields got its own index:
> 
> CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx");
> CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy");
> 
> With ANALYZE data available,

You must do ANALYZE after you have created the indexes, and after your table 
has a realistic number of realistic rows in it.  If you're doing that, you're 
fine.  And I think that creating two indexes is the right thing to do.

> SQLite is now smart enough to choose the most
> efficient search path, depending on the width / height of the search range:
> 
> SELECT * FROM pdata WHERE gx BETWEEN 21 AND 22 and gy BETWEEN 20
> AND 39
> --> SEARCH TABLE pdata USING INDEX pdata_spatial_index_x (gx>? AND gx<?)
> 
> SELECT * FROM pdata WHERE gx BETWEEN 20 AND 30 and gy BETWEEN 21
> AND 22
> --> SEARCH TABLE pdata USING INDEX pdata_spatial_index_y (gy>? AND gy<?)
> 
> So far this seems the most efficient solution to me. Any input on this?

Looks good to me.  But do you really need SELECT * or would SELECT ID be 
sufficient ?  I'll explain why.

SQLite has an optimization for searches.  If it decides to use a certain index 
for the search, and all the columns needed by the SELECT are present in that 
index, then it returns the values taken from the index entry.  It does not need 
to first find the index entry and then consult the table.

So ...
if the speed of these searches is too slow to suit your needs,
and you actually do need both coordinates,
and you don't mind sacrificing speed of data entry
and some disk space,
you may be better off with

CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx","gy");
CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy","gx");

(There is no need to add ID to the index since it is your primary key so it is 
always 'invisibly' entered at the end of every index definition.)

This would allow SQLite to pull gx, gy and ID off of whichever of those indexes 
it decided to use, without then having to look the ID up in your table.

However I note that you have "/* a couple more fields here */" in your schema 
so perhaps this would not be worth doing.  Or perhaps elegance and disk space 
are more important to you so you wouldn't want to do it for other reasons.

Simon.


[sqlite] sqlite3.OperationalError: database or disk is full

2016-04-20 Thread Clemens Ladisch
Yadav, Rohit wrote:
> I ... pointed SQLITE_TMPDIR, TMPDIR, TMP, TEMP to the newly created directory.
> ... but it looks like sqlite is still using /tmp directory to write temporary 
> files.

How exactly did you change the environment variables, and how did you
ensure that the SQLite library sees these changes?
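
For example (a sketch, not necessarily the original poster's setup), the
variable has to be exported in the environment of the process that uses
SQLite, before that process creates its first temporary file; "my_script.py"
below is a hypothetical placeholder:

$ mkdir -p /var/tmp/sqlite-tmp
$ export SQLITE_TMPDIR=/var/tmp/sqlite-tmp
$ python3 my_script.py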


Regards,
Clemens


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Scott Robison
On Wed, Apr 20, 2016 at 2:00 PM, Cecil Westerhof 
wrote:

> How stupid that I did not think about Google Drive. :'-( Here it is:
> https://drive.google.com/file/d/0BzW5q7uL-6z0SDdya2REaFNFUVE
>
> I am very curious.
>

I downloaded this and opened it with sqlite3.exe shell version 3.11.0.

D:\>\bin\sqlite3.exe checkUUID.sqlite
SQLite version 3.11.0 2016-02-15 17:29:24
Enter ".help" for usage hints.
sqlite> .schema
sqlite>

It's a big enough file, but there is no schema present.

-- 
Scott Robison


[sqlite] storing coordinate pairs efficiently?

2016-04-20 Thread Wolfgang Enzinger
Simon, Gunter,

thanks for your input. I also noticed that an index on (gx,gy) would help on
covering index searches, however as you both noticed as well there are more
fields in this table. But wait - probably for the first step (which points
are located in the region in question anyway?) it might well be enough to
know the coordinates and the object ID, the second step (user needs to know
more about a certain point in this subset of points) could probably be a
separate query in many situations. I'll have to investigate this in more depth.

Thanks again! Wolfgang



[sqlite] storing coordinate pairs efficiently?

2016-04-20 Thread Hick Gunter
The "Usability" of an index for a certain set of constraints ends with the 
first (in the order mentioned in the index) field with an inequality constraint.

However, an index on (gx,gy) is still faster for the query shown than one just
on (gx), because the value of the second field is read from the index instead
of having to retrieve it from the row.

If your application selects only a small subset of fields for the majority of
queries, it may even be useful to include these fields in the index. This is
called a "covering index". It costs more to maintain, but is faster for queries
that read only the fields present in (covered by) the index.
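
For reference, a sketch of such a covering index for the table in this thread
(ID is the INTEGER PRIMARY KEY, i.e. the rowid, so it rides along in every
index); EXPLAIN QUERY PLAN should then report a COVERING INDEX for a query
that touches only these columns:

CREATE INDEX "pdata_spatial_index" ON "pdata"("gx","gy");

SELECT ID, gx, gy FROM pdata
 WHERE gx BETWEEN 21 AND 22 AND gy BETWEEN 20 AND 39;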

-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org
[mailto:sqlite-users-bounces at mailinglists.sqlite.org] On Behalf Of
Wolfgang Enzinger
Sent: Wednesday, 20 April 2016 15:16
To: sqlite-users at mailinglists.sqlite.org
Subject: [sqlite] storing coordinate pairs efficiently?

Dear group,

since a couple of years now, I'm using SQLite for storing graphic objects like 
polygons and lines, with great success. Using the rtree extension makes spatial 
queries very efficient.

In my latest project, however, these objects are just single points
(coordinate pairs). As I understand it, rtree isn't the appropriate tool here
because it's for range queries. Of course, I could store every coordinate twice
in an rtree index, thus simulating ranges (with a width of zero). However, I
guess that would be a waste of space with probably no benefit.

So I tried a few things. The table definition looks like this:

CREATE TABLE "pdata"(
  "gx" DOUBLE NOT NULL, -- x coordinate
  "gy" DOUBLE NOT NULL, -- y coordinate
  "ID" INTEGER PRIMARY KEY NOT NULL
  /* a couple more fields here */
)

Then I established an index over gx and gy:

CREATE INDEX "pdata_spatial_index" ON "pdata"("gx","gy");

EQP shows that this index is used (not surprisingly) for every spatial query
like:

SELECT * FROM pdata WHERE gx BETWEEN 21 AND 22 AND gy BETWEEN 20 
AND 39

However, EQP also shows that it is only used with the value of gx, not gy:

SEARCH TABLE pdata USING INDEX pdata_spatial_index (gx>? AND gx<?)

OTOH, it doesn't feel right to have no index at all on "gy", so this
field got its own index:

CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx");
CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy");

With ANALYZE data available, SQLite is now smart enough to choose the most
efficient search path, depending on the width / height of the search range:

SELECT * FROM pdata WHERE gx BETWEEN 21 AND 22 and gy BETWEEN 20 AND 39
--> SEARCH TABLE pdata USING INDEX pdata_spatial_index_x (gx>? AND gx<?)

SELECT * FROM pdata WHERE gx BETWEEN 20 AND 30 and gy BETWEEN 21 AND 22
--> SEARCH TABLE pdata USING INDEX pdata_spatial_index_y (gy>? AND gy<?)

So far this seems the most efficient solution to me. Any input on this?


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: hick at scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




[sqlite] SQLite training/meet-up opportunities in Eindhoven and Berlin

2016-04-20 Thread Richard Hipp
There will be an opportunity to meet some of the developers of SQLite
in Eindhoven on 2016-06-24 and in Berlin on 2016-06-27.  This can be a
free public lecture on SQLite or related topics, or just an informal
get-together over coffee.

Private training sessions and/or briefings are also available to
companies for a modest fee.  Paid appearances can be more flexible
with regard to dates and locations.

I (Richard Hipp, the original creator of SQLite) will be at both
locations.  Dan Kennedy (the principal author of the FTS extensions,
ZIPVFS, RBU, sessions, etc.) will be at Berlin.

If you would like to set something up, please contact me directly at
the email below.

But please hurry:  We will be purchasing our non-refundable tickets soon.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] storing coordinate pairs efficiently?

2016-04-20 Thread Wolfgang Enzinger
Dear group,

since a couple of years now, I'm using SQLite for storing graphic objects
like polygons and lines, with great success. Using the rtree extension
makes spatial queries very efficient.

In my latest project, however, these objects are just single points
(coordinate pairs). As I understand it, rtree isn't the appropriate tool
here because it's for range queries. Of course, I could store every
coordinate twice in an rtree index, thus simulating ranges (with a width of
zero). However, I guess that would be a waste of space with probably no benefit.

So I tried a few things. The table definition looks like this:

CREATE TABLE "pdata"(
  "gx" DOUBLE NOT NULL, -- x coordinate
  "gy" DOUBLE NOT NULL, -- y coordinate
  "ID" INTEGER PRIMARY KEY NOT NULL
  /* a couple more fields here */
)

Then I established an index over gx and gy:

CREATE INDEX "pdata_spatial_index" ON "pdata"("gx","gy");

EQP shows that this index is used (not surprisingly) for every spatial query
like:

SELECT * FROM pdata WHERE gx BETWEEN 21 AND 22 AND gy BETWEEN 20
AND 39

However, EQP also shows that it is only used with the value of gx, not gy:

SEARCH TABLE pdata USING INDEX pdata_spatial_index (gx>? AND gx<?)

OTOH, it doesn't feel right to have no index at all on "gy", so this
field got its own index:

CREATE INDEX "pdata_spatial_index_x" ON "pdata"("gx");
CREATE INDEX "pdata_spatial_index_y" ON "pdata"("gy");

With ANALYZE data available, SQLite is now smart enough to choose the most
efficient search path, depending on the width / height of the search range:

SELECT * FROM pdata WHERE gx BETWEEN 21 AND 22 and gy BETWEEN 20 AND 39
--> SEARCH TABLE pdata USING INDEX pdata_spatial_index_x (gx>? AND gx<?)

SELECT * FROM pdata WHERE gx BETWEEN 20 AND 30 and gy BETWEEN 21 AND 22
--> SEARCH TABLE pdata USING INDEX pdata_spatial_index_y (gy>? AND gy<?)

So far this seems the most efficient solution to me. Any input on this?

[sqlite] Multiple in-memory database table query

2016-04-20 Thread R Smith


On 2016/04/19 10:48 PM, R Smith wrote:
>
>
> On 2016/04/19 8:34 PM, Jarred Ford wrote:
>> Is it possible to create multiple in-memory databases and be able to 
>> access tables with a single query between them?  For example, select 
>> * from db1.dbo.table1 db1 join db2.dbo.table1 db2 on db1.x = db2.x.
>
> No. An in-memory database has no descriptor or handle that can be 
> accessed externally. You can make several connections which can all 
> spawn in-memory databases, but you cannot attach a memory database to 
> another memory database in another connection. (you can of course 
> easily attach other file DBs to a memory DB, but that point is moot).

Correction: As others have pointed out, that answer should have read: 
"Yes - within the same connection". Re-reading your question, I'm not 
sure why I assumed you'd want to have cross-connection memory DBs, but I 
did. Good luck!



[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread R Smith


On 2016/04/20 10:50 AM, Cecil Westerhof wrote:
> 2016-04-20 10:44 GMT+02:00 Dominique Devienne :
>
>> On Wed, Apr 20, 2016 at 10:36 AM, Cecil Westerhof 
>> wrote:
>>
>>> I am baffled. Still DELETE before DROP is a lot more efficient. And it
>>> looks that it is not bothered when other programs are running (most of
>>> the time). I would think that a DROP should take the least time:
>>>
>> I agree. That's weird. Needs investigating indeed.
>>
> Do the developers read this list, or should I post a bug report?

The Devs do read the list, and often post, and they will be very 
interested in what you have discovered if it is not a system anomaly on 
your side. (Perhaps even if it is). Can you post the DB file somewhere 
and the steps to reproduce?

Also, there is no need to file a bug report for this since it is not a 
bug. A bug is typically considered to be a bug if the answer produced 
from a query is wrong. Since everything happens as expected, you are 
dealing with an efficiency regression at best - still of interest, but 
not a bug. (The distinction matters because bugs get the highest 
priority for attention, and sometimes cause unscheduled releases to fix 
- something an optimization would never cause).


Cheers,
Ryan



[sqlite] DB Version

2016-04-20 Thread Richard Hipp
On 4/20/16, Andy Allord  wrote:
> I'm new to sqlite and this is probably something simple I'm overlooking.
>
> I'm working with binaries for 3.9.x on a Mac using Vagrant as a development
> environment with PHP. PHP currently supports Sqlite 3.8.x databases and when
> the database is created it is being created as a 3.9.x. Is there any way I
> can create a 3.8.x in this situation to work with the PHP binaries?

The database file format does not change.  A database created using
3.9.x (or 3.12.x) will work fine with PHP running 3.8.x.

(Exception:  If your database uses some new feature (Ex: partial
indexes, indexes on expressions) that is not supported by the older
version, then there could be problems.  But as long as you avoid the
latest cutting edge features, you should be fine.)

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Avoid duplicate sets with join table

2016-04-20 Thread Barry
Would not the following snippet be useful to return the record ID if the
feature set already exists?

SELECT rec.id
FROM records rec
WHERE
1 = (SELECT SUM(id_f IN (21, 22, 23)) = 3 AND COUNT(*) = 3 FROM
records_features rf WHERE rf.id_r = rec.id)

You would have to have some logic in your calling language to deal with
things appropriately...

I don't know how this could be coded into a DB constraint since SQLite
lacks FOR EACH STATEMENT triggers, and to insert a superset of an existing
set you may at some point insert the existing set.

On 17 April 2016 at 17:32, James K. Lowden  wrote:

> On Sat, 16 Apr 2016 01:20:55 +0200
> Ketil Froyn  wrote:
>
> > I have two tables and a join table, in principle like this:
> >
> > CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT);
> > CREATE TABLE features (id INTEGER PRIMARY KEY, data TEXT UNIQUE);
> > CREATE TABLE records_features (id_r INTEGER, id_f INTEGER, ord
> > INTEGER);
>
> (As a style suggestion, consider not reduplicating plurals.  It's a
> one-man band, a 3-star general, and a 5-finger discount.  What you have
>  is a set of record_features.)
>
> > Later, if I come across a new record that has exactly features
> > "feature1", "feature2" and "feature3" (IDs 20,21 and 22), I don't want
> > to add a new record for this. So given a new set of feature IDs, how
> > can I best check if there's another record that has these exact
> > features before I insert it?
> >
> > To be clear, a record with features "feature1", "feature2", "feature4"
> > would be ok. So would "feature1", "feature2". Subsets are ok, but not
> > exact duplicates.
>
> This is an application of relational division: you want to know if two
> sets are equal.  Well, almost.  You really want to know if the "new" set
> is a subset of an existing one.
>
> You're also a little stuck for lack of syntax.  There are two sets in
> question: the extant one and the one to be inserted.  You can insert a
> set in SQLite,
>
> insert into T values (a), (b), (c);
>
> but because that set of rows (a, b, and c) doesn't have a name, you
> can't refer to them again in the same query.  So you have to dance a
> little jig, something like:
>
> begin transaction
> insert into record_features ...
> delete from record_features
> where id_r = new_id
> and exists (
> select 1 from record_features as rf
> where rf.id_r = record_features.id_r
> and rf.id_f = record_features.id_f
> group by rf.id_r
> having count(*) = (
> select count(*)
> from record_features where id_r = new_id
> )
> );
> commit transaction
>
> That puts the rows into the database -- where they can be examined --
> and deletes them if they turn out to be a duplicate.  You could also
> apply the same logic in an insert & update trigger, and probably should
> if the rule applies to the *data*, and not just the application's
> expectations.
>
> --jkl
>
>
>


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Martin Kucej
I use phpLiteAdmin. It has served me well.

https://www.phpliteadmin.org/

On 04/20/2016 11:12 AM, Cecil Westerhof wrote:
> I am blowing the horn for SQLite. ;-) A colleague of mine likes it in
> principle, but he provides web hosting with a database. He has to provide
> his clients with a web-based interface for maintaining their database.
> Because of this he thinks he is "condemned" to MySQL.
> Is this true, or would there be a plausible way to work with SQLite?
>



[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-20 10:44 GMT+02:00 Dominique Devienne :

> On Wed, Apr 20, 2016 at 10:36 AM, Cecil Westerhof 
> wrote:
>
> > I am baffled. Still DELETE before DROP is a lot more efficient. And it
> > looks that it is not bothered when other programs are running (most of
> > the time). I would think that a DROP should take the least time:
> >
>
> I agree. That's weird. Needs investigating indeed.
>

Do the developers read this list, or should I post a bug report?



> > Also: I understand that SECURE_DELETE=1 takes more time, but the
> > difference seems excessive. What could be happening here?
> >
>
> Secure delete is basically writing to the disk several times, with random
> data.
> From [1], it appears to be between 1x and 35x times. Don't know how SQLite
> does secure-delete. --DD
>

I will look into it later. Thanks.

-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Dominique Devienne
On Wed, Apr 20, 2016 at 10:36 AM, Cecil Westerhof 
wrote:

> I am baffled. Still DELETE before DROP is a lot more efficient. And it
> looks that it is not bothered when other programs are running (most of
> the time). I would think that a DROP should take the least time:
>

I agree. That's weird. Needs investigating indeed.


> Also: I understand that SECURE_DELETE=1 takes more time, but the
> difference seems excessive. What could be happening here?
>

Secure delete is basically writing to the disk several times, with random
data.
From [1], it appears to be between 1x and 35x times. Don't know how SQLite
does secure-delete. --DD

[1]
http://www.howtogeek.com/72130/learn-how-to-securely-delete-files-in-windows/
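
If I read the pragma documentation correctly, SQLite's variant is simpler: it
overwrites deleted content with zeros, and the setting is per connection (the
default comes from the SQLITE_SECURE_DELETE compile-time option). A quick way
to inspect and change it:

sqlite> PRAGMA secure_delete;        -- show the current setting (0 or 1)
sqlite> PRAGMA secure_delete = 0;    -- freed content is merely marked unused
sqlite> PRAGMA secure_delete = 1;    -- deleted content is overwritten with zeros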


[sqlite] Is it possible that dropping a big table takes very long

2016-04-20 Thread Cecil Westerhof
2016-04-19 15:27 GMT+02:00 Olivier Mascia :

> > On 19 Apr 2016, at 14:49, Cecil Westerhof wrote:
> >
> > SECURE_DELETE
>
> Are you aware of the net effect of SQLITE_SECURE_DELETE?
>
> https://www.sqlite.org/compile.html#secure_delete
>
> The documentation talks about a "small performance penalty", yet on very
> large tables, especially if overflowing by large the cache size, the amount
> of IO for overwriting deleted data with zeroes can become substantial.  Not
> to say this is the sole reason for your issue, but it should have an impact
> on it.
>
> You should run "PRAGMA secure_delete=0" before doing this bulk deletion
> and see how it impacts your timings.
>

I have some interesting metrics. Certainly the
PRAGMA SECURE_DELETE=0;
has significant influence. But that only makes the DELETE before the DROP
have a bigger influence.

I am not hindered by any deep knowledge of low-level database IO, so it
is possible that not everything I say makes sense to someone who has this
knowledge. Just educate me. :-)


First of all DELETE before DROP with SECURE_DELETE looks to be always a
very good option. Travelling home yesterday and doing other stuff it took
7 minutes, while without the DELETE it took 14 minutes. At the moment I
have not reproduced it again and sadly I do not have the data from that
run. :'-( If I manage to do it again, I will share the data.

As a programmer I am lazy, so I have made a Bash script to make life
easier. I have attached it with two logs. The first is when I ran the
script without doing something else, the second is while Iceweasel was
running. (Firefox on Debian.)

On a clean system:
SECURE_DELETE=1, DROP only          17  minutes
SECURE_DELETE=1, DELETE and DROP    14  minutes
SECURE_DELETE=0, DROP only           4½ minutes
SECURE_DELETE=0, DELETE and DROP     ½ minute

With Iceweasel running:
SECURE_DELETE=1, DROP only          45  minutes
SECURE_DELETE=1, DELETE and DROP    41  minutes
SECURE_DELETE=0, DROP only          11  minutes
SECURE_DELETE=0, DELETE and DROP     ½ minute


I am baffled. Still, DELETE before DROP is a lot more efficient. And it
looks like it is not bothered when other programs are running (most of
the time). I would think that a DROP should take the least time: it only
has to mark a complete table as free space. So why does it take
significantly more time? And why is the DELETE before DROP not
influenced by running other programs while DROP only is?

Also: I understand that SECURE_DELETE=1 takes more time, but the
difference seems excessive. What could be happening here?


SECURE_DELETE=1 certainly is something to warn people about (in the
current implementation). By the way: I agree with it being on by default.
-- 
Cecil Westerhof


[sqlite] Multiple in-memory database table query

2016-04-20 Thread Dominique Devienne
On Wed, Apr 20, 2016 at 10:22 AM, Dominique Devienne 
wrote:

> On Wed, Apr 20, 2016 at 4:40 AM, Jay Kreibich  wrote:
>>
>> On Apr 19, 2016, at 1:34 PM, Jarred Ford  wrote:
>>
>> > Is it possible to create multiple in-memory databases and be able to
>> access tables with a single query between them?
>> > For example, select * from db1.dbo.table1 db1 join db2.dbo.table1 db2
>> on db1.x = db2.x.
>>
>> Sure.  Like any other database, you?ll need to open the additional
>> databases using the ATTACH DATABASE command.  Just open an in-memory
>> database first, then run:
>>
>> ATTACH DATABASE ":memory:" AS db2
>>
>
> To attach several memory DBs, it's more like
>

Oops, forgot the file: protocol prefix. --DD


> ATTACH DATABASE "file:mem1?mode=memory" as db1;
> ATTACH DATABASE "file:mem2?mode=memory" as db2;
> etc... --DD
>


[sqlite] Multiple in-memory database table query

2016-04-20 Thread Dominique Devienne
On Wed, Apr 20, 2016 at 4:40 AM, Jay Kreibich  wrote:
>
> On Apr 19, 2016, at 1:34 PM, Jarred Ford  wrote:
>
> > Is it possible to create multiple in-memory databases and be able to
> access tables with a single query between them?
> > For example, select * from db1.dbo.table1 db1 join db2.dbo.table1 db2 on
> db1.x = db2.x.
>
> Sure.  Like any other database, you'll need to open the additional
> databases using the ATTACH DATABASE command.  Just open an in-memory
> database first, then run:
>
> ATTACH DATABASE ":memory:" AS db2
>

To attach several memory DBs, it's more like

ATTACH DATABASE "mem1?mode=memory" as db1;
ATTACH DATABASE "mem2?mode=memory" as db2;
etc... --DD


[sqlite] DB Version

2016-04-20 Thread Andy Allord
I'm new to sqlite and this is probably something simple I'm overlooking.

I'm working with binaries for 3.9.x on a Mac using Vagrant as a development
environment with PHP. PHP currently supports Sqlite 3.8.x databases and when the
database is created it is being created as a 3.9.x. Is there any way I can
create a 3.8.x in this situation to work with the PHP binaries?

-Andy


[sqlite] sqlite3.OperationalError: database or disk is full

2016-04-20 Thread Richard Hipp
On 4/19/16, Yadav, Rohit  wrote:
> Hello,
>
> I got this error "sqlite3.OperationalError: database or disk is full" and I
> researched about it and created another directory that has lot of space and
> pointed SQLITE_TMPDIR, TMPDIR, TMP, TEMP to the newly created directory. I
> also used sqlitecur.execute("PRAGMA temp_store = 1;") but it looks like
> sqlite is still using /tmp directory to write temporary files. Can anyone
> help me on how to solve this issue ?
>

The error might not be due to insufficient temp space.  Please
consider turning on the error and warning log
(https://www.sqlite.org/errlog.html) to see if that provides
additional information.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Multiple in-memory database table query

2016-04-20 Thread Jay Kreibich

On Apr 20, 2016, at 3:22 AM, Dominique Devienne  wrote:

> On Wed, Apr 20, 2016 at 4:40 AM, Jay Kreibich  wrote:
>> 
>> On Apr 19, 2016, at 1:34 PM, Jarred Ford  wrote:
>> 
>>> Is it possible to create multiple in-memory databases and be able to
>> access tables with a single query between them?
>>> For example, select * from db1.dbo.table1 db1 join db2.dbo.table1 db2 on
>> db1.x = db2.x.
>> 
>> Sure.  Like any other database, you?ll need to open the additional
>> databases using the ATTACH DATABASE command.  Just open an in-memory
>> database first, then run:
>> 
>> ATTACH DATABASE ":memory:" AS db2
>> 
> 
> To attach several memory DBs, it's more like
> 
> ATTACH DATABASE "mem1?mode=memory" as db1;
> ATTACH DATABASE "mem2?mode=memory" as db2;
> etc... --DD


There is no requirement to use the URI style file type, you can just re-open
the special value ":memory:".


$ ./sqlite3
SQLite version 3.8.4.2 2014-03-26 18:51:19
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> ATTACH DATABASE ":memory:" AS db2;
sqlite> ATTACH DATABASE ":memory:" AS db3;
sqlite> ATTACH DATABASE ":memory:" AS db4;
sqlite> ATTACH DATABASE ":memory:" AS db5;
sqlite> .databases
seq  name file  
---  ---  --
0main   
2db2
3db3
4db4
5db5 



--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson





[sqlite] Multiple in-memory database table query

2016-04-20 Thread Olaf Schmidt
Am 19.04.2016 um 20:34 schrieb Jarred Ford:
> Is it possible to create multiple in-memory databases and be able to access 
> tables with a single query between them?  For example, select * from 
> db1.dbo.table1 db1 join db2.dbo.table1 db2 on db1.x = db2.x.

Yes, that's possible over appropriate FileURIs:
https://www.sqlite.org/inmemorydb.html

Olaf