Re: [sqlite] Unable to store 500MB size of row data even after define macro SQLITE_MAX_LENGTH

2017-12-08 Thread Dianne Dunn
How do I get off this email list? What is it for?

Sent from my iPad

> On Dec 8, 2017, at 7:30 AM, Kees Nuyt  wrote:
> 
> On Thu, 7 Dec 2017 23:59:02 -0700 (MST), Durgesh
>  wrote:
> 
>> getting error "QSqlError("18", "Unable to bind parameters", "string or blob
>> too big")" while inserting a row data of size 500MB.
>> 
>> However insertion of 450 MB is successful.
>> 
>> Tried define the macro SQLITE_MAX_LENGTH with value of
>> SQLITE_MAX_LENGTH=20 , larger than 500MB.
>> 
>> pls suggest a way insertion can be done for more than 450 MB of data.
>> 
>> I am using Qt SQL , for interfacing with sqlite db.
> 
> What is the database's page size?
> How big is your page cache? 
> Is cache spill enabled?
> 
> See: 
> PRAGMA schema.cache_size;
> https://sqlite.org/pragma.html#pragma_cache_size
> PRAGMA cache_spill; 
> https://sqlite.org/pragma.html#pragma_cache_spill
> PRAGMA schema.page_size; 
> https://sqlite.org/pragma.html#pragma_page_size
> PRAGMA schema.cache_size;
> 
> Other things to look at:
> PRAGMA schema.journal_size_limit;
> https://sqlite.org/pragma.html#pragma_journal_size_limit
> PRAGMA soft_heap_limit;
> https://sqlite.org/pragma.html#pragma_soft_heap_limit
> 
> HTH
> 
> -- 
> Regards,
> Kees Nuyt

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] random value get re-generated too often in SQLite

2017-12-08 Thread Scott Doctor


Is it possible that the first call to random is cached and the 
cached value is being returned in subsequent calls?


-
Scott Doctor
sc...@scottdoctor.com
-

Re: [sqlite] random value get re-generated too often in SQLite

2017-12-08 Thread John McKown
On Fri, Dec 8, 2017 at 12:54 PM, John Mount  wrote:

> I am seeing an issue where a random value in a sub-query gets re-generated
> each time the value is used in later queries.  Below is an example query:
>
> SELECT r AS r1, r AS r2 FROM ( SELECT random() AS r FROM ( SELECT * from (
> VALUES(1),(2) ) ) a ) b
>
> One would expect r1 == r2.
>
> ---
> John Mount
>

Interesting. I duplicated your example using PostgreSQL and it does what
you said you expected: R1==R2. In SQLite3, the .eqp full shows:

sqlite> .eqp full
sqlite> SELECT r AS r1, r AS r2 FROM ( SELECT random() AS r FROM ( SELECT *
from ( VALUES(1),(2) ) ) a ) b;
--EQP-- 0,0,0,SCAN SUBQUERY 1
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 0 000  Start at 0
1 InitCoroutine  1 7 200
sqlite_sq_5619D81F9BF8
2 Integer1 2 000  r[2]=1
3 Yield  1 0 000
4 Integer2 2 000  r[2]=2
5 Yield  1 0 000
6 EndCoroutine   1 0 000
7 InitCoroutine  1 0 200
8   Yield  1 13000  next row of
"sqlite_sq_5619D81F9BF8"
9   Function0  0 0 4 random(0)  00  r[4]=func(r[0])
10  Function0  0 0 5 random(0)  00  r[5]=func(r[0])
11  ResultRow  4 2 000  output=r[4..5]
12Goto   0 8 000
13Halt   0 0 000
4548137244590923354  8821858240296964415
761559492082550893  2723588653195689097
I think this is being done due to the "subquery flattening" as described on
http://sqlite.org/optoverview.html, which says:

To overcome this problem, SQLite attempts to flatten subqueries in the FROM
clause of a SELECT. This involves inserting the FROM clause of the subquery
into the FROM clause of the outer query and rewriting expressions in the
outer query that refer to the result set of the subquery. For example:

So your "SELECT r AS r1, r AS r2 FROM ( SELECT random() AS r FROM ( SELECT
* from ( VALUES(1),(2) ) ) a ) b;" ​be​comes

"SELECT random() AS r1, random() AS r2 FROM ( SELECT * from (values(1),(2))
a) b;"

I think the above is borne out if you put the word DISTINCT in front of the
random() in the original example. This forces SQLite to _not_ use subquery
flattening. Example:

sqlite> SELECT  r AS r1, r AS r2 FROM ( SELECT distinct random() AS r FROM
( SELECT * from ( VALUES(1),(2) ) c ) a ) b;
--EQP-- 1,0,0,SCAN SUBQUERY 2 AS c
--EQP-- 1,0,0,USE TEMP B-TREE FOR DISTINCT
--EQP-- 0,0,0,SCAN SUBQUERY 1 AS b
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 0 000  Start at 0
1 InitCoroutine  1 18200
sqlite_sq_55E270A58EA8
2 InitCoroutine  2 8 300
sqlite_sq_55E270A58AA8
3 Integer1 3 000  r[3]=1
4 Yield  2 0 000
5 Integer2 3 000  r[3]=2
6 Yield  2 0 000
7 EndCoroutine   2 0 000
8 OpenEphemeral  3 0 0 k(1,B) 08  nColumn=0
9 InitCoroutine  2 0 300
10  Yield  2 17000  next row of
"sqlite_sq_55E270A58AA8"
11  Function0  0 0 5 random(0)  00  r[5]=func(r[0])
12  Found  3 165 1  00  key=r[5]
13  MakeRecord 5 1 600  r[6]=mkrec(r[5])
14  IdxInsert  3 6 000  key=r[6]
15  Yield  1 0 000
16Goto   0 10000
17EndCoroutine   1 0 000
18InitCoroutine  1 0 200
19  Yield  1 24000  next row of
"sqlite_sq_55E270A58EA8"
20  Copy   5 7 000  r[7]=r[5];
sqlite_sq_55E270A58EA8.r
21  Copy   7 8 000  r[8]=r[7]
22  ResultRow  7 2 000  output=r[7..8]
23Goto   0 19000
24Halt   0 0 000
920225462863128947|920225462863128947
-723158119245037038|-723158119245037038

Hope this at least explains what is happening. I don't know if this is
"correct" or not.


-- 
I have a theory that it's impossible to prove anything, but I 

[sqlite] random value get re-generated too often in SQLite

2017-12-08 Thread John Mount
I am seeing an issue where a random value in a sub-query gets re-generated each 
time the value is used in later queries.  Below is an example query:

SELECT r AS r1, r AS r2 FROM ( SELECT random() AS r FROM ( SELECT * from ( 
VALUES(1),(2) ) ) a ) b

One would expect r1 == r2.

---
John Mount
http://www.win-vector.com/  
Our book: Practical Data Science with R http://www.manning.com/zumel/ 




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual table row deletion on trigger

2017-12-08 Thread Tibor Balog

All right,

I think I was missing a potentially important piece of info.

I am deleting rows as ranges.

It is up to SQLite how to do the deletion.

I am pushing ranges in a loop in the transaction.

Then the loop breaks, signaling a "busy" or "overwhelmed" state.

Without the transaction frame it is working fine.

-Ursprüngliche Nachricht- 
From: Tibor Balog

Sent: Friday, December 8, 2017 7:11 PM
To: SQLite
Subject: [sqlite] Virtual table row deletion on trigger

Hello,

I ran into a little annoyance regarding a virtual table.

I have a contentless table:

CREATE VIRTUAL TABLE "OCR" using fts5 (content='',FullText)

bound with a trigger for row deletion:

CREATE TRIGGER "ART_AD" AFTER DELETE ON "ART" BEGIN INSERT INTO "OCR" 
("OCR",rowid) VALUES('DELETE',old.rowid);END


It works as advertised; however, if I am deleting rows in a transaction, anything
above ~20,000 rows puts SQLite itself in a “busy” state.


Is this a known limitation? That the virtual table row deletion, triggered by the
normal table deletion, interferes with the normal table in the transaction?


Info:
I'm not using WAL; the journal is in memory.

Latest build.

Thank You,
Tibor


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime to bigint

2017-12-08 Thread Tibor Balog

Well,

Coming from a strongly typed education, this is too much freedom for me. -
Just kidding -


Thanks for the vital info, appreciated.

-Ursprüngliche Nachricht- 
From: Keith Medcalf

Sent: Friday, December 8, 2017 7:14 PM
To: SQLite mailing list
Subject: Re: [sqlite] DateTime to bigint


That is:

UPDATE Table
  SET Datum = (StrfTime('%s', Datum) + 62135596800) * 1000
WHERE typeof(Datum) == 'text';

There is no such thing as a datatype of "bigint".
This declaration will give you a column affinity of "integer".

There is no such thing as a datatype of "DateTime".
This declaration will give you a column affinity of "numeric".

https://sqlite.org/datatype3.html

The TypeOf(...) function tells you the type of a dataitem.  That is, the 
particular data stored in a specific column of the current row.  It says 
nothing about the types of data stored in other columns in the same row, or 
in the same column in other rows.  Every dataitem is individually stored
using one of the supported types.


Results from TypeOf(...) may be 'null', 'real', 'integer', 'text' or 'blob' 
as these are the only "data types" for a dataitem in SQLite3.


https://sqlite.org/lang_corefunc.html#typeof

The builtin datetime functions do not have any clue what to do with "rata 
die huns" timestamp values so your application will, of course, have to 
handle them itself.  Or you will have to compute a "supported" value for use 
with the builtins (ISO8601 text string *subset*, Unix Timestamp, Julianday
number in the Proleptic Gregorian Calendar, for example).  Additionally,
the datetime values returned by the "localtime"/"UTC" modifier(s) may only 
be correct for 'NOW' where the value of 'NOW' happens to be the current 
instant, if and only if the computer is displaying the correct Zulu and 
Wallclock time of this current instant, depending on the vagaries of time as 
implemented by your platform C library, and such conversions for any time 
that is not the ever continuous streams of the instant 'now' may be 
incorrect, again depending on the platform C library.


---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



-Original Message-
From: sqlite-users [mailto:sqlite-users-
boun...@mailinglists.sqlite.org] On Behalf Of Paul Sanderson
Sent: Friday, 8 December, 2017 10:17
To: SQLite mailing list
Subject: Re: [sqlite] DateTime to bigint

Hi Tibor

Your date format is windows ticks, i.e. 100 nano seconds intervals
since
01/01/0001

You can convert it as follows

SELECT (StrfTime('%s', '2004-08-05') + 62135596800) * 1000 AS
Ticks

where StrfTime('%s', '2004-08-05') is the number of seconds between
the
provided date and 1/1/1970
62135596800 is the difference in seconds between 1/1/1970 and
1/1/0001
and 1000 converts it to nanoseconds

HTH

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-
Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 8 December 2017 at 16:53, Tibor Balog  wrote:


Hi there,

I am very new to SQLite but like it sofar.
Run into something couldn’t find an answer may someone can put me

in the

right direction..

Have an application uses SQLite table with a column “Datum” defined

as

bigint.
I have an identical table with column “Datum” as DateTime “-mm-

dd”


How can I convert this column -preferable in place- to bigint?

Some more info:

“2004-08-05” should be converted to “6322726080” .

Thank You,
in advence.
(Tibor)


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime to bigint

2017-12-08 Thread Keith Medcalf

That is:

UPDATE Table
   SET Datum = (StrfTime('%s', Datum) + 62135596800) * 1000
 WHERE typeof(Datum) == 'text';

There is no such thing as a datatype of "bigint".  
This declaration will give you a column affinity of "integer".

There is no such thing as a datatype of "DateTime". 
This declaration will give you a column affinity of "numeric". 

https://sqlite.org/datatype3.html

The TypeOf(...) function tells you the type of a dataitem.  That is, the 
particular data stored in a specific column of the current row.  It says 
nothing about the types of data stored in other columns in the same row, or in 
the same column in other rows.  Every dataitem is individually stored using one
of the supported types.

Results from TypeOf(...) may be 'null', 'real', 'integer', 'text' or 'blob' as 
these are the only "data types" for a dataitem in SQLite3.

https://sqlite.org/lang_corefunc.html#typeof
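
A minimal sketch of the difference (table and values invented purely for
illustration): the declared types only determine affinity, while TypeOf()
reports what is actually stored in each row:

CREATE TABLE t(a bigint, b DateTime);
INSERT INTO t VALUES('2004-08-05', '2004-08-05');  -- not numeric, both stay 'text'
INSERT INTO t VALUES(12345, 12345);                -- both stored as 'integer'
SELECT typeof(a), typeof(b) FROM t;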

The builtin datetime functions do not have any clue what to do with "rata die 
huns" timestamp values so your application will, of course, have to handle them 
itself.  Or you will have to compute a "supported" value for use with the 
builtins (ISO8601 text string *subset*, Unix Timestamp, Julianday number in the
Proleptic Gregorian Calendar, for example).  Additionally, the datetime
values returned by the "localtime"/"UTC" modifier(s) may only be correct for 
'NOW' where the value of 'NOW' happens to be the current instant, if and only 
if the computer is displaying the correct Zulu and Wallclock time of this 
current instant, depending on the vagaries of time as implemented by your 
platform C library, and such conversions for any time that is not the ever 
continuous streams of the instant 'now' may be incorrect, again depending on 
the platform C library.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Paul Sanderson
>Sent: Friday, 8 December, 2017 10:17
>To: SQLite mailing list
>Subject: Re: [sqlite] DateTime to bigint
>
>Hi Tibor
>
>Your date format is windows ticks, i.e. 100 nano seconds intervals
>since
>01/01/0001
>
>You can convert it as follows
>
>SELECT (StrfTime('%s', '2004-08-05') + 62135596800) * 1000 AS
>Ticks
>
>where StrfTime('%s', '2004-08-05') is the number of seconds between
>the
>provided date and 1/1/1970
>62135596800 is the difference in seconds between 1/1/1970 and
>1/1/0001
>and 1000 converts it to nanoseconds
>
>HTH
>
>Paul
>www.sandersonforensics.com
>skype: r3scue193
>twitter: @sandersonforens
>Tel +44 (0)1326 572786
>http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-
>Toolkit
>-Forensic Toolkit for SQLite
>email from a work address for a fully functional demo licence
>
>On 8 December 2017 at 16:53, Tibor Balog  wrote:
>
>> Hi there,
>>
>> I am very new to SQLite but like it sofar.
>> Run into something couldn’t find an answer may someone can put me
>in the
>> right direction..
>>
>> Have an application uses SQLite table with a column “Datum” defined
>as
>> bigint.
>> I have an identical table with column “Datum” as DateTime “-mm-
>dd”
>>
>> How can I convert this column -preferable in place- to bigint?
>>
>> Some more info:
>>
>> “2004-08-05” should be converted to “6322726080” .
>>
>> Thank You,
>> in advence.
>> (Tibor)



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Virtual table row deletion on trigger

2017-12-08 Thread Tibor Balog
Hello,

I ran into a little annoyance regarding a virtual table.

I have a contentless table:

CREATE VIRTUAL TABLE "OCR" using fts5 (content='',FullText)

bound with a trigger for row deletion:

CREATE TRIGGER "ART_AD" AFTER DELETE ON "ART" BEGIN INSERT INTO "OCR" 
("OCR",rowid) VALUES('DELETE',old.rowid);END

It works as advertised; however, if I am deleting rows in a transaction, anything
above ~20,000 rows puts SQLite itself in a “busy” state.

Is this a known limitation? That the virtual table row deletion, triggered by the
normal table deletion, interferes with the normal table in the transaction?

Info:
I'm not using WAL; the journal is in memory.

Latest build.

Thank You,
Tibor
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime to bigint

2017-12-08 Thread Tibor Balog

Bow!

that's what I call 'service'!

I was looking for a good day for this and nothing has popped up, only for the
reverse task.


You made my day!

Thank You Sir.

-Ursprüngliche Nachricht- 
From: Paul Sanderson

Sent: Friday, December 8, 2017 6:16 PM
To: SQLite mailing list
Subject: Re: [sqlite] DateTime to bigint

Hi Tibor

Your date format is Windows ticks, i.e. 100-nanosecond intervals since
01/01/0001.

You can convert it as follows

SELECT (StrfTime('%s', '2004-08-05') + 62135596800) * 1000 AS Ticks

where StrfTime('%s', '2004-08-05') is the number of seconds between the
provided date and 1/1/1970
62135596800 is the difference in seconds between 1/1/1970 and 1/1/0001
and 1000 converts it to nanoseconds

HTH

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 8 December 2017 at 16:53, Tibor Balog  wrote:


Hi there,

I am very new to SQLite but like it sofar.
Run into something couldn’t find an answer may someone can put me in the
right direction..

Have an application uses SQLite table with a column “Datum” defined as
bigint.
I have an identical table with column “Datum” as DateTime “-mm-dd”

How can I convert this column -preferable in place- to bigint?

Some more info:

“2004-08-05” should be converted to “6322726080” .

Thank You,
in advence.
(Tibor)


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime to bigint

2017-12-08 Thread Paul Sanderson
Hi Tibor

Your date format is Windows ticks, i.e. 100-nanosecond intervals since
01/01/0001.

You can convert it as follows

SELECT (StrfTime('%s', '2004-08-05') + 62135596800) * 1000 AS Ticks

where StrfTime('%s', '2004-08-05') is the number of seconds between the
provided date and 1/1/1970
62135596800 is the difference in seconds between 1/1/1970 and 1/1/0001
and 1000 converts it to nanoseconds
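
For the reverse direction (ticks back to a date/time string), the same
constants can simply be undone; a round-trip sketch of the query above:

SELECT datetime(((StrfTime('%s', '2004-08-05') + 62135596800) * 1000) / 1000
                - 62135596800, 'unixepoch');

i.e. divide the ticks value by the same factor, subtract the 1/1/0001 offset,
and hand the remaining Unix seconds to datetime().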

HTH

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 8 December 2017 at 16:53, Tibor Balog  wrote:

> Hi there,
>
> I am very new to SQLite but like it sofar.
> Run into something couldn’t find an answer may someone can put me in the
> right direction..
>
> Have an application uses SQLite table with a column “Datum” defined as
> bigint.
> I have an identical table with column “Datum” as DateTime “-mm-dd”
>
> How can I convert this column -preferable in place- to bigint?
>
> Some more info:
>
> “2004-08-05” should be converted to “6322726080” .
>
> Thank You,
> in advence.
> (Tibor)
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] DateTime to bigint

2017-12-08 Thread Tibor Balog
Hi there,

I am very new to SQLite but like it so far.
I ran into something I couldn't find an answer to; maybe someone can put me in the
right direction.

I have an application that uses an SQLite table with a column “Datum” defined as bigint.
I have an identical table with the column “Datum” as a DateTime “yyyy-mm-dd”.

How can I convert this column -preferable in place- to bigint?

Some more info:

“2004-08-05” should be converted to “6322726080”.

Thank You,
in advance.
(Tibor)
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Luuk
You are correct, but I'm not using sqlite enough to know (or remember)
that it also has a sqlite_master :-):-)


On 08-12-17 16:48, Donald Griggs wrote:

For windows, I think the following would work:

sqlite3 %DB% -batch "SELECT TBL_NAME FROM sqlite_master WHERE type=='table'
order by tbl_name;"   >%temp%\dump.tmp


The order by is not needed here (who cares about the order of exporting
tables??? :-P:-D)



for /f "usebackq" %%i in (`type %temp%\dump.tmp`) do sqlite3 %DB% -batch
".mode tabs" -batch "SELECT * FROM %%i " >%%i.tsv

delete %temp%\dump.tmp >nul

On Fri, Dec 8, 2017 at 10:19 AM, Luuk  wrote:


On 08-12-17 14:52, Luuk wrote:


On 08-12-17 14:13, Simon Slavin wrote:


On 8 Dec 2017, at 7:02am, Peng Yu  wrote:

I'd like to dump all the tables to separate files, one table one file.

Each file should be in TSV format.

Is there a convenient way to do so in sqlite3?


which you might want to use with

.mode tabs

Simon.


On Windows you can use this batchfile:
@echo off
set DB=somedirectory\yourDBfile.sqlite
sqlite3 %DB% -batch ".tables" >c:\temp\dump.tmp
for /f "usebackq" %%i in (`type c:\temp\dump.tmp`) do sqlite3 %DB% -batch
".mode tabs" -batch "SELECT * FROM %%i" >%%i.tsv

Let /me know if you do not use Windows, this can be rewritten to work on
another OS too ;)




There's a bug in my script (Windows verion only, its solved in the
linux-version :-):-)!)

when doing this:
sqlite3 db.sqlite -batch ".tables"

My script 'assumes' the output gives 1 table per line, and now only the
first table on a line is exported to TSV

in Linux writing scripts is more easy, so this problem can be dealt with:

#!/bin/bash

db=somedirectory/yourDBfile.sqlite
sqlite3 $db -batch ".tables" | sed -e 's/  */\n/g' | grep -v '^$' | while
read line
do
 echo $line
 sqlite3 $db -batch ".mode tabs" -batch "SELECT * FROM $line"

$line.tsv

done



For MacOS you have to replace the sed expression:
's/  */\n/g'(BTW there are 2 spaces before the '*'!!
with:
's/  */\$'\n/g' (still 2 spaces before the '*' :-)





___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why Unicode is difficult

2017-12-08 Thread John G
Fascinating article.
Thanks.
John Gillespie


On 4 December 2017 at 13:08, Simon Slavin  wrote:

> Every so often someone asks on this list for Unicode to be handled
> properly.  I did it myself.  Then other people have to explain how hard
> this is.  So here’s an article which, after introductory material,
> discusses the hard questions in Unicode:
>
> 
>
> Are two strings the same?
> How long is a string?
> How do you sort things in alphabetical order?
>
> The first and third questions are requirements for implementing COLLATE in
> SQLite.  And the fact that the second question is a difficult one
> emphasises that one shouldn’t take Unicode as simple.
>
> Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Donald Griggs
For windows, I think the following would work:

sqlite3 %DB% -batch "SELECT TBL_NAME FROM sqlite_master WHERE type=='table'
order by tbl_name;"   >%temp%\dump.tmp

for /f "usebackq" %%i in (`type %temp%\dump.tmp`) do sqlite3 %DB% -batch
".mode tabs" -batch "SELECT * FROM %%i " >%%i.tsv

del %temp%\dump.tmp >nul

On Fri, Dec 8, 2017 at 10:19 AM, Luuk  wrote:

> On 08-12-17 14:52, Luuk wrote:
>
>> On 08-12-17 14:13, Simon Slavin wrote:
>>
>>>
>>> On 8 Dec 2017, at 7:02am, Peng Yu  wrote:
>>>
>>> I'd like to dump all the tables to separate files, one table one file.
 Each file should be in TSV format.

 Is there a convenient way to do so in sqlite3?

>>>
>>> which you might want to use with
>>>
>>> .mode tabs
>>>
>>> Simon.
>>>
>> On Windows you can use this batchfile:
>> @echo off
>> set DB=somedirectory\yourDBfile.sqlite
>> sqlite3 %DB% -batch ".tables" >c:\temp\dump.tmp
>> for /f "usebackq" %%i in (`type c:\temp\dump.tmp`) do sqlite3 %DB% -batch
>> ".mode tabs" -batch "SELECT * FROM %%i" >%%i.tsv
>>
>> Let /me know if you do not use Windows, this can be rewritten to work on
>> another OS too ;)
>>
>>
>>
> There's a bug in my script (Windows verion only, its solved in the
> linux-version :-):-)!)
>
> when doing this:
> sqlite3 db.sqlite -batch ".tables"
>
> My script 'assumes' the output gives 1 table per line, and now only the
> first table on a line is exported to TSV
>
> in Linux writing scripts is more easy, so this problem can be dealt with:
>
> #!/bin/bash
>
> db=somedirectory/yourDBfile.sqlite
> sqlite3 $db -batch ".tables" | sed -e 's/  */\n/g' | grep -v '^$' | while
> read line
> do
> echo $line
> sqlite3 $db -batch ".mode tabs" -batch "SELECT * FROM $line"
> >$line.tsv
> done
>
>
>
> For MacOS you have to replace the sed expression:
> 's/  */\n/g'(BTW there are 2 spaces before the '*'!!
> with:
> 's/  */\$'\n/g' (still 2 spaces before the '*' :-)
>
>
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unable to store 500MB size of row data even after define macro SQLITE_MAX_LENGTH

2017-12-08 Thread Kees Nuyt
On Thu, 7 Dec 2017 23:59:02 -0700 (MST), Durgesh
 wrote:

> getting error "QSqlError("18", "Unable to bind parameters", "string or blob
> too big")" while inserting a row data of size 500MB.
>
> However insertion of 450 MB is successful.
>
> Tried define the macro SQLITE_MAX_LENGTH with value of
> SQLITE_MAX_LENGTH=20 , larger than 500MB.
>
> pls suggest a way insertion can be done for more than 450 MB of data.
>
> I am using Qt SQL , for interfacing with sqlite db.

What is the database's page size?
How big is your page cache? 
Is cache spill enabled?

See: 
PRAGMA schema.cache_size;
https://sqlite.org/pragma.html#pragma_cache_size
PRAGMA cache_spill; 
https://sqlite.org/pragma.html#pragma_cache_spill
PRAGMA schema.page_size; 
https://sqlite.org/pragma.html#pragma_page_size
PRAGMA schema.cache_size;

Other things to look at:
PRAGMA schema.journal_size_limit;
https://sqlite.org/pragma.html#pragma_journal_size_limit
PRAGMA soft_heap_limit;
https://sqlite.org/pragma.html#pragma_soft_heap_limit
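
For example, a minimal check sketch (assuming the default "main" schema; use
your attached schema name instead if different):

PRAGMA main.page_size;
PRAGMA main.cache_size;
PRAGMA cache_spill;
PRAGMA main.journal_size_limit;
PRAGMA soft_heap_limit;
-- if large rows are the problem, a bigger page cache may help, e.g.:
PRAGMA main.cache_size = -200000;   -- roughly 200 MB of page cache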

HTH

-- 
Regards,
Kees Nuyt
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Luuk

On 08-12-17 14:52, Luuk wrote:

On 08-12-17 14:13, Simon Slavin wrote:


On 8 Dec 2017, at 7:02am, Peng Yu  wrote:


I'd like to dump all the tables to separate files, one table one file.
Each file should be in TSV format.

Is there a convenient way to do so in sqlite3?


which you might want to use with

.mode tabs

Simon.

On Windows you can use this batchfile:
@echo off
set DB=somedirectory\yourDBfile.sqlite
sqlite3 %DB% -batch ".tables" >c:\temp\dump.tmp
for /f "usebackq" %%i in (`type c:\temp\dump.tmp`) do sqlite3 %DB% 
-batch ".mode tabs" -batch "SELECT * FROM %%i" >%%i.tsv


Let /me know if you do not use Windows, this can be rewritten to work 
on another OS too ;)





There's a bug in my script (Windows version only, it's solved in the
Linux version :-):-)!)


when doing this:
sqlite3 db.sqlite -batch ".tables"

My script 'assumes' the output gives 1 table per line, and now only the 
first table on a line is exported to TSV


In Linux, writing scripts is easier, so this problem can be dealt with:

#!/bin/bash

db=somedirectory/yourDBfile.sqlite
sqlite3 $db -batch ".tables" | sed -e 's/  */\n/g' | grep -v '^$' | while read line
do
    echo $line
    sqlite3 $db -batch ".mode tabs" -batch "SELECT * FROM $line" >$line.tsv
done



For MacOS you have to replace the sed expression:
's/  */\n/g'    (BTW there are 2 spaces before the '*'!!)
with:
's/  */\$'\n/g' (still 2 spaces before the '*' :-)


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Simon Slavin


On 8 Dec 2017, at 1:58pm, Peng Yu  wrote:

> Could you provide the working code for bash (on Mac OS X or linux)? Thanks.

Something like this …

sqlite3 myDatabase.sqlite > myTable.tsv << EOS
.mode tabs 
select * from myTable;
EOS

The first EOS must be at the very end of the line — no whitespace.
The second EOS must be the only thing on the line — no whitespace — and followed
by the return character.

Alternatively put the tool commands in a script file and do

sqlite3 myDatabase.sqlite > myTable.tsv < myScript.txt
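
where myScript.txt would simply hold the same tool commands as the heredoc
above, e.g.:

.mode tabs
select * from myTable;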

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Timing issue with min, max and count

2017-12-08 Thread x
>But these special optimizations only apply when min(), max(), and
count(*) are used in isolation.  Hence, they do not work for the first
query above that uses all three functions at one.

Thanks Richard.

>(1) If you are using INTEGER PRIMARY KEY, you should *not* be using a
WITHOUT ROWID.  You can.  It will get the correct answer.  But it will
be unnecessarily slower.

I created that table when I first started learning SQLite because I thought it
would take up less space. I later read what you mention in (1) above and went 
back and rebuilt the table minus the WITHOUT ROWID. It then took up more space 
and there was no noticeable speed gain so I reverted back. As I said, it was a 
while ago but I’ll revisit the whole affair when I’ve more time.


>(2) You should never include a single-column PRIMARY KEY as part of an
index.  The database engine will do that for you automatically.

That’s something else I only realised recently.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Peng Yu
Could you provide the working code for bash (on Mac OS X or linux)? Thanks.

On Fri, Dec 8, 2017 at 7:52 AM, Luuk  wrote:
> On 08-12-17 14:13, Simon Slavin wrote:
>>
>>
>> On 8 Dec 2017, at 7:02am, Peng Yu  wrote:
>>
>>> I'd like to dump all the tables to separate files, one table one file.
>>> Each file should be in TSV format.
>>>
>>> Is there a convenient way to do so in sqlite3?
>>
>> There’s no direct output from the SQLite library to produce TSV format.
>>
>> You could write one in your own programming language.
>>
>> Alternatively, you could script the SQLite Command-line tool to produce
>> your .tsv files for you.  Take a look at section 15 of
>>
>> 
>>
>> which you might want to use with
>>
>> .mode tabs
>>
>> Simon.
>
>
> On Windows you can use this batchfile:
> @echo off
> set DB=somedirectory\yourDBfile.sqlite
> sqlite3 %DB% -batch ".tables" >c:\temp\dump.tmp
> for /f "usebackq" %%i in (`type c:\temp\dump.tmp`) do sqlite3 %DB% -batch
> ".mode tabs" -batch "SELECT * FROM %%i" >%%i.tsv
>
> Let /me know if you do not use Windows, this can be rewritten to work on
> another OS too ;)
>
>
>



-- 
Regards,
Peng
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Luuk

On 08-12-17 14:13, Simon Slavin wrote:


On 8 Dec 2017, at 7:02am, Peng Yu  wrote:


I'd like to dump all the tables to separate files, one table one file.
Each file should be in TSV format.

Is there a convenient way to do so in sqlite3?

There’s no direct output from the SQLite library to produce TSV format.

You could write one in your own programming language.

Alternatively, you could script the SQLite Command-line tool to produce your 
.tsv files for you.  Take a look at section 15 of



which you might want to use with

.mode tabs

Simon.


On Windows you can use this batchfile:
@echo off
set DB=somedirectory\yourDBfile.sqlite
sqlite3 %DB% -batch ".tables" >c:\temp\dump.tmp
for /f "usebackq" %%i in (`type c:\temp\dump.tmp`) do sqlite3 %DB% 
-batch ".mode tabs" -batch "SELECT * FROM %%i" >%%i.tsv


Let /me know if you do not use Windows, this can be rewritten to work on 
another OS too ;)



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Timing issue with min, max and count

2017-12-08 Thread Simon Slavin


On 8 Dec 2017, at 1:17pm, Simon Slavin  wrote:

> helps you (and us, if you want to post it) to understand what’s happening.

Please ignore my post.  Dr H explain your situation exactly.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Possible bug when compiling with SQLITE_OMIT_TRIGGER option

2017-12-08 Thread M Irya
Hello,

I'm trying to create a minimal SQLite3 library build, and it seems like the
foreign key logic is not properly run on DELETE when building the sources
with -DSQLITE_OMIT_TRIGGER=1.

Here's the patch against the amalgamation v3.21.0 sqlite3.c:

--- sqlite3.c.orig 2017-12-08 14:06:04.814913000 +0100
+++ sqlite3.c 2017-12-08 14:06:26.125563000 +0100
@@ -105900,6 +105900,7 @@
 #else
 # define pTrigger 0
 # define isView 0
+  bComplex = sqlite3FkRequired(pParse, pTab, 0, 0);
 #endif
 #ifdef SQLITE_OMIT_VIEW
 # undef isView

Without that, bComplex in sqlite3DeleteFrom() is left undefined in the
if () condition 90 lines below.

Let me know if this looks correct/can be pushed to the official SQLite repo.

Regards,
KM
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Timing issue with min, max and count

2017-12-08 Thread Simon Slavin


On 8 Dec 2017, at 12:20pm, x  wrote:

> I have a table with 2.4 million records. It’s a without rowid table (I don’t 
> know if that’s significant) with an integer primary key (ID) and several 
> secondary indexes of the form (OtherCol, ID). If I run
> 
> select min(ID), max(ID), count(*) from BigTbl;
> 
> It takes [far longer than the individual SELECTs take in total)

Are you doing this in your own programming language ?

If so, can you try obtaining timings using the SQLite command-line tool ?  If 
you want to post them please tell us which version of SQLite you’re using.

You might find the output of

EXPLAIN QUERY PLAN select min(ID), max(ID), count(*) from BigTbl;

helps you (and us, if you want to post it) to understand what’s happening.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Timing issue with min, max and count

2017-12-08 Thread Richard Hipp
On 12/8/17, x  wrote:
>
> I have a table with 2.4 million records. It’s a without rowid table (I don’t
> know if that’s significant) with an integer primary key (ID) and several
> secondary indexes of the form (OtherCol, ID). If I run

(1) If you are using INTEGER PRIMARY KEY, you should *not* be using a
WITHOUT ROWID.  You can.  It will get the correct answer.  But it will
be unnecessarily slower.

(2) You should never include a single-column PRIMARY KEY as part of an
index.  The database engine will do that for you automatically.
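
For the secondary indexes described above that would mean, as a sketch (index
and column names assumed from the original post):

-- instead of:  CREATE INDEX Ndx ON BigTbl(OtherCol, ID);
CREATE INDEX Ndx ON BigTbl(OtherCol);

since the ID key is already appended to every secondary index entry.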

>
> select min(ID), max(ID), count(*) from BigTbl;
>
> It takes 0.67 secs
>
> If I run the three commands separately
>
> select min(ID) from BigTbl;
> select max(ID) from BigTbl;
> select count(*) from BigTbl;


SQLite internally performs special optimizations for the first two
cases, transforming them as follows:

(A) SELECT id FROM bigtbl ORDER BY id LIMIT 1;
(B) SELECT id FROM bigtbl ORDER BY id DESC LIMIT 1;

And isolated "count(*)" without a WHERE uses a separate special
optimization in the storage engine that allows it to count the number
of entries in the btree very quickly.

But these special optimizations only apply when min(), max(), and
count(*) are used in isolation.  Hence, they do not work for the first
query above that uses all three functions at once.
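
One way to keep all three fast paths in a single statement (a workaround
sketch, not something stated in this thread) is to run each aggregate as its
own scalar subquery, so each one is still "in isolation":

SELECT (SELECT min(ID)  FROM BigTbl) AS minID,
       (SELECT max(ID)  FROM BigTbl) AS maxID,
       (SELECT count(*) FROM BigTbl) AS cnt;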

>
> the TOTAL time to run all 3 is around 0.1 secs.
>
> explain query plan select min(ID) from BigTbl; suggests the primary key is
> used
> explain query plan select max(ID) from BigTbl; suggests the primary key is
> used
> explain query plan select count(*) from BigTbl; suggests a secondary index
> (call it Ndx) is used
>
> Any combo also seems to use secondary index Ndx e.g.
>
> explain query plan select min(ID), max(ID) from BigTbl;
> and
> explain query plan select min(ID), max(ID), count(*) from BigTbl;
>
> both use secondary index Ndx.
>
> All come up with the correct answer but obviously when Ndx is used min and
> max require checking all values of ID rather than obtaining the result from
> first and last entries in primary key.
>
> Tom
>
>
>


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Simon Slavin


On 8 Dec 2017, at 7:02am, Peng Yu  wrote:

> I'd like to dump all the tables to separate files, one table one file.
> Each file should be in TSV format.
> 
> Is there a convenient way to do so in sqlite3?

There’s no direct output from the SQLite library to produce TSV format.

You could write one in your own programming language.

Alternatively, you could script the SQLite Command-line tool to produce your 
.tsv files for you.  Take a look at section 15 of



which you might want to use with

.mode tabs

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Timing issue with min, max and count

2017-12-08 Thread x

I have a table with 2.4 million records. It’s a without rowid table (I don’t 
know if that’s significant) with an integer primary key (ID) and several 
secondary indexes of the form (OtherCol, ID). If I run

select min(ID), max(ID), count(*) from BigTbl;

It takes 0.67 secs

If I run the three commands separately

select min(ID) from BigTbl;
select max(ID) from BigTbl;
select count(*) from BigTbl;

the TOTAL time to run all 3 is around 0.1 secs.

explain query plan select min(ID) from BigTbl; suggests the primary key is used
explain query plan select max(ID) from BigTbl; suggests the primary key is used
explain query plan select count(*) from BigTbl; suggests a secondary index 
(call it Ndx) is used

Any combo also seems to use secondary index Ndx e.g.

explain query plan select min(ID), max(ID) from BigTbl;
and
explain query plan select min(ID), max(ID), count(*) from BigTbl;

both use secondary index Ndx.

All come up with the correct answer but obviously when Ndx is used min and max 
require checking all values of ID rather than obtaining the result from first 
and last entries in primary key.

Tom



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] getting error "string or blob too big" for 500 MB data insertion

2017-12-08 Thread Richard Hipp
On 12/8/17, Durgesh  wrote:
> I am trying to insert 500 MB of row data using Qt SQL into sqlite db.

How are you measuring the row size?

>
> Insertion is successful up to 450 MB.
>
> defined macro SQLITE_MAX_LENGTH to larger value than 500 MB, as mentioned in
> http://www.sqlite.org/limits.html
>
> Still getting above error while insertion.
>
>
>
>
>
> --
> Sent from: http://sqlite.1065341.n5.nabble.com/


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] getting error "string or blob too big" for 500 MB data insertion

2017-12-08 Thread Durgesh
I am trying to insert 500 MB of row data using Qt SQL into sqlite db.

Insertion is successful up to 450 MB.

Defined the macro SQLITE_MAX_LENGTH to a value larger than 500 MB, as mentioned in
http://www.sqlite.org/limits.html

Still getting the above error during insertion.





--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to dump all the tables into separate files?

2017-12-08 Thread Peng Yu
I'd like to dump all the tables to separate files, one table one file.
Each file should be in TSV format.

Is there a convenient way to do so in sqlite3?

-- 
Regards,
Peng
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unable to store 500MB size of row data even after define macro SQLITE_MAX_LENGTH

2017-12-08 Thread Durgesh
getting error "QSqlError("18", "Unable to bind parameters", "string or blob
too big")" while inserting a row data of size 500MB.

However insertion of 450 MB is successful.

Tried define the macro SQLITE_MAX_LENGTH with value of
SQLITE_MAX_LENGTH=20 , larger than 500MB.

pls suggest a way insertion can be done for more than 450 MB of data.

I am using Qt SQL , for interfacing with sqlite db.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users