Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-17 Thread David Raymond
If you're doing one gigantic insert to populate the table, then it's more
efficient to delay creating an index until the end. If you've already got a
bunch of data in there and are inserting more, then you get diminishing returns
from that. In this case, though, you're not creating a separate, useful index:
you're creating a duplicate that adds no value even when it's there and
complete, so it's doing nothing but slowing things down. When you declare a
primary key, SQLite implements it by creating a unique index on those fields
(named sqlite_autoindex_map_1 below). When you explicitly declare another index
on the same data, SQLite shrugs and does what you ask, making a second index of
the same things.

Depending on your confidence level you could get rid of the "primary key" line
and only add the explicit unique index at the end of the initial population.
I'd advise keeping any primary key in there from the start, though, and only
delaying the creation of non-unique/helper indexes. Just my own personal
preference on that.


CREATE TABLE map(
  zoom_level INTEGER,
  tile_column INTEGER,
  tile_row INTEGER,
  tile_id VARCHAR(256) NOT NULL,
  PRIMARY KEY(zoom_level, tile_column, tile_row)
);
CREATE UNIQUE INDEX map_index ON map(
  zoom_level ASC,
  tile_column ASC,
  tile_row ASC
);

sqlite> select * from sqlite_master where type = 'index' and tbl_name = 'map';
type|name|tbl_name|rootpage|sql
index|sqlite_autoindex_map_1|map|5|
index|map_index|map|7|CREATE UNIQUE INDEX map_index ON map (zoom_level ASC, 
tile_column ASC, tile_row ASC)

sqlite> pragma index_list(map);
seq|name|unique|origin|partial
0|map_index|1|c|0
1|sqlite_autoindex_map_1|1|pk|0

sqlite> pragma index_info(sqlite_autoindex_map_1);
seqno|cid|name
0|0|zoom_level
1|1|tile_column
2|2|tile_row

sqlite> pragma index_info(map_index);
seqno|cid|name
0|0|zoom_level
1|1|tile_column
2|2|tile_row
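
To make the load-then-index ordering concrete, here is a minimal sketch (not
from the original mail) using the same map schema; the helper index name
map_tile_id_idx and the sample row are invented for illustration:

CREATE TABLE map(
  zoom_level INTEGER,
  tile_column INTEGER,
  tile_row INTEGER,
  tile_id VARCHAR(256) NOT NULL,
  PRIMARY KEY(zoom_level, tile_column, tile_row)  -- automatic unique index
);
BEGIN;
-- bulk INSERTs go here, batched inside one transaction
INSERT INTO map VALUES (18, 214233, 147702, 'a37e1dba');
COMMIT;
-- build non-unique/helper indexes only after the initial population
CREATE INDEX map_tile_id_idx ON map(tile_id);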




Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-16 Thread Fiona
>>The synchronous setting is basically about what level of safety net you want
if something dies in the middle of an operation. Setting it to off shouldn't
cause any corruption if things go well; it should only come into play if you
saw errors or didn't close things down correctly, etc.

You're right; my Python code was sometimes terminated manually because it was
taking too long to finish the INSERT/UPDATE operations, or because the image
data I got was wrong.

>>The unique index you declared is redundant, by the way: declaring those
three fields as the primary key already makes a unique index to keep track of
that. Did you intend to make that on the retry table?

The redundant map_index was an attempt to improve insert speed, as I had
learned that dropping an index before a big insert is the better way to go.
But now, with my data grown so huge, dropping and rebuilding the index also
takes quite a long time, so I no longer drop and re-create this index; I just
leave it there. Does it still affect my operations, and should I just drop it?

And if I want to speed up the insert operations further, what measures should
I consider?  I've already set synchronous and journal_mode to off, and I use
transactions and prepared statements, but when inserting data there's still a
3x speed difference between my code and the *attach-insert* method.
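
For reference, a minimal sketch of the *attach-insert* pattern mentioned
above, assuming a second database file (staging.sqlite, an invented name) that
already contains rows in the same map schema:

ATTACH DATABASE 'staging.sqlite' AS src;
BEGIN;
INSERT INTO map (zoom_level, tile_column, tile_row, tile_id)
  SELECT zoom_level, tile_column, tile_row, tile_id FROM src.map;
COMMIT;
DETACH DATABASE src;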





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-16 Thread Kees Nuyt
On Sun, 15 Oct 2017 18:36:56 -0700 (MST), Fiona
 wrote:

> Thanks for noticing that problem! Following your instructions, I'm now sure
> it's all because my db file is corrupted.  Is there anything I can do to fix
> it?
>
> Integrity check result:
> [screenshot]

I can think of three options:

1- Rebuild the database from the original input,
   with the schema improvements suggested in
   this thread.

2- Restore a recent backup, then import the data into
   a new database with the correct schema [*].

3- Follow the recipe that Simon gave: retrieve as much
   of the contents as possible using the .dump command,
   build a new database from the dump file, then import
   the data into a new database with the correct schema [*].


[*] The script for importing data from a database with the old
schema into a database with a better schema looks like this:
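(A minimal sketch, not Kees's original text: newdb.sqlite and olddb.sqlite are
placeholder names, and the column lists assume the map and images tables from
this thread.)

sqlite3 newdb.sqlite <<'EOF'
ATTACH DATABASE 'olddb.sqlite' AS old;
BEGIN;
INSERT INTO images (tile_id, tile_data)
  SELECT tile_id, tile_data FROM old.images;
INSERT INTO map (zoom_level, tile_column, tile_row, tile_id)
  SELECT zoom_level, tile_column, tile_row, tile_id FROM old.map;
COMMIT;
DETACH DATABASE old;
EOF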


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-16 Thread Richard Hipp
On 10/16/17, David Raymond  wrote:
> The synchronous setting is basically about what level of safety net you
> want if something dies in the middle of an operation. Setting it to off
> shouldn't cause any corruption if things go well; it should only come into
> play if you saw errors or didn't close things down correctly, etc.

Actually, synchronous=off is safe as long as you don't take a
hard-reboot or power loss in the middle of a transaction.  An
application crash with synchronous=off should be harmless.
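
In PRAGMA terms, a sketch of the trade-off being described:

PRAGMA synchronous = OFF;   -- no fsync: survives an application crash,
                            -- but not a power cut mid-transaction
PRAGMA synchronous = FULL;  -- the default: fsync at commit, durable
                            -- across power loss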

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-16 Thread David Raymond
The synchronous setting is basically about what level of safety net you want
if something dies in the middle of an operation. Setting it to off shouldn't
cause any corruption if things go well; it should only come into play if you
saw errors or didn't close things down correctly, etc.

The unique index you declared is redundant, by the way: declaring those three
fields as the primary key already makes a unique index to keep track of that.
Did you intend to make that on the retry table?




Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Simon Slavin


On 16 Oct 2017, at 3:33am, Kees Nuyt  wrote:

> In SQLite, columns are stored in database pages in the order
> they are defined.  [snip]
> 
> So, it is best practice put keys and all columns with small
> contents up front, and all big TEXT and BLOB columns at the end
> of the column list, in ascending order of expected size.

Kees’ clear answer missed an important point.  When retrieving data from a row, 
SQLite does not bother to read past the last column it needs.  So if you have

CREATE TABLE MyTable (c1 INTEGER, c2 TEXT, c3 INTEGER, c4 TEXT, c5 BLOB)

and do this

SELECT c1, c4 FROM MyTable WHERE c2 = 'HELLO'

then SQLite reads from c1 to c4, because it needs c4, but it stops at c4 for 
every row.  It never has to read any BLOB data at all.  This can save a lot of 
time, especially if your BLOBs are long and may cross a page boundary.

Simon.


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Kees Nuyt
On Sun, 15 Oct 2017 17:45:03 -0700 (MST), Fiona
 wrote:

>>> Swapping the columns tile_data and tile_id may improve performance
> significantly, especially if the BLOB can get bigger than a database page. 
>
> Thanks for your advice; could you please explain a bit more why that is?
> The primary key is not changed at all, so what exactly causes the improvement?

In SQLite, columns are stored in database pages in the order
they are defined. The primary key of any table is usually
accessed more often than any other column, because it is either
used by a WHERE clause or by the ON clause of a JOIN.

If a BLOB is bigger than a page, SQLite has to read one or more
so-called overflow pages to get to the primary key.

So, it is best practice to put keys and all columns with small
contents up front, and all big TEXT and BLOB columns at the end
of the column list, in ascending order of expected size.

HTH

-- 
Regards,
Kees Nuyt


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Simon Slavin


On 16 Oct 2017, at 2:14am, Fiona  wrote:

> Thanks a lot! That may be the problem: my db file is corrupted. Below is the
> *PRAGMA integrity_check* result. It didn't return OK.
> <http://sqlite.1065341.n5.nabble.com/file/t8403/integrity_check.jpg>
>
> I think it's because I set PRAGMA synchronous=off in my Python code to
> enhance insert speed.
> Does that mean this db file can not be used anymore?

Yes.  Anything done with this database may give the wrong result.  Any changes 
you make to it may overwrite data you want preserved.  Do not use it any more.  
This is our best method of rescuing as much data as possible:

1) Open the database in the SQLite shell tool.
2) Use the ".dump" command to dump the data into a file of SQL commands:

.dump sql.txt

3) Quit the SQLite shell tool with the ".quit" command.
4) Using a text editor open the text file and make sure it looks like it has 
the right sort of commands in.
5) Use the SQLite shell tool to create a new database file:

sqlite3 newfile.sqlite

6) Use the ".read" command to read and execute the commands from the text file:

.read sql.txt

7) Quit the SQLite shell tool with the ".quit" command.
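
The same rescue can also be scripted non-interactively from the OS shell; a
minimal sketch, assuming the damaged file is named old.sqlite:

sqlite3 old.sqlite .dump > sql.txt    # steps 1-3: dump everything as SQL text
sqlite3 newfile.sqlite < sql.txt      # steps 5-6: rebuild a fresh database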

You now have a new database which is guaranteed not to be corrupt as far as
SQLite is concerned (though the data in it may not be all the data you had
originally; some of that may have been lost).  You can rename your old and new
database files and see whether things are working correctly now.

Simon.


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Fiona
Thanks a lot! That may be the problem: my db file is corrupted. Below is the
*PRAGMA integrity_check* result. It didn't return OK.
<http://sqlite.1065341.n5.nabble.com/file/t8403/integrity_check.jpg>

I think it's because I set PRAGMA synchronous=off in my Python code to enhance
insert speed.
Does that mean this db file can not be used anymore? Or is there any way I
can fix it?





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Fiona
>>Interesting, because you cannot have two rows (two tile_id) for the same
combination of tile_zoom / tile_row / tile_column, since those three are
required to form a unique primary key.

Thanks for noticing that problem! Following your instructions, I'm now sure
it's all because my db file is corrupted.  Is there anything I can do to fix
it?

Integrity check result:
[screenshot]






Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Fiona
>> Swapping the columns tile_data and tile_id may improve performance
significantly, especially if the BLOB can get bigger than a database page. 

Thanks for your advice; could you please explain a bit more why that is?
The primary key is not changed at all, so what exactly causes the improvement?





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-15 Thread Fiona
>> SELECT typeof(tile_id), tile_id FROM map WHERE zoom_level=18 AND
tile_column=214233 AND tile_row=147702; 

tile_id is of type text.





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Keith Medcalf

Interesting, because you cannot have two rows (two tile_id) for the same
combination of tile_zoom / tile_row / tile_column, since those three are
required to form a unique primary key.  That is, the map table is declared as
if it were the dictionary:

map[tile_zoom, tile_row, tile_column] = tile_id

Unless your keys are not of the correct type somewhere, it is impossible for
the same dictionary selectors to have two results.  You need to check the
typeof() of each of your keys, i.e. check whether:

select *, typeof(tile_zoom), typeof(tile_row), typeof(tile_column),
typeof(tile_id), '"' || tile_id || '"' from map where <your selectors>;

is returning duplicate rows.  You must have one of them with an incorrect
(different) type.  Also, your tile_id could have "trailing spaces" that you
cannot see, thus confounding your delete request (which will be shown by the
above select).

so your delete statement is effectively:

if map[tile_zoom, tile_row, tile_column] == tile_id:
del map[tile_zoom, tile_row, tile_column]

Have you run an integrity_check on the database to make sure it is not
corrupted (because it certainly appears that it is)?  Either that, or you have
simply inserted some bad data.
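
For reference, the check is a single statement in the shell tool; it returns
the one-row result 'ok' when no problems are found:

PRAGMA integrity_check;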

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Simon Slavin


On 13 Oct 2017, at 3:32am, Fiona  wrote:

> delete/insert operations:
> <http://sqlite.1065341.n5.nabble.com/file/t8403/delete.jpg>

The two screenshots are useful.  Your two commands do not have the same WHERE 
clause.  I agree that it looks like they should have the same result, but 
obviously this is not working.

Please take a backup copy of your database.  Then, using the SQLite shell tool,
starting with the same PRAGMA settings, I would like to see the results from
these lines:

.mode quote
PRAGMA count_changes;
PRAGMA integrity_check;
SELECT COUNT(*) FROM map WHERE tile_id='a37e1dba ….';
SELECT * FROM map WHERE zoom_level=18 AND tile_column=214233 AND 
tile_row=147702 AND tile_id='a37e1dba ';
DELETE FROM map WHERE zoom_level=18 AND tile_column=214233 AND tile_row=147702 
AND tile_id='a37e1dba ';
SELECT * FROM map WHERE zoom_level=18 AND tile_column=214233 AND 
tile_row=147702;
DELETE FROM map WHERE zoom_level=18 AND tile_column=214233 AND tile_row=147702;

Use the full value for tile_id where indicated.  I may also have made a mistake 
typing the other values above.  Please check they match yours.

Simon.


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Kees Nuyt
On Thu, 12 Oct 2017 19:32:53 -0700 (MST), Fiona
 wrote:

> schema:
> <http://sqlite.1065341.n5.nabble.com/file/t8403/schema.jpg>

Not related to your problem, just a hint:
Swapping the columns tile_data and tile_id may improve
performance significantly, especially if the BLOB can get bigger
than a database page.

Original:
CREATE TABLE images (
tile_data BLOB
,   tile_id VARCHAR(256) NOT NULL
,   PRIMARY KEY (tile_id)); -- might be slow

Improved:
CREATE TABLE images (
tile_id VARCHAR(256) NOT NULL
,   tile_data BLOB
,   PRIMARY KEY (tile_id)); -- could be faster


-- 
Regards,
Kees Nuyt



Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Richard Hipp
On 10/12/17, Fiona  wrote:
>
> As you can see, insert works, and I can also delete/update this last
> inserted record. It seems some pages of my db file is locked or something.

Before running your query, enter ".mode quote".  Then show us what the
output of this query is:

SELECT typeof(tile_id), tile_id FROM map WHERE zoom_level=18 AND
tile_column=214233 AND tile_row=147702;



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Fiona
>> SQLite does none of those.  Have your program print out the actual command
it’s trying to execute.  Then try typing it manually.  See if it works when
you type it by hand. 

Yes, I manually typed all the commands, and I also checked all the records I
want to delete with a SELECT using the same WHERE clause. Here are the scripts.

schema:
<http://sqlite.1065341.n5.nabble.com/file/t8403/schema.jpg>

delete/insert operations:
<http://sqlite.1065341.n5.nabble.com/file/t8403/delete.jpg>





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Fiona
Sorry about my ambiguous description.
Here is what I want to do. Normally I use Python code to insert/update data in
two tables in my SQLite database file, *map* and *images*: table *map* stores
the indexes of the pictures, while table *images* stores their contents.
My Python code worked well until the db file grew too big: I found that some
pictures I had inserted into the db file were incorrect, so I reran my Python
code to update those pictures with the right ones, but the SELECT results from
table images were still not right. So I used the command-line tool to test it,
and there is my problem: the update/delete operations neither work nor give
any error message, while I can still insert data into the same file.

>> Are you trying to use the CLI while the Python script is doing inserts?
Of course I use the CLI after my Python script finishes all inserts;
otherwise the db file is locked and all my operations in the CLI would return
an error.

>> If you do ".changes on" before running the query, does the reported
>> change count increase?
I can't find this command in my CLI, so I use count_changes; here are the
schema and my commands. I think my delete operation returns correctly, but
after the delete, the select result confuses me.

Schema:
[screenshot]

delete:
[screenshot]

As you can see, insert works, and I can also delete/update this last
inserted record. It seems some pages of my db file are locked or something.





Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-13 Thread Andy Ling
Try changing the "DELETE FROM table WHERE " to "SELECT COUNT(1) FROM table 
WHERE " and see if you get a number bigger than 0. If not, then your WHERE 
isn't matching the rows you think it should.
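
For example, using the map table and values quoted elsewhere in this thread:

SELECT COUNT(1) FROM map
WHERE zoom_level=18 AND tile_column=214233 AND tile_row=147702;
-- 0 here means the WHERE clause matches nothing, so a DELETE with the
-- same WHERE clause is a no-op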

Regards

Andy Ling




Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-12 Thread Simon Slavin
On 13 Oct 2017, at 2:19am, Fiona  wrote:

> Thanks a lot! Yes, that's the situation. I'm using a WHERE clause in my
> UPDATE/DELETE statements, and I'm pretty sure the syntax and my shell are
> not the problem, because no error is returned, and I also tested the same
> UPDATE/DELETE statements with the same shell tool on a small db file, where
> they work correctly.

Here is an example of a command which returns no error but does not do what you 
want:

DELETE FROM MyTable WHERE myColumn = '"2"'

A DELETE or UPDATE command which affects no rows is a successful command and 
returns SQLITE_OK.

> I assume the problem is caused by some kind of *db file consistency check*
> I don't know about? Or is SQLite doing some *index rebuilding* for my
> UPDATE/DELETE operations?  What I really can't understand is why INSERT
> still works in the meantime.

SQLite does none of those.  Have your program print out the actual command it’s 
trying to execute.  Then try typing it manually.  See if it works when you type 
it by hand.

Your problem is probably to do with quote characters " ' “ ' `.  The quote 
characters you expect to be passed to SQLite may be interpreted by your shell 
instead.  Or maybe the other way around.

If you can post the command you’re trying to execute we might be able to help 
further.

Simon.


Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-12 Thread Fiona

>> This suggests INSERT works but UPDATE and DELETE do not.  Is this
>> correct ?

Thanks a lot! Yes, that's the situation. I'm using a WHERE clause in my
UPDATE/DELETE statements, and I'm pretty sure the syntax and my shell are not
the problem, because no error is returned, and I also tested the same
UPDATE/DELETE statements with the same shell tool on a small db file, where
they work correctly.

I assume the problem is caused by some kind of *db file consistency check* I
don't know about? Or is SQLite doing some *index rebuilding* for my
UPDATE/DELETE operations?  What I really can't understand is why INSERT still
works in the meantime.






Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-12 Thread David Raymond
Could you provide a little more clarification on what you're doing?

Are you trying to use the CLI while the Python script is doing inserts?

When you try to do an update or delete with the CLI does it hang and not 
complete, or does it happily continue on and let you keep going?

If you do ".changes on" before running the query, does the reported change 
count increase?

If there's no sensitive data in the schema could you share the schema and/or 
copy the screen text from the CLI with an example?




Re: [sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-12 Thread Simon Slavin


On 12 Oct 2017, at 10:41am, Fiona  wrote:

> Now I have a db file about 289GB in size. When I update/delete with the
> command line, the data is not changed/deleted at all, and no error is ever
> returned, while insert still works.

You say that the data is not changed, but the insert is working.  This suggests
INSERT works but UPDATE and DELETE do not.  Is this correct ?

Are you sure you have the right syntax for the UPDATE and DELETE commands ?  Do 
they have a WHERE clause ?  Have you escaped the quote signs around your values 
correctly ?

Try issuing the same commands using the SQLite shell tool interactively.  If it
works, the problem is with your shell.  You may be having trouble figuring out
how to escape quote characters.
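
For example (a sketch with made-up values): in SQL, string values take single
quotes, and a literal single quote inside a value is escaped by doubling it;
when the statement is passed through an OS shell, the shell needs its own
layer of quoting around the whole thing:

DELETE FROM MyTable WHERE myColumn = '2';   -- deletes rows whose value is 2
SELECT 'it''s escaped';                     -- doubled quote inside a literal
sqlite3 test.db "DELETE FROM MyTable WHERE myColumn = '2';"  -- from the shell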

Simon.


[sqlite] Sqlite3.6 Command-line delete/update not working with large db file(>280GB)

2017-10-12 Thread Fiona
Here are the specifics of my problem:
Ubuntu, SQLite 3.6.20

I have only two tables, each with a primary key and an index. I use Python
code to insert/update the data in these two tables; one table has a column
with large blob data.
Now I have a db file about 289GB in size. When I update/delete with the
command line, the data is not changed/deleted at all, and no error is ever
returned, while insert still works.

I looked through the SQLite limits; they say there is practically no limit
on the size of a db file, given that you have enough disk space. So please
help me: where can I look to solve this?

Thanks a lot!


