Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Eduardo
On Wed, 6 Jun 2018 14:33:12 +0300
Aydin Ozgur Yagmur  wrote:

> I have applied the changes. After moving the blob to the last column, as
> suggested by Hick and Eduardo, a 500 MB delete took 40-45 seconds.
> I had already attempted to change page_size & cache_size but had not seen
> any significant improvement, so I did not apply it now.

When you change the page_size you must VACUUM the db. If you don't, the
change in page_size is a no-op.
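For example, something along these lines (a minimal sketch; 65536 is just an
illustrative size):

  PRAGMA page_size = 65536;  -- takes effect only when the file is rebuilt
  VACUUM;                    -- rebuilds the db file with the new page size
  PRAGMA page_size;          -- should now report 65536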

The change in cache_size will speed up access to the blobs in a basic way.
There are better ways to do so, but they are a bit more complex.

For example, a 4GB cache_size will lazily pull the whole db file into memory.
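A sketch of how to set that (cache_size counts pages when positive and KiB
when negative):

  PRAGMA cache_size = -4194304;  -- 4 GiB, given as a negative KiB value
  -- or, with a 64 KiB page_size:
  PRAGMA cache_size = 65536;     -- 65536 pages * 64 KiB = 4 GiB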

> Could you please explain the positive effects of the "autovacuum=incremental"
> solution? I have not seen many positive comments on it.

Autovacuum incremental adds SQLite pointer-map metadata to the pages, giving
better read access to blobs bigger than your page_size. The trick is to never
actually call the pragma incremental_vacuum. That is, you set incremental but
never run it.
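A sketch of the setup (auto_vacuum only takes effect if set before the first
table is created, or if followed by a VACUUM):

  PRAGMA auto_vacuum = INCREMENTAL;  -- enables the pointer-map pages
  -- ... create tables and use the db as normal ...
  -- and simply never run: PRAGMA incremental_vacuum;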

> Thanks for your help,

-- 
Eduardo 


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Aydin Ozgur Yagmur
I have applied the changes. After moving the blob to the last column, as
suggested by Hick and Eduardo, a 500 MB delete took 40-45 seconds.
I had already attempted to change page_size & cache_size but had not seen
any significant improvement, so I did not apply it now.
Could you please explain the positive effects of the "autovacuum=incremental"
solution? I have not seen many positive comments on it.

Thanks for your help,



On Wed, Jun 6, 2018 at 1:10 PM, Eduardo  wrote:

> On Wed, 6 Jun 2018 12:06:15 +0300
> Aydin Ozgur Yagmur  wrote:
>
> > Thank you for the very quick response.
> >
> > We have already planned to change the file system. But this problem seems
> > not to be related to the file size, because we did not hit the limit, and
> > when I try to delete the same data in, for example, a 2 GB-sized db, I
> > encounter the same problem. These are my table and index definitions.
>
> Why didn't you apply the changes suggested by others in the other thread?
>
> > Thanks for your help.
> >
> > CREATE TABLE "ANHXT" (
> >   "id" integer primary key autoincrement,
> >   "ANH_AD" text,
> >   "ANH_DBGMHWID" text,
> >   "ANH_TYPE" integer,
> >   "ANH_INDEXNO" int64_t)
> > CREATE TABLE "PRCXT" (
> >   "id" integer primary key autoincrement,
> >   "ANP_SEGMENTNO" integer not null,
> >   "ANP_VALUE" blob,
> >   "ANH_PRC_id" bigint,
> >   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
> >  "ANHXT" ("id") on update cascade on delete cascade deferrable
> > initially deferred)
> > CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT(ANH_TYPE);
> > CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT(ANP_SEGMENTNO, ANH_PRC_id);
> > CREATE INDEX findex on PRCXT(ANH_PRC_id);
>
>
> Move the blob to the last column, so the other columns can be read without
> walking the blob's overflow pages (table constraints must still come after
> all column definitions):
>
>  CREATE TABLE "PRCXT" (
>    "id" integer primary key autoincrement,
>    "ANP_SEGMENTNO" integer not null,
>    "ANH_PRC_id" bigint,
>    "ANP_VALUE" blob,
>    constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
>      "ANHXT" ("id") on update cascade on delete cascade deferrable
>      initially deferred)
>
>
> Even better, use a table specific to the blobs:
>
>   CREATE TABLE the_blob (
> id integer primary key,
> content blob
>   )
>
> Best of all, use 2 databases: one for the metadata (the ANHXT and PRCXT
> tables) and another for the blobs. Open the metadata database and ATTACH
> the blob database.
>
> If you use a foreign key constraint to point to the blob table, don't use
> WAL mode. If you want/need WAL mode, use a plain integer column holding the
> blob id and manage updates and deletes yourself. (Note that a foreign key
> cannot reference a table in an attached database, so the two-file setup
> needs the manual integer column in any case.)
>
> Set these pragmas for the blob schema before creation:
>
> pragma the_blob.auto_vacuum = incremental;
> pragma the_blob.page_size = 65536;
> pragma the_blob.cache_size = 1024; -- in pages (64 MB here); must set on open/attach too
>
> Depending on your insert/update/delete ratios, there are other patterns
> that may fit better.
>
> > On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter  wrote:
> >
> > > On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
> > >
> > > > I have been using fat32 file system. I have a database which has 4 GB
> > > > size.
> > >
> > > Er... I'm surprised there aren't more problems due to 4GB being the max
> > > file size supported by fat32.  Any chance to change it to exFAT?
> > >
>
> Don't use FAT32. Use your native fs: NTFS, UFS2, HFS+, Hammer2, ext4, etc.
>
> > > K
>
> --
> Eduardo 


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Eduardo
On Wed, 6 Jun 2018 12:06:15 +0300
Aydin Ozgur Yagmur  wrote:

> Thank you for the very quick response.
> 
> We have already planned to change the file system. But this problem seems
> not to be related to the file size, because we did not hit the limit, and
> when I try to delete the same data in, for example, a 2 GB-sized db, I
> encounter the same problem. These are my table and index definitions.

Why didn't you apply the changes suggested by others in the other thread?

> Thanks for your help.
> 
> CREATE TABLE "ANHXT" (
>   "id" integer primary key autoincrement,
>   "ANH_AD" text,
>   "ANH_DBGMHWID" text,
>   "ANH_TYPE" integer,
>   "ANH_INDEXNO" int64_t)
> CREATE TABLE "PRCXT" (
>   "id" integer primary key autoincrement,
>   "ANP_SEGMENTNO" integer not null,
>   "ANP_VALUE" blob,
>   "ANH_PRC_id" bigint,
>   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
>  "ANHXT" ("id") on update cascade on delete cascade deferrable
> initially deferred)
> CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT(ANH_TYPE);
> CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT(ANP_SEGMENTNO, ANH_PRC_id);
> CREATE INDEX findex on PRCXT(ANH_PRC_id);


Move the blob to the last column, so the other columns can be read without
walking the blob's overflow pages (table constraints must still come after
all column definitions):

 CREATE TABLE "PRCXT" (
   "id" integer primary key autoincrement,
   "ANP_SEGMENTNO" integer not null,
   "ANH_PRC_id" bigint,
   "ANP_VALUE" blob,
   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
     "ANHXT" ("id") on update cascade on delete cascade deferrable
     initially deferred)


Even better, use a table specific to the blobs:

  CREATE TABLE the_blob (
id integer primary key,
content blob
  )

Best of all, use 2 databases: one for the metadata (the ANHXT and PRCXT
tables) and another for the blobs. Open the metadata database and ATTACH the
blob database.
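
A minimal sketch of that setup (the file name 'blobs.db' is just an example):

  ATTACH DATABASE 'blobs.db' AS the_blob;
  -- set the schema pragmas below before creating the table, then:
  CREATE TABLE IF NOT EXISTS the_blob.the_blob (
    id integer primary key,
    content blob
  );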

If you use a foreign key constraint to point to the blob table, don't use WAL
mode. If you want/need WAL mode, use a plain integer column holding the blob
id and manage updates and deletes yourself. (Note that a foreign key cannot
reference a table in an attached database, so the two-file setup needs the
manual integer column in any case.)
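
In that case a cascade-style delete could look like this (a sketch; the
ANP_BLOB_id column is hypothetical):

  -- PRCXT holds a plain integer ANP_BLOB_id instead of a cross-file FK.
  -- Note: in WAL mode this transaction is not atomic across the two files.
  BEGIN;
  DELETE FROM the_blob.the_blob
   WHERE id IN (SELECT ANP_BLOB_id FROM PRCXT WHERE ANH_PRC_id = ?1);
  DELETE FROM PRCXT WHERE ANH_PRC_id = ?1;
  COMMIT;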

Set these pragmas for the blob schema before creation:

pragma the_blob.auto_vacuum = incremental;
pragma the_blob.page_size = 65536;
pragma the_blob.cache_size = 1024; -- in pages (64 MB here); must set on open/attach too

Depending on your insert/update/delete ratios, there are other patterns that 
may fit better.
 
> On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter  wrote:
> 
> > On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
> >
> > > I have been using fat32 file system. I have a database which has 4 GB
> > > size.
> >
> > Er... I'm surprised there aren't more problems due to 4GB being the max
> > file size supported by fat32.  Any chance to change it to exFAT?
> >

Don't use FAT32. Use your native fs: NTFS, UFS2, HFS+, Hammer2, ext4, etc.

> > K

-- 
Eduardo 


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Aydin Ozgur Yagmur
Thank you for the very quick response.

We have already planned to change the file system. But this problem seems not
to be related to the file size, because we did not hit the limit, and when I
try to delete the same data in, for example, a 2 GB-sized db, I encounter the
same problem. These are my table and index definitions.

Thanks for your help.

CREATE TABLE "ANHXT" (
  "id" integer primary key autoincrement,
  "ANH_AD" text,
  "ANH_DBGMHWID" text,
  "ANH_TYPE" integer,
  "ANH_INDEXNO" int64_t)
CREATE TABLE "PRCXT" (
  "id" integer primary key autoincrement,
  "ANP_SEGMENTNO" integer not null,
  "ANP_VALUE" blob,
  "ANH_PRC_id" bigint,
  constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
 "ANHXT" ("id") on update cascade on delete cascade deferrable
initially deferred)
CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT(ANH_TYPE);
CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT(ANP_SEGMENTNO, ANH_PRC_id);
CREATE INDEX findex on PRCXT(ANH_PRC_id);



On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter  wrote:

> On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
>
> > I have been using fat32 file system. I have a database which has 4 GB
> > size.
>
> Er... I'm surprised there aren't more problems due to 4GB being the max
> file size supported by fat32.  Any chance to change it to exFAT?
>
>
> K


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Karl Billeter
On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:

> I have been using fat32 file system. I have a database which has 4 GB size.

Er... I'm surprised there aren't more problems due to 4GB being the max file
size supported by fat32.  Any chance to change it to exFAT?


K


[sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Aydin Ozgur Yagmur
I have been using the FAT32 file system. I have a database which is 4 GB in
size. The database consists of 1 parent table and 1 child table. The parent
table has 10 rows and the child table has 4000 rows; each row of the child
table is about 1 MB in size. There are unique indexes on the tables.

When I delete a row in the parent table, the deletion cascades to the
1 MB-sized child records (pragma foreign_keys is on). When I try to delete
100 MB of data by cascade (1 parent record - 100 child records), it takes too
long (almost 1-10 minutes) to complete, and the duration increases/decreases
with the size of the data (100 MB: 1-10 minutes, 300 MB: 3-30 minutes, etc).
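
(The delete being timed is of this form, using the ANHXT parent table from
the schema posted in this thread:)

  PRAGMA foreign_keys = ON;
  -- one parent row; the cascade removes ~100 child rows of ~1 MB each
  DELETE FROM ANHXT WHERE id = ?1;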

I tried some pragma commands (synchronous, temp_store, journal_mode)
suggested by other posts, and I also tried to add an index on the foreign
key, but those did not help solve my problem. (Actually, after adding the
index on the foreign key, 1 MB deletions became faster, but the 100 MB
deletion duration did not change.) Can you please give me any suggestions to
increase deletion performance?
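
(For reference, the pragma commands tried were along these lines; the exact
values below are only illustrative, not from the original report:)

  PRAGMA synchronous = NORMAL;
  PRAGMA temp_store = MEMORY;
  PRAGMA journal_mode = WAL;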


Thanx in advance,