I have been applying the changes. After moving the blob to the last column,
as suggested by Hick and Eduardo, deleting 500 MB took 40-45 seconds.
I had already tried changing page_size and cache_size but did not see
any significant improvement, so I did not apply those now.
Could you please explain the positive effects of the "auto_vacuum=incremental"
solution? I have not seen many positive comments about it.

Thanks for your help,



On Wed, Jun 6, 2018 at 1:10 PM, Eduardo <eduardo.mor...@mobelservices.com>
wrote:

> On Wed, 6 Jun 2018 12:06:15 +0300
> Aydin Ozgur Yagmur <ayagmu...@gmail.com> escribió:
>
> > Thank you for the very quick response.
> >
> > We have already planned to change the file system. But this problem seems
> > unrelated to the file size: we did not hit the limit, and when I try
> > to delete the same data in, for example, a 2 GB database, I run into the
> > same problem. These are my table and index definitions.
>
> Why didn't you apply the changes suggested by others in the other thread?
>
> > Thanks for your help.
> >
> > CREATE TABLE "ANHXT" (
> >   "id" integer primary key autoincrement,
> >   "ANH_AD" text,
> >   "ANH_DBGMHWID" text,
> >   "ANH_TYPE" integer,
> >   "ANH_INDEXNO" int64_t)
> > CREATE TABLE "PRCXT" (
> >   "id" integer primary key autoincrement,
> >   "ANP_SEGMENTNO" integer not null,
> >   "ANP_VALUE" blob,
> >   "ANH_PRC_id" bigint,
> >   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
> >  "ANHXT" ("id") on update cascade on delete cascade deferrable
> > initially deferred)
> > CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT(ANH_TYPE)
> > CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT(
> >   ANP_SEGMENTNO, ANH_PRC_id)
> > CREATE INDEX findex on PRCXT(ANH_PRC_id)
>
>
> Move the blob to the last column
>
>  CREATE TABLE "PRCXT" (
>    "id" integer primary key autoincrement,
>    "ANP_SEGMENTNO" integer not null,
>    "ANH_PRC_id" bigint,
>    "ANP_VALUE" blob,
>    constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
>    "ANHXT" ("id") on update cascade on delete cascade deferrable
>    initially deferred)
>
>
> even better, use a table specific for the blob:
>
>   CREATE TABLE the_blob (
>     id integer primary key,
>     content blob
>   )
>
> Best of all, use two databases: one for the metadata (the ANHXT and PRCXT
> tables) and another for the blobs. Open the metadata database and attach
> the blob database.
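[Editor's note: a minimal Python sketch of the two-database pattern described above, using the standard sqlite3 module. The table and attachment names are illustrative; ":memory:" stands in for real file paths.]

```python
import sqlite3

# Metadata database opened first; the blob database is attached to it.
# In practice these would be two separate files on disk.
meta = sqlite3.connect(":memory:")
meta.execute("ATTACH DATABASE ':memory:' AS the_blob")

# Blobs live only in the attached schema, keeping the metadata file small.
meta.execute("CREATE TABLE the_blob.blobs (id integer primary key, content blob)")
meta.execute("INSERT INTO the_blob.blobs (content) VALUES (?)", (b"payload",))

stored = meta.execute("SELECT content FROM the_blob.blobs").fetchone()[0]
```

Queries can then join across both schemas, while deletes against the blob file never touch the metadata file's pages.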
>
> If you use a foreign key constraint to point to the blob table, don't use
> WAL mode.
> If you want/need WAL mode, use a plain integer column holding the blob
> table's id and manage updates and deletes yourself.
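[Editor's note: a sketch of the second option above, assuming hypothetical table names: the metadata row stores a plain integer id instead of a declared foreign key, so cascading deletes must be done by hand.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS the_blob")
con.execute("CREATE TABLE the_blob.blobs (id integer primary key, content blob)")
# blob_id is just an integer, not a foreign key, so no cross-database
# constraint exists and WAL mode remains available.
con.execute("CREATE TABLE prc (id integer primary key, blob_id integer)")

cur = con.execute("INSERT INTO the_blob.blobs (content) VALUES (?)", (b"x" * 1024,))
con.execute("INSERT INTO prc (blob_id) VALUES (?)", (cur.lastrowid,))

# Deleting the metadata row must also delete the blob manually:
con.execute("DELETE FROM the_blob.blobs WHERE id = "
            "(SELECT blob_id FROM prc WHERE id = 1)")
con.execute("DELETE FROM prc WHERE id = 1")
remaining = con.execute("SELECT count(*) FROM the_blob.blobs").fetchone()[0]
```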
>
> Set these pragmas for the blob schema before creation:
>
> pragma the_blob.auto_vacuum = incremental;
> pragma the_blob.page_size = 65536;
> pragma the_blob.cache_size = 1024; -- in pages: 1024 * 64 KiB = 64 MB.
>                                    -- Must be re-set on every open/attach.
>
> Depending on your insert/update/delete ratios, there are other patterns
> that may fit better.
>
> > On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter <kbille...@gmail.com>
> wrote:
> >
> > > On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
> > >
> > > > I have been using the FAT32 file system. I have a database that is
> > > 4 GB in size.
> > >
> > > Er... I'm surprised there aren't more problems due to 4GB being the max
> > > file
> > > size supported by fat32.  Any chance to change it to exFAT?
> > >
>
> Don't use FAT32. Use your native fs, ntfs, ufs2, hfs+, hammer2, ext4,....
>
> > > K
>
> --
> Eduardo <eduardo.mor...@mobelservices.com>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>