I created multiple FTS virtual tables and inserted data into specific
tables based on ID. The FTS tables used the content= option to specify that
the text was stored elsewhere. Something like "INSERT INTO FTSBUCKET# (...)
..." where # was computed from the rowid to spread the load around.

This may not be an ideal or workable solution for you; I only suggest it as
an idea. It actually worked out quite well for my use case, as I often only
needed a predictable subset of the data, but YMMV.
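
Roughly, the idea was something like the sketch below. This is only a
simplified illustration: the table and column names and the bucket count of
4 are made-up examples here, not my real schema.

  -- External-content FTS4 buckets; the real text lives in an ordinary
  -- table (called "pages" here purely for illustration).
  CREATE VIRTUAL TABLE ftsbucket0 USING fts4(title, body, content="pages");
  CREATE VIRTUAL TABLE ftsbucket1 USING fts4(title, body, content="pages");
  -- ...and so on, one per bucket.

  -- Route each row to a bucket computed from its rowid, e.g. rowid % 4,
  -- so the index (and the work of building it) is spread around:
  INSERT INTO ftsbucket1(docid, title, body)
    SELECT rowid, title, body FROM pages WHERE rowid % 4 = 1;

  -- A search then queries each bucket (or only the buckets you need)
  -- and merges the results:
  SELECT docid FROM ftsbucket1 WHERE ftsbucket1 MATCH 'sqlite';
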
On Apr 29, 2015 2:39 AM, "Artem" <devspec at yandex.ru> wrote:

> > Personally I don't see it as a bug. A limitation, yes. A different
> > algorithm that requires less RAM would remove or change the limit.
>
> > I'll be trying some sample data tomorrow (if time permits) to see if I
> > can come up with any ideas.
>
> > I described an approach in a previous email where I divided my FTS index
> > into 53 partitions. It means running more queries, but maybe something
> > like that could serve as a temporary solution.
>
> What queries (partitions) do you mean? I tried to build the FTS index
> with a LIMIT clause, like:
> INSERT INTO pages_fts(title, body) SELECT title, body FROM pages LIMIT 0,1000000
> INSERT INTO pages_fts(title, body) SELECT title, body FROM pages LIMIT 1000000,2000000
> etc.
>
> It was OK up to 600 million rows; after that it failed with a
> Segmentation Fault error (on Ubuntu).
>
> > On Apr 29, 2015 1:56 AM, "Artem" <devspec at yandex.ru> wrote:
>
> >> So, is that a real bug in SQLite, and how can it be fixed?
> >> I really need to create huge FTS indexes like that;
> >> I have 32 GB of memory for that.
> >>
> >> Many thanks for your attention.
> >>
> >> > On 04/29/2015 05:27 AM, Artem wrote:
> >> >> Hello, Tim.
> >> >>
> >> >> You wrote on 29 April 2015 at 1:21:00:
> >> >>
> >> >>> On 28 Apr 2015 at 23:14, Artem <devspec at yandex.ru> wrote:
> >> >>>>> How about trying the sqlite3.exe command-line utility? Put your
> >> >>>>> SQL for that operation in a text file, launch the program, open
> >> >>>>> the database, then read in the SQL file with the .read command.
> >> >>>>> If the error occurs, then it is possibly SQLite; if not, then it
> >> >>>>> is probably something else.
> >> >>>> I tried it, and it failed.
> >> >>>>
> >> >>>> Console log:
> >> >>>>
> >> >>>> f:\Suggests\test>sqlite3.exe single.db
> >> >>>> SQLite version 3.8.9 2015-04-08 12:16:33
> >> >>>> Enter ".help" for usage hints.
> >> >>>> sqlite> .read test.sql
> >> >>>> Error: near line 1: out of memory
> >> >>> That's not a segfault, though, is it.
> >> >> When I did the same in the Linux version of SQLite, I saw
> >> >> the "Segmentation Fault" error.
> >>
> >> > Maybe something to do with the optimistic allocation strategy Linux
> >> > uses. Perhaps malloc() returned non-NULL but then a segfault occurred
> >> > when it first tried to access the pages. From the man-page:
> >>
> >> >     By default, Linux follows an optimistic memory allocation
> >> >     strategy. This means that when malloc() returns non-NULL
> >> >     there is no guarantee that the memory really is available.
> >>
> >> > Or maybe the OOM killer took out the process. Or something.
> >>
> >> > Dan.
> >>
> >>
> >>
> >> --
> >> Best regards,
> >>  Artem                          mailto:devspec at yandex.ru
> >>
>
