Hello!

On Thursday 18 March 2010 20:18:10 P Kishor wrote:
> Alexey's issue is that for the same query, his compressed file is
> slower than the non-compressed file.

The 'file_text_content' table is compressed and fast to query. But the 
virtual table 'file_text' is slow for a count(*) query.

The SQL command
CREATE VIRTUAL TABLE file_text USING fts3(content, meta, TOKENIZE icu en_US);

produces this database schema:

CREATE VIRTUAL TABLE file_text USING fts3(content, meta, TOKENIZE icu en_US);
CREATE TABLE 'file_text_content'(docid INTEGER PRIMARY KEY, 'c0content' blob, 
'c1meta' blob);
CREATE TABLE 'file_text_segdir'(level INTEGER,idx INTEGER,start_block 
INTEGER,leaves_end_block INTEGER,end_block INTEGER,root BLOB,PRIMARY KEY(level, 
idx));
CREATE TABLE 'file_text_segments'(blockid INTEGER PRIMARY KEY, block BLOB);

The virtual table interface enumerates rows by scanning the full table 
content instead of using an index scan. This could be optimized, but I don't 
know how to do it.
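One possible workaround, sketched below, is to count rows in the shadow table 
'file_text_content' directly: it is an ordinary rowid table, so counting it 
avoids the full virtual-table scan. This is a minimal sketch assuming a 
SQLite build with FTS3 enabled and using the default tokenizer instead of 
ICU (which may not be compiled into every build):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Default "simple" tokenizer here; the original schema used "tokenize icu
# en_US", which requires an ICU-enabled build.
conn.execute("CREATE VIRTUAL TABLE file_text USING fts3(content, meta)")
conn.executemany(
    "INSERT INTO file_text (content, meta) VALUES (?, ?)",
    [("first document", "a"), ("second document", "b"), ("third document", "c")],
)

# Slow form: count(*) on the virtual table forces FTS3 to enumerate every row.
slow = conn.execute("SELECT count(*) FROM file_text").fetchone()[0]

# Workaround: count rows in the underlying shadow table, an ordinary rowid
# table, without touching the compressed document content.
fast = conn.execute("SELECT count(*) FROM file_text_content").fetchone()[0]

print(slow, fast)  # both report the same number of rows
```

The shadow table is an implementation detail of FTS3, so this relies on the 
'file_text_content' naming convention staying stable across SQLite versions.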

Best regards, Alexey Pechnikov.
http://pechnikov.tel/
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
