formed fired by
button press), the first one successfully ended writing.
So is such behavior documented and safe to rely on, or should I still avoid
keeping such "live" statements in a db-shared environment?
Platform: Windows
Thanks
Max
s3, my second thread is set to call sqlite3_interrupt after a 1-second
sleep, and different tests confirm his explanation (including phrase
search and mask search); everything works fine.
Max
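The test above can be sketched in Python, whose sqlite3 module exposes sqlite3_interrupt() as Connection.interrupt(). This is a minimal sketch, not the original test code; the query and timings are made up.

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)

# A second thread interrupts the long-running query after a short sleep.
threading.Timer(0.2, conn.interrupt).start()

try:
    # A deliberately long query: count 100 million rows of a recursive CTE.
    conn.execute(
        "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x + 1 FROM c"
        " LIMIT 100000000) SELECT count(*) FROM c").fetchone()
    interrupted = False
except sqlite3.OperationalError:
    # SQLITE_INTERRUPT surfaces here as OperationalError("interrupted").
    interrupted = True

print(interrupted)  # → True
```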
the fts correct or confirm this?
Max
On Fri, Feb 19, 2010 at 3:18 AM, Simon wrote:
> I am using sqlite3 with an FTS3 index for a software running on an iPod
> Touch.
>
> I have a text field that launches a full text search query at every key
> press.
>
> There is a huge
Hello, Jérôme
Nice to hear you finally joined us with this really interesting discussion )
>
> To Max Vlasov:
>
> > in sorted order to sqlite base other 5 minutes, so about 10 minutes it
> > total. First 5 minutes was possible since we exchange only offsets,
> > not d
elps and that you should take into account not
only the number of rows, but also the total index size (quote from
http://geomapx.blogspot.com/2009/11/degradation-of-indexing-speed.html: "As
we can see, the index creation speed degradation is a result of the index
size exceeding the SQLite page cache
d index. Still good,
such extreme cache method can work in many cases I think.
Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
a daily basis. So in his case any way that
leads to the data being placed inside the sqlite db and indexed properly is ok.
The problem is that a couple of hours on a daily basis is a big price to pay.
Jerome can correct me, but he hasn't added anything new to this discussion
yet; I hope he will.
Nice approach wi
data. The db is probably already exclusively locked while
CREATE INDEX is in progress, so having a temporary array accessing and storing,
for example, file offsets of particular records should not be a problem.
Max
On Sat, Feb 13, 2010 at 5:00 PM, Jérôme Magnin wrote:
> Hi,
>
> This post is
Y was added, this
explicit index was used.
As for the usefulness of this trick, I think it really can be useful when
the application wants to load one long "virtual" list of data, actually
loading only rowids and optionally querying the full data record for visible rows.
I think in this c
I don't know what one should do to apply this trick in
complex queries, but I hope it is possible.
Max
e transaction for a long time is not a good idea, so
consider optimizing your write operations.
Max
ke
preparations with selects, temp tables and so on. Any UPDATE or INSERT will
lead to a RESERVED lock and suspend process B's db access.
Max
xt
you can now access each row by "index", since the rowids are guaranteed to be
consecutive numbers. So the access in your binary search is:
SELECT Text FROM Table2 WHERE rowid=?
Max
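The rowid-as-array-index trick above can be sketched as follows. The table and column names (Table1, Table2, Text) follow the message; the data and the search target are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1(Text TEXT)")
conn.execute("CREATE TABLE Table2(Text TEXT)")
conn.executemany("INSERT INTO Table1(Text) VALUES (?)",
                 [("plum",), ("apple",), ("pear",)])

# Copying in sorted order into a freshly created table assigns
# consecutive rowids 1..N.
conn.execute("INSERT INTO Table2(Text) SELECT Text FROM Table1 ORDER BY Text")

def nth(i):
    # rowid acts as a 1-based array index
    return conn.execute("SELECT Text FROM Table2 WHERE rowid=?",
                        (i,)).fetchone()[0]

# Classic binary search over the rowid range.
target = "pear"
lo, hi = 1, conn.execute("SELECT max(rowid) FROM Table2").fetchone()[0]
while lo < hi:
    mid = (lo + hi) // 2
    if nth(mid) < target:
        lo = mid + 1
    else:
        hi = mid
found = nth(lo)
print(found)  # → pear
```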
>
>
>
> One of many ways would be to precompute the min/max into a separate
> table and then query that table whenever you need the min/max.
>
Only if one has full control over how the data changes, which with a format as
widespread as sqlite is almost never the case. I mean wi
your new value is lower than 50 (for
example, 20), you will get a logic error, since the microcode has already
passed values bigger than 20 and there's no way for the microcode to detect
your new requirements and make a conditional reset.
So just always follow the pattern of calls described ea
ng any new file) when a rollback failed,
leaving the -journal file in place, and getting a disk I/O error on the first
sqlite3_prepare after this rollback. But I'm still not sure it's related to
winDelete.
Max
On Thu, Jan 14, 2010 at 9:04 PM, Dave Dyer wrote:
>
> There is a known problem,
y still
disappear afterwards.
Max
ng one particular
problem related to you, but if you find time to make things simpler, it
will be to everyone's benefit.
Max
his will always work as expected, but
compare this with other "undocumented", "subject to change" variants and
choose the best )
Max
On Tue, Jan 12, 2010 at 5:28 AM, ve3meo wrote:
> Max Vlasov writes:
>
> For 3.5.4 and 3.6.17 which executed in ~240s:
> "order", "from", "detail"
>
> And for 3.6.20 which executed the same query in ~2500s:
> "order","from",
10:1.
>
Is the message posted 11 hours ago about the same issue? (
http://www.mail-archive.com/sqlite-users@sqlite.org/msg49650.html)
Anyway I still suggest the same (see in the thread) - compare VDBE code
sequences
Max
On Mon, Jan 11, 2010 at 4:17 AM, Hub Dog wrote:
> I think I found a performance regression bug in sqlite 3.6.18. A SQL query will
> cost 1800 seconds to return the query result with sqlite 3.6.18 and with
> the
> previous version it only cost about 170 seconds.
>
I have a suggestion for you. If you'r
On Mon, Jan 11, 2010 at 12:56 AM, D. Richard Hipp wrote:
>
> On Jan 10, 2010, at 4:50 AM, Max Vlasov wrote:
>
> > Documentation says that INTERSECT implemented with temporary tables
> > either
> > in memory or on disk. Is it always the case?
>
> No.
>
> If
;s no such optimization, is it possible to implement it in the sqlite
engine? I know that with a very complex query with many selects it would be
a hard task to recognize such a specific case, but maybe it is easier than it
seems.
Thanks,
Max
Thanks for the answers. In the first place I wanted to use rowid to save
space (since rowids always exist). After reading the replies I changed the
declaration of ID to one without AUTOINCREMENT and manually fill in
consecutive values starting from the current max(rowid)+1. So rowids are
still used, but now
On Thu, Jan 7, 2010 at 3:56 PM, Igor Tandetnik wrote:
> Max Vlasov wrote:
> > I have a query "INSERT ... SELECT" and after it is performed I have to
>
> If by autoincrement you mean a column actually declared with the
> AUTOINCREMENT keyword, then the next ID
I have a query "INSERT ... SELECT" and after it is performed I have to
store the range of rowids (autoincrement) of the inserted rows. While
max(rowid) for the right bound seems ok, assuming max(rowid)+1 for the left
bound (before the query) depends on whether there were deletes from
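One way to capture such a range without looking at max(rowid) beforehand is last_insert_rowid() together with changes(). This is a hedged sketch: it assumes a single connection, no triggers, and that the rows of one INSERT...SELECT get consecutive rowids; the table names are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Src(v TEXT);
    CREATE TABLE Dst(id INTEGER PRIMARY KEY, v TEXT);
    INSERT INTO Src(v) VALUES ('a'), ('b'), ('c');
""")

conn.execute("INSERT INTO Dst(v) SELECT v FROM Src")

# last_insert_rowid() is the rowid of the last row inserted, and changes()
# is the number of rows the statement added, so the left bound follows:
last, n = conn.execute("SELECT last_insert_rowid(), changes()").fetchone()
first = last - n + 1
print(first, last)  # → 1 3
```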
side to choose the right cache size if
the amount of data that I plan to transfer through temp tables is
unpredictable. Should I always DELETE or DROP temp table data asap in order
to increase the probability that the disposed memory would be used for
caching in the next SELECT?
Max
On Fri, Jan 1, 2010 at 8:11 PM, Bert Nelsen wrote:
> So SQLite looks at both the database on the disk and in memory?
> Wouldn't that be difficult???
>
I don't think that only the memory can be used. Imagine you can have a very
big transaction, 1,000,000 inserts. As far as I understand the
archi
sults.
So in the example below
sqlite> Create Table [TestTable] ([Value] INTEGER);
sqlite> Begin transaction;
sqlite> INSERT INTO TestTable (Value) VALUES (11);
sqlite> SELECT Count(*) FROM TestTable;
1
sqlite> SELECT Max(rowid) FROM TestTable;
1
sqlite> SELECT * FROM TestTable;
11
s
cific requirements for the inserts like the first one?
I could not find information about this in the documentation.
Thanks
Max
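The shell session above can be reproduced from Python's sqlite3 module; this is a minimal sketch showing that the connection holding the transaction sees its own uncommitted insert, and that a rollback removes it again.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE TestTable(Value INTEGER)")
conn.execute("BEGIN TRANSACTION")
conn.execute("INSERT INTO TestTable(Value) VALUES (11)")

# The connection that made the change sees it before COMMIT:
count_in_txn = conn.execute("SELECT count(*) FROM TestTable").fetchone()[0]

conn.execute("ROLLBACK")
# ...but a rollback removes it again.
count_after = conn.execute("SELECT count(*) FROM TestTable").fetchone()[0]

print(count_in_txn, count_after)  # → 1 0
```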
to "upgrade" anyway. Also, is it hard to compile the current version
of sqlite3.c to a dll with version information in Windows format, without
the need to manually duplicate this information?
Max
On Fri, Dec 25, 2009 at 2:26 AM, Dr. Robert N. Cleaves wrote:
> Thank you very much for your h
e table. Maybe, maybe not, fts1 did something like
> that and it got terribly slow once the fts table had a few tens of
> thousands of documents. The problem was that the tokens were
> distributed across a large portion of the index, so data locality went
> down the drain and every
to
collect data from MATCH, because it may contain irrelevant, non-existing
words (but ironically it would be helpful for collecting "hit" data). If we
know inside xNext that the call comes from real data being appended, then at
least we will be able to maintain a dictionary of words used at least on
WHERE Title LIKE "some%" works as expected (with
case-sensitivity pragma effectively set). I tried to read the technical part
of http://www.sqlite.org/fts3.html document, but could not figure out
whether it is possible to implement this in the cur
On Tue, Dec 22, 2009 at 1:22 PM, Evilsmile wrote:
> Hello,
>
> My sqlite version is 3.5.1 and there are a lot of db corruption in my
> system.
>
>
Please, let us know more about your language/platform
On Fri, Dec 18, 2009 at 6:27 PM, Gianandrea Gobbo wrote:
> I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
> I'm experiencing sometimes a database corruption, and listing some
> tables contents gets me a "SQL error: database disk image is malformed".
> Ok, there can be many
>
> For ex, If I ran for 200,000 inserts, first 20,000 inserts were done in 9
> secs, but last 20,000 inserts (from 180,000th to 200,000) took almost 110
> secs. It is more than 10 times than what it was initially. These results
> were consistent across all iterations of simulation I did.
>
>
I hav
On Wed, Dec 16, 2009 at 9:30 AM, Raghavendra Thodime wrote:
> I did try using batch of transactions with synchronous=OFF PRAGMA set. The
> performance improved slightly. But as db file started to grow larger and
> larger in size, the performance degraded considerably. Is it expected? Or Is
> ther
> You don't need to modify sqlite3 code, but you need to write your own
> code inside your application to do what you want. The sqlite backup
> API, from what I understand, is not designed to solve the problem you
> are trying to solve.
> == Kirshor-- I still don't get it. How can you copy
r ioExtreme. You should
also think about actually turning off the filesystem cache on that file (with
an increase of the sqlite in-memory page cache), and that can require small
adjustments to the SQLite sources.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/SQL
5. Any benchmarks compared to native?
Thanks.
Max.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/-ANN--SQLJet-1.0.0-released-tp25458690p25491910.html
Sent from the SQLite mailing list archive at Nabble.com.
souvik.datta wrote:
> Update set Flag=1 where Filename=;
Check for index presence on the Filename column, and still make the updates
in one large transaction.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/Making-Update-fast-tp25269409p25273296.html
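The advice above (index the Filename column, batch the updates in one transaction) can be sketched like this. Table and column names follow the quoted query; the data and index name are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Files(Filename TEXT, Flag INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO Files(Filename) VALUES (?)",
                 [("file%d.txt" % i,) for i in range(10000)])

# Without this index every UPDATE ... WHERE Filename=? scans the whole table.
conn.execute("CREATE INDEX idx_files_name ON Files(Filename)")

# One large transaction: one commit instead of one per UPDATE.
with conn:  # commits on success, rolls back on error
    conn.executemany("UPDATE Files SET Flag=1 WHERE Filename=?",
                     [("file%d.txt" % i,) for i in range(0, 10000, 2)])

updated = conn.execute("SELECT count(*) FROM Files WHERE Flag=1").fetchone()[0]
print(updated)  # → 5000
```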
ytes to store; an integer will depend on the
size of the value. A compound index will be slower than a single one.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/Index-performance-using-2-integers-vs.-1-float-tp25165036p25170683.htm
orting your
implementation in the ADO.NET Provider. In Silverlight that requires some
adjustments to your code (not much), and the ADO Provider will need to have
parts of System.Data included as stubs.
From your point of view, which parts haven't you ported yet?
-----
Best Regards.
Max Kosenko.
--
V
200 index searches: 29.9375000
iteration through 200 records: 19.7187500
deleting 100 records: 53.7206640
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQLite-3.6.16.C--tp24764742p24795746.html
Sent from the SQLite mailing list
Sorry, test bug in SQLite select test.
http://www.nabble.com/file/p24789308/TestIndex.cs TestIndex.cs
index searches:
20: SQLITE 8.1635400 PERST 3.3406065
200: SQLITE 1:10.6331745 PERST 54.9915975
-
Best Regards.
Max Kosenko.
--
View this message in context:
http
ething that must be much slower (but really Perst and SQLite are so
different that we mostly measured different architectures).
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQLite-3.6.16.C--tp24764742p2477.html
Sent from the SQLite mailing list archive
Dan Kennedy-4 wrote:
> Are you by any chance the author of the report I'm reading?
I'm not the author of the test, nor McObject staff or a representative. But
I can give the author a link to this forum (still insisting that this is
offtopic here) so he can answer himself.
-
Best Regards
license and it's not full-size SQL capable, at least);
that was just a sample to show that it's a mistake to think that pure
managed DBs are always slower.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQ
native version updates by
checking commits and making similar changes.
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQLite-3.6.16.C--tp24764742p24783123.html
ut name change (in case Dr. won't change his mind ;).
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQLite-3.6.16.C--tp24764742p24782845.html
ono can benefit
from managed SQLite. But I think that a managed version can also give the
ability to be more flexible in some tryouts of further improvement and
optimization of SQLite itself.
Max
Tim Anderson-2 wrote:
>
>> I don't know why he insists on that (he actually can answer for
That's a pity. I hoped Dr. could think about somehow supporting your
project.
I don't know why he insists on that (he can answer for himself
here) while there are a lot of SQLite-based projects using that name.
Maybe that's because of your license?
Max.
han SQLite (especially on
embedded platforms). That's not because of C# but because of a different
architecture; still, it shows that there is plenty of room. So please don't
freeze this project - there is high demand for it.
Max
Noah Hart wrote:
>
> Max, I missed posting the rem
there is a reason there are no implementations of C#
>> external
>> to the Mickeysoft world :-)
>>
>> Guess if I had a lot of time to kill I could port it to Delphi...
>>
>> BTW, what's the memory footprint?
>>
>> Fred
>>
>> -O
Fred Williams wrote:
> Hummm... Guess there is a reason there are no implementations of C#
> external to the Mickeysoft world :-)
One of the reasons is true multiplatform support with Mono for the managed
world. Another is a Silverlight DB.
-
Best Regards.
Max Kosenko.
--
Vie
in performance.
Cory Nelson wrote:
>
> On Sat, Aug 1, 2009 at 4:21 AM, Kosenko Max wrote:
>>
>> Seems like I've misunderstood your performance results. And they are
>> 3-5 times
>> slower than original...
>>
>
> This could be for a number of reasons. Fo
Seems like I've misunderstood your performance results. And they are 3-5
times slower than the original...
-
Best Regards.
Max Kosenko.
--
View this message in context:
http://www.nabble.com/ANN%3A--SQLite-3.6.16.C--tp24764742p24768252.html
vement that all tests are passing now.
Max.
Noah Hart wrote:
>
> I am pleased to announce that the C# port is done to the point where
> others can look at it.
>
> The project is located at http://code.google.com/p/sqlitecs
>
> Enjoy,
>
> Noah Hart
>
-
Best
f B-Tree to
eliminate a single b-tree page shouldn't give any speedup.
If you have proven that this trick still works, I will be glad to see a code
sample with benchmarks.
Thanks.
Max.
John Stanton-3 wrote:
>
> Quite wrong. Searching a B-Tree is relatively inexpensive but node
> spli
ccess time. Some of them can easily be 50-100x faster. And that
will give 20-50x faster inserts.
Thank you.
Max.
John Stanton-3 wrote:
> This technique is used extensively in disk cacheing and in maintaining
> file directories with huge numbers of files..
>
> I would expe
ge size is now not less than the cluster size, and that's most of the
time > 4K.
Thanks. Max.
Jay A. Kreibich-2 wrote:
>
> Assuming we have a huge number of data points and that our operations
> are on random rows, it would be possible to quickly develop the
> situation you descri
Doug Fajardo wrote:
> No, I admit I haven't tried this under SQLITE.
>
> Whether this approach will help for the specific application will depend
> on data usage patterns, which we haven't delved into for this application.
> Call me simple: since the main issue is degraded performance with large
John Stanton-3 wrote:
> Why would it not work? It is just adding an extra top level to the
> index. A tried and true method.
It will work, but it won't give a performance benefit. And from my
understanding it will even slow things down.
You can place parts of the index in a different DB and on different
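The mechanics being discussed here (one connection spanning several database files) rest on ATTACH. A minimal sketch, with made-up schema and temporary file paths:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(tmp, "main.db"))
# The attached file becomes visible under the schema name "other".
conn.execute("ATTACH DATABASE ? AS other", (os.path.join(tmp, "other.db"),))

conn.execute("CREATE TABLE main.a(x INTEGER)")
conn.execute("CREATE TABLE other.b(x INTEGER)")
conn.execute("INSERT INTO main.a VALUES (1)")

# Cross-database statements work transparently once attached:
conn.execute("INSERT INTO other.b SELECT x FROM main.a")
val = conn.execute("SELECT x FROM other.b").fetchone()[0]
print(val)  # → 1
```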
I forgot to say about hash...
My personal choice will be MurmurHash2 64 bit function
http://murmurhash.googlepages.com/
http://en.wikipedia.org/wiki/MurmurHash2 - lots of implementations here
It's fast (even in managed implementations), has good characteristics, and
is free.
Don't use CRC64...
P.S. You stil
Matthew O'Keefe wrote:
> We wanted to post to the mailing list to see if there are any obvious,
> first-order things we can try to improve performance for such a large
> table.
The problem with slow inserts generally speaking lies in the problem of
cache miss.
Imagine that each new insert in ind
Have you ever tested such a proposal?
I believe it doesn't work.
Doug Fajardo wrote:
>
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is separate table. When doing a
> search, the program calculates the hash and accesses reads
Ribeiro, Glauber wrote:
> If it's all reads, you're fine, but if anyone is writing, all others are
> blocked until that transaction is finished.
And actually SQLite can't read at exactly the same time in several threads
if it's compiled as thread-safe.
--
View this message in context:
h
Michael Sync wrote:
>
> Is there any way to support SQLLite in Silverlight? Let's say I have a
> SQLLite database in Isolated Storage and want to connect that database
> from
> Silverlight without using any service.
>
> Do we need to create our own database driver? I'm also a developer but I'm
gerpux wrote:
>
> I've heard that the guys at db4o said that, under certain
> circunstances, db4o is 500x faster than sqlite:
> Is this because of the jdbc driver?
> What would be a more realistic measure? (db4o is an object database,
> not a relational one)
> They are using the poleposition ben
Christian Smith wrote:
> Max Barry uttered:
>
> My database is permanently locked, and I've spent two fruitless days
> trying to unlock it.
>
> You haven't said what sort of box this is. I guess a generic Unix. If Linux,
> you'll probably have fu
box I don't have root access and can't reboot it.
What else can I try?
Thanks for any help,
Max.
-
To unsubscribe, send email to [EMAIL PROTECTED]
-
; >
> > Is this still true?
> >
>
> Yes it is.
Thanks a lot - I will check my code again. I am pretty sure the db
connection has not been closed at that time, and I don't use a prepared
but finalized statement.
Will report back when I find my bug.
Max
all the threads running.
Any idea?
Thanks a lot,
Max
ncical support of this work.
P.S. To my surprise, I've failed to subscribe and post a message to this
mailing list from my primary mail address [EMAIL PROTECTED]. Does
anyone know how to resolve this problem?
--
Max Lapan <[EMAIL PROTECTED]>, +7(0855)296471, ICQ: 233841810
PGP Fingerprint: 0C
> hello friends,
> I am using sqlite in the terminal...
> is there any downloadable version for Linux which is available as a GUI?
Try this :
http://sourceforge.net/projects/sqlitebrowser/
The homepage is :
sqlitebrowser.sourceforge.
Does anyone have a link to a small piece of software, like an address book,
written in ANSI C with sqlite?
Thanks 1K
Ciao Max
--
According to some authoritative texts on aeronautical engineering,
the bumblebee cannot fly, because of the shape and
weight of its body in relation to its wing surface.
But the
he right command line when I use gcc is:
gcc pippo.c -o pippo -L/full/path/to/directory/with/sqlite3lib/ -lsqlite3
Cheers
Ciao Max
> You need to link against the SQLite library:
> $ gcc pippo.c -o pippo -lsqlite
Nothing; the result is the same...
Ciao Max
msg'
/tmp/ccH7jM5Y.o(.text+0x117): undefined reference to `sqlite3_close'
/tmp/ccH7jM5Y.o(.text+0x14c): undefined reference to `sqlite3_exec'
/tmp/ccH7jM5Y.o(.text+0x180): undefined reference to `sqlite3_close'
collect2: ld returned 1 exit status
Why?
Sorry for the question, but I