Fwd: [sqlite] assertion failing in pager.c :(

2005-04-11 Thread Vineeth R Pillai
Note: forwarded message attached. Could anybody please look into this matter and provide me with some help? :-) I have posted this once before but couldn't get any reply. Hoping for a response from the sqlite techies. Regards, Vineeth

Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread Gé Weijers
Jonathan Zdziarski wrote:
> D. Richard Hipp wrote:
>> Are you sure your users are not, in fact, filling up their disk
>> drives?
>
> nope, plenty of free space on the drives. The 50MB limit seems to be
> very exact as well...exactly 51,200,000 bytes.

I'm stumped too. Assuming your

[sqlite] failed to work when running in non-ASCII directory?

2005-04-11 Thread chan wilson
Hi, I found this problem a long time ago but cannot figure out why: every time I put sqlite (no matter whether it is sqlite3.exe, sqlite3.dll, or other wrappers like sqlitedb.dll or the litex dll) in a directory that contains non-ASCII characters, it fails to construct a connection. But it works well in those
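
The usual suspect, assuming the wrappers hand the path through unchanged: sqlite3_open() expects its filename in UTF-8, while Windows shells and wrappers often pass the local ANSI code page, so any non-ASCII byte in the directory name is mangled before SQLite sees it. A minimal repro sketch (directory and file names are made up):

    # create a directory whose name contains a non-ASCII character
    mkdir "tëst" && cd "tëst"
    # fails if whatever calls sqlite3_open() passed the path in a non-UTF-8 encoding
    sqlite3 test.db "CREATE TABLE t(x INTEGER);"

If the same commands succeed from a UTF-8 environment, the encoding of the path is the culprit rather than SQLite itself.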

Re: [sqlite] sqlite performance problem

2005-04-11 Thread Maksim Yevmenkin
Robert,

> [snip]
>
> > i said i print these rows to /dev/null too in my perl code. plus the
> > perl code does some other things such as joining these rows with other
> > hashes and summing the numbers.
>
> That's fine. I was merely trying to account for the 50% speed difference
> between the

Re: [sqlite] sqlite performance problem

2005-04-11 Thread Maksim Yevmenkin
Robert,

> > time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> > 26.15u 0.59s 0:27.00 99.0%
> >
> > time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> > 26.04u 0.61s 0:26.91 99.0%
> >
> > time sqlite3 db 'select e from data where a <= 18234721' >

RE: [sqlite] sqlite performance problem

2005-04-11 Thread Robert Simpson
Let's recap ...

> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.15u 0.59s 0:27.00 99.0%
>
> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.04u 0.61s 0:26.91 99.0%
>
> time sqlite3 db 'select e from data where a <= 18234721' > /dev/null
>

Re: [sqlite] sqlite performance problem

2005-04-11 Thread Maksim Yevmenkin
Robert,

> > i guess, i can believe this. however it's pretty disappointing to get
> > 50% improvement on 30 times less dataset :(
> >
> > but how do you explain this?
> >
> > sqlite> .schema data
> > CREATE TABLE data
> > (
> >   a INTEGER,
> >   b INTEGER,
> >   c CHAR,
> >
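
One quick experiment, assuming the range in the WHERE clause matches a large share of the table: compare the indexed plan against a forced full scan. Prefixing the column with unary + disqualifies that term from index use, so the pair of timings isolates what the index actually costs or saves:

    # indexed path, as in the timings above
    time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
    # unary + on the column makes SQLite skip the index and scan the table
    time sqlite3 db 'select n1 from data where +a <= 18234721' > /dev/null

If the +a variant is faster, the index is triggering a table-row lookup per match and a sequential scan is the better plan for this query.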

RE: [sqlite] sqlite performance problem

2005-04-11 Thread Robert Simpson
> -----Original Message-----
> From: Maksim Yevmenkin [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 11, 2005 9:59 AM
> To: Christian Smith
> Cc: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite performance problem
>
> i guess, i can believe this. however it's pretty disappointing to get
>

Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread Stefan Finzel
What about the OS shell's limit? Look at the commands limit/ulimit/unlimit.

G. Roderick Singleton wrote:
> On Mon, 2005-04-11 at 12:05 -0400, Jonathan Zdziarski wrote:
>> D. Richard Hipp wrote:
>>> Are you sure your users are not, in fact, filling up their disk
>>> drives?
>> nope, plenty of free space
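
Worth a look, since 51,200,000 is exactly 50000 * 1024: bash expresses ulimit -f in 1024-byte blocks (some shells use 512-byte units), so a limit of 50000 in the daemon's environment would cap files at precisely the size being reported. A quick check, assuming a bash-like shell:

    # show the file-size limit in effect for this shell and its children
    ulimit -f
    # 50000 here means 50000 * 1024 = 51,200,000 bytes -- the exact ceiling observed
    # raise it (needs sufficient privilege, and must happen in the daemon's environment)
    ulimit -f unlimited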

Re: [sqlite] sqlite performance problem

2005-04-11 Thread Maksim Yevmenkin
Christian, thanks for the reply.

> > i'm having a strange performance problem with sqlite-3.2.0. consider the
> > following table
> >
> > [snip]
> >
> > now the problem:
> >
> > 1) if i do a select with an index it takes 27 sec. to get 92 rows
> >
> > > time sqlite3 db 'select n2 from data where a

RE: [sqlite] 50MB Size Limit?

2005-04-11 Thread Brad DerManouelian
The mail system likely has a quota. Check this link:
http://www.webservertalk.com/archive280-2004-6-280358.html

-----Original Message-----
From: Jonathan Zdziarski [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 11, 2005 12:27 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] 50MB Size Limit?

Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread Jonathan Zdziarski
G. Roderick Singleton wrote:
> quotas?

That crossed my mind, but all of these databases are being stored in system space (/usr/local/var/dspam) and owned by the mail system.

Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread G. Roderick Singleton
On Mon, 2005-04-11 at 12:05 -0400, Jonathan Zdziarski wrote:
> D. Richard Hipp wrote:
> > Are you sure your users are not, in fact, filling up their disk
> > drives?
>
> nope, plenty of free space on the drives. The 50MB limit seems to be
> very exact as well...exactly 51,200,000 bytes. I'm

Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread D. Richard Hipp
On Mon, 2005-04-11 at 11:28 -0400, Jonathan Zdziarski wrote:
> Greetings!
>
> I couldn't find any information on this via google or sqlite.org, so I'm
> hoping someone can answer this for me.
>
> We support SQLite v2.x and v3.x as storage backends in DSPAM. I've had a
> lot of users complain

[sqlite] 50MB Size Limit?

2005-04-11 Thread Jonathan Zdziarski
Greetings! I couldn't find any information on this via google or sqlite.org, so I'm hoping someone can answer this for me. We support SQLite v2.x and v3.x as storage backends in DSPAM. I've had a lot of users complain that they get 'Database Full' errors once their file hits 50MB in size, and

Re: [sqlite] High throughput and durability

2005-04-11 Thread Andrew Piskorski
On Mon, Apr 11, 2005 at 03:59:56PM +0200, Thomas Steffen wrote:
> I have a problem where I need both a high throughput (10%
> write/delete, 90% read) and durability. My transactions are really
> simple, usually just a single write, delete or read, but it is
> essential that I know when a

Re: [sqlite] beat 120,000 inserts/sec

2005-04-11 Thread Thomas Steffen
On Apr 11, 2005 4:06 PM, Christian Smith <[EMAIL PROTECTED]> wrote:
> The test given is clearly CPU bound. All the big numbers are from people
> with big CPUs, with equally big RAM performance as well, probably.

I have done a few database tests recently, and I often found them to be CPU bound, at
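
A rough way to tell CPU bound from disk bound on one's own hardware, sketched with a made-up table and row count: generate the inserts as one transaction so fsync() happens only once, then compare the CPU time that time(1) reports against the elapsed time:

    # generate 100,000 inserts wrapped in a single transaction
    awk 'BEGIN {
        print "CREATE TABLE t(a INTEGER, x REAL, y REAL, z REAL);"
        print "BEGIN;"
        for (i = 0; i < 100000; i++)
            printf "INSERT INTO t VALUES(%d, %f, %f, %f);\n", i, rand(), rand(), rand()
        print "COMMIT;"
    }' > inserts.sql
    # if user+sys is close to real, the run is CPU bound; a large gap points at the disk
    time sqlite3 bench.db < inserts.sql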

Re: [sqlite] High throughput and durability

2005-04-11 Thread Thomas Steffen
On Apr 11, 2005 4:17 PM, Christian Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 11 Apr 2005, Thomas Steffen wrote:
> > Is it possible to delay the fsync(), so that it
> > only occurs after 10 or 100 transactions?
>
> No.

Thought so, because the transaction log seems to happen at a low level, close

Re: [sqlite] High throughput and durability

2005-04-11 Thread Christian Smith
On Mon, 11 Apr 2005, Witold Czarnecki wrote:
> rsync could be better.

Neither would do a good job if the database contents change while you're copying it. There be pain and corruption. The safest way to take a snapshot is to use the sqlite shell .dump command, and feed the output of that to
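
Completing the recipe (file names here are illustrative): dump the live database to SQL text and rebuild the replica from that, rather than copying or rsyncing the raw file:

    # snapshot as portable SQL text, then reconstruct a replica from it
    sqlite3 live.db .dump > snapshot.sql
    sqlite3 replica.db < snapshot.sql

    # or as one pipeline
    sqlite3 live.db .dump | sqlite3 replica.db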

Re: [sqlite] High throughput and durability

2005-04-11 Thread Christian Smith
On Mon, 11 Apr 2005, Thomas Steffen wrote:
> I have a problem where I need both a high throughput (10%
> write/delete, 90% read) and durability. My transactions are really
> simple, usually just a single write, delete or read, but it is
> essential that I know when a transaction is committed to disk,

Re: [sqlite] High throughput and durability

2005-04-11 Thread Witold Czarnecki
rsync could be better.

Best Regards,
Witold

> > And is there a way to automatically replicate the database to a second
> > system?
>
> Copying the database file should give you an exact replica.

Re: [sqlite] High throughput and durability

2005-04-11 Thread Cory Nelson
On Apr 11, 2005 6:59 AM, Thomas Steffen <[EMAIL PROTECTED]> wrote:
> I have a problem where I need both a high throughput (10%
> write/delete, 90% read) and durability. My transactions are really
> simple, usually just a single write, delete or read, but it is
> essential that I know when a

Re: [sqlite] beat 120,000 inserts/sec

2005-04-11 Thread Christian Smith
On Sat, 9 Apr 2005, Al Danial wrote:
> On Apr 9, 2005 12:43 AM, Andy Lutomirski <[EMAIL PROTECTED]> wrote:
> > Al Danial wrote:
> > > The attached C program measures insert performance for populating
> > > a table with an integer and three random floating point values with
> > > user defined

[sqlite] High throughput and durability

2005-04-11 Thread Thomas Steffen
I have a problem where I need both a high throughput (10% write/delete, 90% read) and durability. My transactions are really simple, usually just a single write, delete or read, but it is essential that I know when a transaction is committed to disk, so that it would be durable after a crash. I
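
The standard lever here, sketched with an illustrative schema: leave synchronous at FULL so COMMIT only returns once the data is synced to disk, and batch independent writes into one transaction so a single fsync() covers all of them. That is the "delay the fsync" idea asked about later in the thread, done at the application level rather than inside SQLite:

    sqlite3 app.db <<'EOF'
    -- FULL (the default) makes COMMIT wait until the data is synced to disk
    PRAGMA synchronous = FULL;
    CREATE TABLE events(id INTEGER, note TEXT);
    -- one transaction, one fsync(), three durable operations
    BEGIN;
    INSERT INTO events VALUES(1, 'first write');
    INSERT INTO events VALUES(2, 'second write');
    DELETE FROM events WHERE id = 1;
    COMMIT;
    EOF

The cost is latency on the earliest write in each batch; the gain is that throughput is no longer limited to one fsync() per logical operation.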