On Thu, Jun 11, 2009 at 1:46 AM, Florian Weimer wrote:
> That's 500 commits per second, right? If you need durability, you can
> get these numbers only with special hardware.
>
Not really; you don't need special hardware (if you don't use SQLite).
The use case that Robel
SQLite has been in the iPhone since the beginning. I think it is used by almost
every iPhone application that stores data in a structured way: Contacts, call
history, Google Maps, Safari bookmarks and history, etc., except the iPod
application. At least that's what I remember since I browsed the file
But why don't you use
SELECT x FROM y WHERE y.x LIKE ?;
and bind the first parameter to "%SomeText%" instead of "SomeText" like
before?
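The suggestion above can be sketched with Python's sqlite3 module (the table and data here are invented for illustration). The point is that the % wildcards travel inside the bound value, so the SQL text keeps a plain ? placeholder:

```python
import sqlite3

# Hypothetical in-memory table standing in for the poster's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE y (x TEXT)")
conn.executemany("INSERT INTO y (x) VALUES (?)",
                 [("SomeText",), ("xxSomeTextxx",), ("Other",)])

# The wildcards go into the bound parameter, not into the SQL string.
rows = conn.execute("SELECT x FROM y WHERE x LIKE ?",
                    ("%SomeText%",)).fetchall()
print([r[0] for r in rows])  # both rows containing "SomeText"
```

Binding the wildcards this way also keeps the statement text constant, so the compiled query can be reused for any search string.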
-Original Message-
From: Slater, Chad [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 06, 2006 6:40 PM
To: sqlite-users@sqlite.org
Subject: [sqlite]
You should wrap the inserts in a transaction. Otherwise every insert is a
separate ACID transaction, which costs at least two disk rotations.
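A minimal sketch of that advice with Python's sqlite3 module (the table and row count are invented): one explicit transaction around all the inserts means a single sync at COMMIT instead of one per INSERT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # an illustrative database
conn.execute("CREATE TABLE t (v INTEGER)")

# The "with" block wraps all inserts in one transaction:
# sqlite3 commits on clean exit, rolls back on exception.
with conn:
    for i in range(1000):
        conn.execute("INSERT INTO t (v) VALUES (?)", (i,))

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```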
-Original Message-
From: Thom Ericson [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 06, 2006 10:18 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Slow performance -
How long does one INSERT take? Do you have long transactions with INSERTs?
If you have one INSERT at a time and it doesn't take too long, and you still
have reader starvation on the SELECTs, the only solution I see
is to queue requests and make sure that they are served on a
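The truncated suggestion above (queue the write requests and serve them in order) could look like this minimal sketch; the queue, table, and job count are all invented for illustration:

```python
import os
import queue
import sqlite3
import tempfile
import threading

# One thread owns the write connection and serves requests strictly in
# FIFO order, so readers on other connections never starve behind
# scattered ad-hoc writers.
write_q = queue.Queue()
db_path = os.path.join(tempfile.mkdtemp(), "queued.db")

def writer(path, n_jobs):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    conn.commit()
    for _ in range(n_jobs):
        sql, params = write_q.get()  # blocks until a request arrives
        conn.execute(sql, params)
        conn.commit()
    conn.close()

t = threading.Thread(target=writer, args=(db_path, 3))
t.start()
for i in range(3):
    write_q.put(("INSERT INTO log (msg) VALUES (?)", ("event %d" % i,)))
t.join()
```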
sqlite_master table tells you everything about every object in the database
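A small sketch of querying sqlite_master from Python (the example table and index are invented); it lists every table, index, view, and trigger along with the SQL that created it, which also answers the table-existence question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON contacts(name)")

# Every schema object shows up as a row in sqlite_master.
for type_, name in conn.execute(
        "SELECT type, name FROM sqlite_master ORDER BY name"):
    print(type_, name)

# Testing whether a specific table exists:
exists = conn.execute(
    "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
    ("contacts",),
).fetchone() is not None
print(exists)  # True
```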
-Original Message-
From: Olaf Beckman Lapré [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 04, 2006 9:30 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Testing for table existence?
Thanks, I implemented
You're right, Darren, but the problem is that we're not in a DB class. We
cannot tell people who have a solution to their problems that "your
solution is wrong; you need to reimplement your stuff to make it right".
Most SQLite users are practical people, and all they want is their
problem to be
. Especially if this would involve custom sync
mechanisms designed for each file system.
-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 08, 2006 8:18 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] File locking additions
On 3/7/06, Marian Olteanu
I would say that the best way to access an SQLite database mounted from a
remote file server, concurrently with other processes, is through a database
server. My opinion is that the overhead of file sync and file locking on a
remote file system is higher than the overhead of simple TCP/IP communication.
On 2/20/06, Marian Olteanu <[EMAIL PROTECTED]> wrote:
dos2unix, unix2dos tools will do the conversion
-Original Message-
From: Randall [mailto:[EMAIL PROTECTED]
Sent: Monday, February 20, 2006 1:33 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] import
PS
I can do VBScript ReadLine/WriteLine as a workaround, not too slow...
But I
I would say that the fastest way (in CPU cycles and lines of code) to delete
all tables is to delete the file in which the database is stored.
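A sketch of that approach (paths and table names invented): close every connection first, then remove the database file, plus any -journal/-wal/-shm sidecar files that may sit next to it.

```python
import os
import sqlite3
import tempfile

# Create a throwaway database with a couple of tables.
path = os.path.join(tempfile.mkdtemp(), "scratch.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE a (x INTEGER)")
conn.execute("CREATE TABLE b (y INTEGER)")
conn.close()  # all connections must be closed before removing the file

# "Drop all tables" by deleting the file and its sidecars.
os.remove(path)
for suffix in ("-journal", "-wal", "-shm"):
    if os.path.exists(path + suffix):
        os.remove(path + suffix)
print(os.path.exists(path))  # False
```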
On Thu, 9 Feb 2006, Xavier Noria wrote:
In the schema definition I would like to avoid the combo
drop table if exists table_name;
create
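Since SQLite supports an IF NOT EXISTS clause on CREATE TABLE (and IF EXISTS on DROP TABLE), the drop/create combo can usually be avoided entirely; a minimal sketch, with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Running the same DDL twice is harmless with IF NOT EXISTS,
# so no preceding DROP is needed in the schema definition.
ddl = "CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY)"
conn.execute(ddl)
conn.execute(ddl)  # no "table t already exists" error the second time
```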
Probably the best solution would be to have the standard implementation
activated by a PRAGMA command. This way, you don't steal functionality
from people who want the non-standard implementation, and you also don't risk
breaking compatibility with existing software built on SQLite (you have
backward compatibility).
On Tue, 7 Feb 2006, [EMAIL PROTECTED] wrote:
Thank you for your answer!
Thanks the rest of you that gave me an answer to my problem!
Marian Olteanu <[EMAIL PROTECTED]> wrote:
Is there a way in SQLite to use real prepared statements? Statements with
variables that you fill in after you compile the query, and then reuse?
I imagine something like:
prepared_statement ps = db.prepare( "select * from tbl where c = ?" );
for( int i = 0 ; i < 100 ; i++ )
{
    ps.bind( 1, i );
    ps.execute();
}
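SQLite's C API does offer exactly this, via sqlite3_prepare, sqlite3_bind_*, sqlite3_step, and sqlite3_reset. A minimal sketch of the same reuse pattern in Python's sqlite3 module (table and data invented; the module caches the compiled statement when the SQL text stays constant):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (c INTEGER)")
conn.executemany("INSERT INTO tbl (c) VALUES (?)",
                 [(i,) for i in range(100)])

# The "?" placeholder is compiled once; each call binds a new value
# and reuses the same prepared statement under the hood.
for i in range(100):
    row = conn.execute("SELECT * FROM tbl WHERE c = ?", (i,)).fetchone()
    assert row == (i,)
```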
There might be two possible causes for this to happen:
- query optimization - complex queries, for example, are better optimized
by MS SQL Server. I don't know about MySQL. Could you post some
problematic queries?
- concurrency. SQLite is not that great at concurrency. But... there was
before
Something related, but that doesn't really answer the question: if you
want to populate a database with that many rows, to speed things up a lot
you should wrap them in a transaction (or in a small number of
transactions). This way, if SQLite works synchronously, it doesn't need to
flush data
Localization is a complex problem. Indeed, any big database
system _should_ implement it. And yes, it can be implemented in SQLite,
and it can be activated through a PRAGMA directive. But implementing it
in SQLite (localization is not limited to numbers) would increase the
size
Thank you very much!
I'll try to compile it also in Linux. If it works, I'm set. If it doesn't,
back to square one.
On Tue, 31 Jan 2006, Tim Anderson wrote:
-Original Message-
From: Marian Olteanu [mailto:[EMAIL PROTECTED]
Sent: 31 January 2006 05:14
To: sqlite-users@sqlite.org