On Oct 22, 2009, at 11:37 AM, John Crenshaw wrote:
> An open cursor will block. I've watched it. It was a major problem, and
> I spent many many hours stepping through SQLite before I finally figured
> it out. Once I carefully closed out cursors, the problem went away. (In
> my case
An open cursor will block. I've watched it. It was a major problem, and
I spent many many hours stepping through SQLite before I finally figured
it out. Once I carefully closed out cursors, the problem went away. (In
my case I had a long running write process trying to commit a
transaction so it
Roger,
Thank you for your answer. I knew from the old days that va_* things are
very fragile (and not only from the portability point of view).
In the "duct tape programming" situation I'm currently in, the best
I could come up with is fixing the max # of arguments to 32 and using
an _ugly_
On Oct 22, 2009, at 5:21 AM, Tom Broadbent wrote:
> if thread 1 opens a read cursor in read uncommitted mode it can
> block a write lock? i thought the read happens w/o a lock?
If using read-uncommitted mode, a reader thread will not block a
writer thread that is using the same shared-cache.
Try to EXPLAIN the query and verify that the index is actually used.
There are a lot of reasons why this query would probably NOT be using
the index.
John
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf
Sent: Wednesday,
Read sort of does happen without a lock, but write requires a lock, and
SQLite can't grab a write lock if another connection has open cursors
(notwithstanding the fact that they technically don't have a lock.) It's
complicated. Just trust me. You won't get that write lock while cursors
are open,
On 21 Oct 2009, at 11:34pm, Sylvain Pointeau wrote:
> if your "book" contains all lines (a,b,c,t,d) and you create an index
> on
> (a,b,c,t)
I assume you meant to add ',d' in there.
> then your index is as fat as your book, isn't it?
Yes. And it still isn't as useful for any SELECT that
On Thu, Oct 22, 2009 at 12:34:26AM +0200, Sylvain Pointeau wrote:
> if your "book" contains all lines (a,b,c,t,d) and you create an index on
> (a,b,c,t)
>
> then your index is as fat as your book, isn't it?
Depends on the size of d.
Also, if you add a constraint declaring t, a, b, and c (you
if your "book" contains all lines (a,b,c,t,d) and you create an index on
(a,b,c,t)
then your index is as fat as your book, isn't it?
cheers,
Sylvain
On Wed, Oct 21, 2009 at 11:52 PM, Simon Slavin wrote:
>
> On 21 Oct 2009, at 9:19pm, Sylvain Pointeau wrote:
>
> > Thank
Jean-Christophe Deschamps wrote:
> I feel the need to wrap an SQLite printf-like function into a scalar
> function.
You can just do that sort of thing in your application. There is no need to
do it via a SQL function.
> My question is slightly off
if thread 1 opens a read cursor in read uncommitted mode it can block a write
lock? i thought the read happens w/o a lock?
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009
I feel the need to wrap an SQLite printf-like function into a scalar
function.
I wish to use it as in:
select printf(format_string, list of arguments);
My question is slightly off topic but there are experienced users here
who have probably done it before.
In the scalar function
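For what the question asks, a sketch in Python's stdlib sqlite3 module (the function name `fmt` is hypothetical, and Python's %-formatting stands in for SQLite's C-level printf): registering the scalar with narg = -1 accepts any number of arguments, which sidesteps the fixed-max-argument-count workaround discussed above.

```python
import sqlite3

def fmt(format_string, *args):
    # Python %-formatting as a stand-in for printf-style formatting.
    return format_string % args

conn = sqlite3.connect(":memory:")
# narg=-1 registers a scalar function taking a variable number of arguments.
conn.create_function("fmt", -1, fmt)

row = conn.execute("SELECT fmt('%s scored %d', 'Alice', 42)").fetchone()
print(row[0])
```

Note that recent SQLite versions ship a built-in printf() SQL function, so check your version before rolling your own.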
On 21 Oct 2009, at 9:19pm, Sylvain Pointeau wrote:
> Thank you for your answers.
> knowing that I have a table T (a,b,c,d,t)
> where d is a value
> a,b,c some dimensions
> and t the time
>
> where I need to make a subset with a "group by" like
>
> select a,b,c,sum(d)
> from T
> where t>x1 and t
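The "index as fat as the book" trade-off discussed in this thread can be checked empirically: an index whose leading column is t serves the range predicate, and including a, b, c, d makes it covering, so SQLite can answer the GROUP BY query from the index alone. A sketch with Python's stdlib sqlite3 (table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (a, b, c, d, t)")
# Leading column t serves the range predicate; a,b,c,d make it covering.
conn.execute("CREATE INDEX T_cover ON T (t, a, b, c, d)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT a, b, c, sum(d) FROM T WHERE t > ? AND t < ? GROUP BY a, b, c",
    (10, 20),
).fetchall()
print(plan)  # plan text should mention the covering index
```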
Hello Forum,
I have a SELECT that joins 15 tables; the WHERE clause consists of 8 LIKE
conditions (all fields are indexed). This is to implement a sort of "search
engine".
The performance is awful: it takes around 10 sec.
Is this how it should be, or is there anything I can do?
If you need
On Wed, 21 Oct 2009 17:35:41 +0100 (BST), Keith Roberts
wrote:
>On Wed, 21 Oct 2009, Unabashed wrote:
>
>> To: sqlite-users@sqlite.org
>> From: Unabashed
>> Subject: [sqlite] Slow SELECTs in application
>>
>>
>> Hello!
>> I'm using SQLite as DB in my
On 21 Oct 2009, at 5:58pm, Jan wrote:
> thx bruce, but I am addicted to open-source.
If you want an open source server/client SQL engine, designed from the
ground up to work correctly when accessed from many computers at once,
I recommend MySQL. It is easier, faster and more efficient to
Thank you for your answers.
knowing that I have a table T (a,b,c,d,t)
where d is a value
a,b,c some dimensions
and t the time
where I need to make a subset with a "group by" like
select a,b,c,sum(d)
from T
where t>x1 and t
wrote:
> Actually, I thought exactly what you said when I saw the
Thanks for your answer.
I wrote a simple example to show the difference between the last and
previous versions of sqlite.
The problem is that I have some prepared statements which can be used
either with wildcards or not, depending on user input; more complex (and
less readable) code will be needed if
I saw that DRH had added the following comment to the ticket:
"2009-Oct-20 16:49:55 by drh:
Whenever a prepared statement runs, it first checks to see if the schema has
changed. If the schema has changed, then the statement must be recompiled and
retried. This loop of checking for schema
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of ge...@iol.it
Sent: Wednesday, October 21, 2009 2:03 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Like do not use index as previous version
Hi all,
it seems that in last
very good. i don't anticipate multiple writers so this should be pretty simple.
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:15 AM
To: General Discussion of SQLite
hmm.. okay. i'll have to refactor a bit (currently two separate processes).
this is still very helpful. thanks.
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
On Behalf Of John Crenshaw
Sent: Wednesday, October 21, 2009 9:19 AM
To:
Hi all,
it seems that in the latest versions of sqlite3 the LIKE clause has stopped
using indexes; I created a new empty database with SQLite 3.6.13 and ran
these statements:
CREATE TABLE TEST
(TEXT_1 text PRIMARY KEY, TEXT_2 text, TEXT_3
text COLLATE NOCASE);
CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2);
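What likely changed is interaction with the LIKE optimization rules: with the default PRAGMA case_sensitive_like = OFF, a prefix LIKE pattern can only use an index on a NOCASE-collated column; turning the pragma on lets it use a binary-collated index such as TEST_IDX_2. A sketch reproducing this with the schema above (Python stdlib sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TEST (TEXT_1 text PRIMARY KEY, TEXT_2 text, "
             "TEXT_3 text COLLATE NOCASE)")
conn.execute("CREATE INDEX TEST_IDX_2 ON TEST (TEXT_2)")

def plan(sql):
    # Join the human-readable plan details from EXPLAIN QUERY PLAN.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM TEST WHERE TEXT_2 LIKE 'abc%'"
before = plan(q)   # binary index cannot serve a case-insensitive LIKE
conn.execute("PRAGMA case_sensitive_like = ON")
after = plan(q)    # now the prefix pattern can use TEST_IDX_2
print(before)
print(after)
```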
Regarding:
"I NEED HELP! I have SQLite3 DB (contacts from iPhone), it
contains 1 record, but when I open it with NOTEPAD.EXE, I see more than 1
record.
I need to repair all records from it"
Best regards,
I received your db (via private email) and the good news is
On 21 Oct 2009, at 2:47pm, Unabashed wrote:
> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First , it works great, but on large data SELECTs are very
> slow
> (10-20s!). Queries are in one transaction. My table structure is:
> CREATE TABLE mgWords (
> id
I'm addicted to amazingly powerful solutions done in minutes; using
the one-stop-shopping Lowe's Hardware of data solutions has its own
addictive powers.
Depends on how you value your time.
On Oct 21, 2009, at 9:58 AM, Jan wrote:
> thx bruce, but I am addicted to open-source.
>
> Bruce
thx bruce, but I am addicted to open-source.
Bruce Robertson schrieb:
> You might try Filemaker. That's what it was designed for. Makes
> everything so easy.
>
> On Oct 21, 2009, at 8:10 AM, Jan wrote:
>
>> Thank you John. Seems postgres might be a better choice. Although it
>> is
>> so nice
It isn't just speed. That is probably the cause of the insert error. A
PRIMARY KEY column is implied UNIQUE and NOT NULL but the insert doesn't
specify a value for id. Since it isn't aliased to rowid (and therefore
doesn't autoincrement) it would raise an error.
John
-Original Message-
On Wed, 21 Oct 2009, Unabashed wrote:
> To: sqlite-users@sqlite.org
> From: Unabashed
> Subject: [sqlite] Slow SELECTs in application
>
>
> Hello!
> I'm using SQLite as DB in my application. My problem consists of two
> aspects. First , it works great, but on large data
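John's point about rowid aliasing can be shown directly: only a column declared exactly INTEGER PRIMARY KEY aliases the rowid, in which case omitting it from the INSERT lets SQLite assign the next value automatically. A sketch (column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "INTEGER PRIMARY KEY" (exact spelling) aliases the rowid, so omitting
# the id column lets SQLite fill it in automatically.
conn.execute("CREATE TABLE w (id INTEGER PRIMARY KEY, word TEXT)")
conn.execute("INSERT INTO w (word) VALUES ('aword')")
row = conn.execute("SELECT id, word FROM w").fetchone()
print(row)
```

With any other type name (e.g. `id INT PRIMARY KEY`) the column does not alias rowid, and the outcome of an INSERT that omits it depends on whether NOT NULL was declared.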
Someone correct me if I'm wrong, but I don't think that UNIQUE
(id_norm,word,wform) is going to have the desired result. Won't that
create a single tricolumn unique index? I suspect this table needs a
separate index for each. Just put the UNIQUE keyword (with no arguments)
after the type on each
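The difference is easy to demonstrate: a composite UNIQUE constraint only requires the whole triple to be unique, while per-column UNIQUE rejects any repeat in a single column. A sketch (table names m1/m2 are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Composite constraint: only the (id_norm, word, wform) TRIPLE must be
# unique; each column may still repeat on its own.
conn.execute("CREATE TABLE m1 (id_norm, word, wform, "
             "UNIQUE (id_norm, word, wform))")
conn.execute("INSERT INTO m1 VALUES (0, 'aword', 'x')")
conn.execute("INSERT INTO m1 VALUES (0, 'aword', 'y')")  # allowed: triple differs

# Per-column UNIQUE: each column must be unique by itself.
conn.execute("CREATE TABLE m2 (id_norm UNIQUE, word UNIQUE, wform UNIQUE)")
conn.execute("INSERT INTO m2 VALUES (0, 'aword', 'x')")
dup_rejected = False
try:
    conn.execute("INSERT INTO m2 VALUES (0, 'bword', 'y')")  # id_norm repeats
except sqlite3.IntegrityError:
    dup_rejected = True
print(dup_rejected)
```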
My understanding is that the shared cache allows table level locking for
multiple threads in a single process, and can do so efficiently because
the threads all share the same memory space, but if multiple processes
attempt to access the database, they will each use the original (full
database
Yes, you have to call sqlite3_enable_shared_cache before opening any
database connections, then execute "PRAGMA read_uncommitted = true;" on
each connection. Blocking can still happen in some situations, but you
can handle it as I described in my original reply.
John
-Original Message-
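A sketch of the setup described above, using a shared cache via a URI connection string rather than sqlite3_enable_shared_cache (the database name "demo" is illustrative; Python stdlib sqlite3). With read_uncommitted on, the reader sees the writer's uncommitted row instead of blocking on its table lock:

```python
import sqlite3

# Both connections attach to the same shared cache via cache=shared;
# read_uncommitted only matters between connections sharing a cache.
db = "file:demo?mode=memory&cache=shared"
writer = sqlite3.connect(db, uri=True, isolation_level=None)
reader = sqlite3.connect(db, uri=True, isolation_level=None)
reader.execute("PRAGMA read_uncommitted = true")

writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")   # not yet committed

# The reader skips the read table-lock and sees the uncommitted row.
rows = reader.execute("SELECT x FROM t").fetchall()
writer.execute("COMMIT")
print(rows)
```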
It is likely that the file can't be opened for a very predictable
reason. For example, perhaps the specified path doesn't exist. (A common
variation of this would be a hard coded string with single backslashes,
most languages require you to escape backslashes in strings.) Perhaps
the file is read
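The backslash pitfall mentioned above, in two lines (the path is illustrative): a quoted `"\t"` is a TAB in most languages, so a single-backslash Windows path silently turns into a different string.

```python
# "\t" is a TAB and "\n" a newline, so this is NOT the path you typed:
bad  = "C:\temp\new.db"       # contains a literal TAB and newline
good = "C:\\temp\\new.db"     # escaped backslashes
raw  = r"C:\temp\new.db"      # raw string: backslashes kept verbatim
print(bad == good, good == raw)
```

Passing `bad` to an open call then fails because no such path exists, which looks like a mysterious "can't open file" error.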
it sounds like this is the feature you recommend using:
"A database connection in read-uncommitted mode _does not attempt to obtain
read-locks before reading_ from database tables as described above. This can
lead to inconsistent query results if another database connection modifies a
table
reading up on shared cache mode and found this:
"The locking protocol used to arbitrate between multiple shared-caches or
regular database users is described _elsewhere_."
where is this described?
From: sqlite-users-boun...@sqlite.org
fascinating... one can feed it a file of commands, or even a string of
commands, and the "Apply SQL" action will run against a specified or a
newly created SQLite database and format the output as plain text,
HTML or CSV.
--
Puneet Kishor http://www.punkish.org
Carbon Model
On Wed, 21 Oct 2009, ?? wrote:
> I deployed a django app on my laptop, the whole environment is like this:
> the OS is UBUNTU904, the web server is Apache, and the database is
> sqlite3. The deployment succeeded, but when I try to write some data into
> the database, I get the HTTP 500 error.
Hi sqlite-users,
I deployed a django app on my laptop, the whole environment is like this:
the OS is UBUNTU904, the web server is Apache, and the database is sqlite3.
The deployment succeeded, but when I try to write some data into the
database, I get the HTTP 500 error. And I check the error
Thank you John. Seems postgres might be a better choice. Although it is
so nice to work with sqlite because none of this user/administration "crap"
is necessary.
John Elrick schrieb:
> Jan wrote:
>> Hi,
>>
>> Although I read in a recent post by drh that it is not recommended to
>> use sqlite in a
Jan wrote:
> Hi,
>
> Although I read in a recent post by drh that it is not recommended to
> use sqlite in a local network I would like to ask if the following
> approach would work:
>
> A database in the local network needs to be accessed by about 20 people.
> I suppose the max. number at the
Really does depend on the query (sql) you are running.
To investigate start by looking at the explain plan
http://www.sqlite.org/lang_explain.html
http://www.razorsql.com/docs/sqlite_explain_plan.html
Classic "slow query" problems are table scans - where the engine has to
scan the entire
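The EXPLAIN QUERY PLAN check suggested above can be scripted; a sketch with Python's stdlib sqlite3 (table and index names are illustrative). A predicate on an indexed column shows up as a SEARCH using the index, while one on an unindexed column shows up as a full-table SCAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL)")
conn.execute("CREATE INDEX items_name ON items (name)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the readable plan in the last column.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

eq_plan = plan("SELECT * FROM items WHERE name = 'widget'")  # indexed column
scan_plan = plan("SELECT * FROM items WHERE price > 10")     # unindexed column
print(eq_plan)
print(scan_plan)
```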
> It seems, that the problem is in sql. How to optimize table to have fast
> selects?
Different sql queries require different things to be the fastest
possible. There's no one magic pill for all.
> INSERT INTO mgWords (id_norm,word,wform) VALUES (0,'aword','awform')
> it causes an error.
Which
Pavel Ivanov schreef:
>> Currently this means adding
>> the new columns to my C-structures, writing access functions, and
>> recompiling. I don't want to do that, because this means my appl *must*
>> be replaced on every database change, and I'd like to be able to
>> run different versions of it
Pavel Ivanov schrieb:
> I'm not sure you want to do that.
No, actually not. Thanks for the clarification.
jan
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Are you sure you are running it against the real database file, not an
in-memory or temporary database? A temporary database is used if you run
the sqlite3 command line tool without giving any file name on the command
line.
Pavel
On Wed, Oct 21, 2009 at 1:32 AM, Saurabh Agarwal wrote:
>
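Pavel's point, sketched with Python's stdlib sqlite3 (file path is illustrative): an in-memory database is brand new on every connection, while a file database keeps its tables across restarts.

```python
import sqlite3, tempfile, os

# ":memory:" (like the CLI started with no filename) is a fresh, empty
# database every time you connect.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x)")
mem.close()
tables_mem = sqlite3.connect(":memory:").execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()

# A file database persists across connections.
path = os.path.join(tempfile.mkdtemp(), "persist.db")
f = sqlite3.connect(path)
f.execute("CREATE TABLE t (x)")
f.close()
tables_file = sqlite3.connect(path).execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables_mem, tables_file)
```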
Disregarding the network drive issue, "PRAGMA locking_mode=EXCLUSIVE" means
that once your writer acquires a PENDING or EXCLUSIVE lock, nobody else
will be able to read the database anymore until the writer closes its
connection to it. In other words, without this pragma, with relatively
small transactions in
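A sketch of that lock-out effect (Python stdlib sqlite3; file path is illustrative). After the pragma, the exclusive lock is taken on the connection's first read or write and never released, so a second connection cannot even SELECT:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "excl.db")
conn = sqlite3.connect(path)
# The pragma returns the new mode; the exclusive lock itself is only
# acquired on the first read or write after the pragma.
mode = conn.execute("PRAGMA locking_mode = EXCLUSIVE").fetchone()[0]
conn.execute("CREATE TABLE t (x)")  # first write: lock now held for good

other = sqlite3.connect(path, timeout=0)
locked_out = False
try:
    other.execute("SELECT * FROM t")
except sqlite3.OperationalError:    # "database is locked"
    locked_out = True
print(mode, locked_out)
```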
Hi. I'm from Belarus. Sorry for my English.
I NEED HELP! I have an SQLite3 DB (contacts from iPhone); it contains 1
record, but when I open it with NOTEPAD.EXE, I see more than 1 record.
I need to repair all records from it. Maybe this helps: when I use the
'VACUUM' command the file size decreases
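That Notepad observation has a simple explanation: deleted rows are only marked free inside the file, so their bytes typically linger until VACUUM rebuilds the file from scratch (and shrinks it, matching the size drop described). A sketch (Python stdlib sqlite3; names are illustrative):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "contacts.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE contacts (name TEXT)")
conn.execute("INSERT INTO contacts VALUES ('deleted-secret')")
conn.commit()
conn.execute("DELETE FROM contacts")
conn.commit()

# With the default secure_delete=OFF, the deleted row's bytes usually
# remain somewhere in the file (what Notepad shows)...
before = open(path, "rb").read()

# ...until VACUUM rewrites the database, discarding the free space.
conn.execute("VACUUM")
after = open(path, "rb").read()
print(b"deleted-secret" in before, b"deleted-secret" in after)
```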
Hi, I am facing a problem: when I download and install sqlite 3.6.18,
compile it, and then execute it, it works fine. But if I quit it and
restart it, then all the previous tables are lost.
Can someone help me?
Saurabh Agarwal
BE(Hons) Computer Science
BITS,Pilani
Hi,
Although I read in a recent post by drh that it is not recommended to
use sqlite in a local network I would like to ask if the following
approach would work:
A database in the local network needs to be accessed by about 20 people.
I suppose the max. number at the same time will be 4-5.
Actually, I thought exactly what you said when I saw the question. When
I saw your answer, though, I realized I'd been wrong: there are ways I
could slow indexing down, and therefore, ways to speed it up.
Splitting across transactions is about the insertion of data, not the
creation of the index.