On Tue, 5 Feb 2013 12:54:13 +
Jason Gauthier wrote:
> Hey Everyone,
>
> I am a fairly new user of sqlite, but not particularly new to SQL
> principles. I am developing an application that will run on a low end system.
> Not quite embedded, but not quite a PC. In my
Hi,
I am trying to use sqlite3_trace and sqlite3_profile to trace which SQL
statements have been executed.
In the callback function of sqlite3_trace/profile I simply print out the SQL
statement that triggers the callback. And after sqlite3_open_v2() I have:
sqlite3_trace(*ppDb, print_sql_callback,
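For anyone following this from Python rather than C: the stdlib sqlite3 module exposes the same hook as Connection.set_trace_callback (Python 3.3+). A minimal sketch of printing/collecting every executed statement:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Collect every SQL statement as it executes, mirroring what a
# C-level sqlite3_trace() callback would see.
statements = []
con.set_trace_callback(statements.append)

con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")

for sql in statements:
    print(sql)
```

Note the callback may also see statements the library issues on your behalf (e.g. an implicit BEGIN), which is exactly why tracing is useful for debugging.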
(Sorry if this gets posted twice - our damn mail server rewrites outgoing mails
so I had to unsubscribe and re-subscribe under a different Email address)
Hello list. I'd like to ask someone with more SQLite experience than me a
simple question. First, some background;
Distribution: Scientific
Hi! Is there any difference between using "$" or "$nnn" as a
placeholder parameter? Or is one better than the other?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
On Thu, Feb 7, 2013 at 8:40 AM, Jonas wrote:
> Hi! Is there any difference between using "$" or "$nnn" as a
> placeholder parameter? Or is one better than the other?
>
The first is a syntax error and the second is an example of a dubious use
of a number as a
I mean "?" and "?nnn"
Thanks!
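For what it's worth, the difference between "?" and "?nnn" is easy to see from Python (the sqlite3 module passes both forms straight through to SQLite): anonymous "?" parameters are bound strictly left to right, while "?nnn" names an explicit index and can therefore be reused. A small sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Anonymous placeholders: bound strictly in order.
row = con.execute("SELECT ?, ?", (1, 2)).fetchone()
print(row)            # (1, 2)

# Numbered placeholders: "?1" refers to the first bound value,
# so the same value can be bound once and used more than once.
row = con.execute("SELECT ?1, ?1", (5,)).fetchone()
print(row)            # (5, 5)
```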
On 07/02/13 13:44, Richard Hipp wrote:
On Thu, Feb 7, 2013 at 8:40 AM, Jonas wrote:
Hi! Is there any difference between using "$" or "$nnn" as a
placeholder parameter? Or is one better than the other?
The first is a syntax error
Nothing to do with SQLite. NFS won't use cache by default. You have to
mount it with the "fsc" option.
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fscachenfs.html
-----Original Message-----
From: sqlite-users-boun...@sqlite.org
On 7 Feb 2013, at 9:56am, James Vanns wrote:
> We have a single process that, given some data, does some processing and
> writes it all to a single SQLite DB file. This is a write-once process. When
> this task is finished, the file itself is marked as read only
I'm afraid I couldn't disagree more. Everything else works fine and uses the
*page cache*. We run 1000s of machines that do. You are getting confused
between fs-cache and the kernels page cache.
Jim
- Original Message -
From: "Michael Black"
To: "james vanns"
Hi Simon. Yes, the connection is closed - the process that writes the DB file
isn't memory resident (meaning, it isn't a daemon). The connection is
implicitly closed (and transaction committed?) by the process terminating. The
problem only arises when the file is hosted by an NFS server -
On 7 Feb 2013, at 2:27pm, James Vanns wrote:
> Hi Simon. Yes, the connection is closed - the process that writes the DB file
> isn't memory resident (meaning, it isn't a daemon). The connection is
> implicitly closed (and transaction committed?) by the process
> I would be interested to know if handing a sequential file over the
> same NFS connection shows the same behaviour. This would use
> fread() which should trigger any caching that the operating system
> and file system implement for that type of connection. You could
> test this using a text
> Don't know if the index is updated after each row delete or after the whole
> delete transaction is committed. For the first case you can try:
>
> time sqlite3 trip.db "PRAGMA automatic_index= FALSE; delete from trip where
> key<=1400;PRAGMA automatic_index= TRUE; reindex trip"
PRAGMA
On Thu, 7 Feb 2013 09:56:27 + (GMT)
James Vanns wrote:
> Hello list. I'd like to ask someone with more SQLite experience than me a
> simple question. First, some background;
>
> Distribution: Scientific Linux 6.3
> Kernel: 2.6.32-279.9.1.el6.x86_64
> SQLite
> Anyone out there know how to correct this undesirable behaviour?
>
> PS. This only happens over NFS - local DB files behave as expected and fill
> the OS page cache.
Don't write your database to NFS. I'd guess that your problem is that the
NFS driver for some reason thinks that the file was
> Don't write your database to NFS. I'd guess that your problem is that the
> NFS driver for some reason thinks that the file was changed on the
> server (could be as simple as rounding of the file modification time) and
> thus re-reads it from the NFS server. And it has nothing to do with
> SQLite.
Nope.
I fear I must correct myself. SQLite appears to second-guess/avoid the Linux
kernel page cache both when the file is local and when it is remote. I'd wager
that its own internal cache (an LRU of some sort?) only ever ensures that there
are n pages in RAM and therefore it is only these pages that
> You should use these pragmas too :
>
> PRAGMA temp_store = MEMORY;
> PRAGMA read_uncommitted = TRUE;
I'll look in to those too. Thanks.
> If not, a big select with a big sort could try to use temporary files
> on your NFS server. As you aren't doing any writes, no need to wait
> for write
On Thu, Feb 7, 2013 at 10:30 AM, James Vanns wrote:
> I fear I must correct myself. SQLite appears to second-guess/avoid the Linux
> kernel page cache both when the file is local and when it is remote.
SQLite makes no effort to game the OS page cache. SQLite just calls
Would it be any use to you to have a separate process which mmaps the file?
Seems to me that would probably keep all the pages in cache constantly.
I just did a local test on my NFS setup and the file appears to cache just
fine.
Does yours behave differently?
#include
#include
int main(int
On Thu, Feb 07, 2013 at 04:11:18PM +0100, Eduardo Morras scratched on the wall:
> If you need cache being persistent between process on the same server,
> you can build a ram disk, write the db there and use it from any
> process. This way you read the db only once from nfs. Even better, you
>
On 02/07/2013 10:30 PM, James Vanns wrote:
I fear I must correct myself. SQLite appears to second-guess/avoid the Linux
kernel page cache both when the file is local and when it is remote. I'd wager
that its own internal cache (an LRU of some sort?) only ever ensures that there
are n pages in
I re-ran my test with a 33MB database. Using the shell to .dump the file
doesn't fill the cache.
But my testro program does. If you open the database with the shell it
clears the cache again (it's opening it read/write).
ls -l insert.db
-rw-r--r-- 1 mblack users 35016704 Feb 7 10:54 insert.db
And I can confirm here that, over NFS, using just the sqlite3 CLI, the Linux
page cache is cleared every time - all the damn pages that were read in from
disk are read in all over again;
sqlite3 /nfs/file.db
$ select * from big_table # (vmtouch reports 163MB read and resident)
(don't even have
Quoth James Vanns, on 2013-02-07 16:32:49 +:
> And I can confirm here that, over NFS, using just the sqlite3 CLI
> the Linux page cache is cleared every time - all the damn pages that
> were read in from disk are read in all over again;
>
> sqlite3 /nfs/file.db
> $
> If you don't even have to close the SQLite shell for that to happen,
> I'm guessing it's an interaction with POSIX/fcntl file locking, which
> theoretically works over NFS but as I recall has some oddities. What
> happens if you do this?
>
> pragma locking_mode = exclusive;
> select * from
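The same experiment is easy to run from Python, for the record (with a real database path in place of :memory:; exclusive locking takes effect on the first read and is then held until the connection closes):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Switch to exclusive locking: after the first read the connection keeps
# its lock, so there is no fcntl() lock/unlock cycle per statement.
mode = con.execute("PRAGMA locking_mode = exclusive").fetchone()[0]
print(mode)        # 'exclusive'

rows = con.execute("SELECT 1").fetchall()
```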
On 02/08/2013 12:21 AM, James Vanns wrote:
If you don't even have to close the SQLite shell for that to
happen, I'm guessing it's an interaction with POSIX/fcntl file
locking, which theoretically works over NFS but as I recall has
some oddities. What happens if you do this?
pragma locking_mode
> If you do "sqlite3 $file -vfs unix-none $stmt" I think all the
> fcntl() calls to lock and unlock the db file are omitted. Does
> that stop the pages from being evicted when sqlite opens the
> db file?
I'm not sure I can :( My version doesn't appear to offer that option;
sqlite3: Error:
On Thu, Feb 7, 2013 at 12:34 PM, James Vanns wrote:
> > If you do "sqlite3 $file -vfs unix-none $stmt" I think all the
> > fcntl() calls to lock and unlock the db file are omitted. Does
> > that stop the pages from being evicted when sqlite opens the
> > db file?
>
>
Anybody know if the connect() call accepts the underlying SQLite URI format?
http://docs.python.org/2/library/sqlite3.html
http://www.sqlite.org/c3ref/open.html
http://www.sqlite.org/uri.html
I can find no further Python DBI/sqlite3 API documentation to suggest either
way.
I tried it and it
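For completeness: modern versions of the stdlib sqlite3 module do accept SQLite URIs if you pass uri=True (added in Python 3.4, so it wouldn't have helped back then). A sketch using a throwaway file:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x)")
con.commit()
con.close()

# Re-open it read-only through the SQLite URI syntax.
ro = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as e:
    print("write refused:", e)   # attempt to write a readonly database
```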
> If you do "sqlite3 $file -vfs unix-none $stmt" I think all the
> fcntl() calls to lock and unlock the db file are omitted. Does
> that stop the pages from being evicted when sqlite opens the
> db file?
Bingo! I upgraded and now, running this over and over again all
pages in the kernel page
On 07/02/13 10:00, James Vanns wrote:
> I can find no further Python DBI/sqlite3 API documentation to suggest
> either way. I tried it and it fails I guess that's it :(
A flag needs to be set in the open call which pysqlite doesn't do. Heck
the
Yes it works fine. You may need to update the sqlite shared library and
ensure that the library is built to use uri names.
You also might want to take a look at APSW, which is another sqlite api wrapper
that is more sqlite-like. It is not exactly dbi compatible, but is close
enough to
Dear,
I'm trying to make a mystrstr function to use it as a sub-function of
"like".
The mystrstr function is designed to use the Boyer-Moore algorithm to improve
string search performance.
(Only for simple substring search, not a complex search pattern)
But the problem occurs when this mystrstr
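For reference, SQLite lets an application override the like() SQL function that implements the LIKE operator, which is the hook a replacement such as mystrstr would plug into. A sketch from Python (the naive substring matcher here is just a stand-in for mystrstr; note that overriding like() disables SQLite's LIKE index optimization):

```python
import sqlite3

def my_like(pattern, value):
    """Stand-in for mystrstr: plain substring match, treating leading and
    trailing '%' as 'match anywhere'. Returns 1/0 as SQLite expects."""
    if pattern is None or value is None:
        return 0
    return 1 if pattern.strip("%") in value else 0

con = sqlite3.connect(":memory:")
# Registering a 2-argument function named "like" overrides the built-in,
# so the LIKE operator now calls my_like(pattern, value).
con.create_function("like", 2, my_like)

con.execute("CREATE TABLE t(c5 TEXT)")
con.execute("INSERT INTO t VALUES ('9806067880')")
rows = con.execute("SELECT c5 FROM t WHERE c5 LIKE '%6788%'").fetchall()
print(rows)    # [('9806067880',)]
```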
my database has a table like this:
c5 c6 c7
9806062380 9806062380 9806062380
9806062480 9806062680 9806062680
9806063280 9806068980 9806068980
9806062980 9806067980 9806067980
9806062280 9806067880 9806067880
9806734080 9806068280
On 2/8/2013 12:08 AM, YAN HONG YE wrote:
my database has a table like this:
c5 c6 c7
9806062380 9806062380 9806062380
9806062480 9806062680 9806062680
9806063280 9806068980 9806068980
9806062980 9806067980 9806067980
9806062280 9806067880
Select * from mytable where c5 like '9806067880' OR c6 like '9806067880' OR
c7 like '9806067880';
AFAIK, the like function should be called for each column.
Regards,
Yongil Jang.
2013/2/8 YAN HONG YE
> my database has a table like this:
> c5 c6 c7
> 9806062380
On 02/08/2013 12:08 PM, Yongil Jang wrote:
Dear,
I'm trying to make a mystrstr function to use it as a sub-function of
"like".
The mystrstr function is designed to use the Boyer-Moore algorithm to improve
string search performance.
(Only for simple substring search, not a complex search pattern)
Thank you, Dan!
It really works well as you commented!
Thank you again!
2013/2/8 Dan Kennedy
> On 02/08/2013 12:08 PM, Yongil Jang wrote:
>
>> Dear,
>>
>> I'm trying to make a mystrstr function to use it as a sub-function of
>> "like".
>> mystrstr function is designed