Re: [sqlite] SQLite really SLOW on Solaris?

2010-11-20 Thread Lynton Grice
Hi Eric,

I will be trying it on real machines next week, but for now I have one Debian
and one Solaris VM on my laptop, each allocated the same memory, etc.

I will try profiling what is really going on and will let you know, but I
hope it is just a VM thing.

Thanks ;-)

Lynton

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Eric Smith
Sent: 20 November 2010 09:13 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite really SLOW on Solaris?

Lynton Grice wrote:

> Many thanks for your feedback, much appreciated ;-)
> 
> But why would that happen on Solaris and not Debian?

Did you try it on real OSs & not VMs?  Are you sure the host OS is
giving the same compute & disk slices to the VMs?  Any other differences
in those mappings?

Are the VMs running on the same host box?  If not, are they running on
the same hardware and host OS?

Along the lines of Roger's comment, some VM implementations have 
OS clocks that simply don't work right -- they run too fast or too slow 
(I have personally observed differences of up to 3x), because they 
assume they are on real hardware.  Have you tested that?

Did you try compiling it with profiling data turned on to see where the
bottlenecks are (using gcc's -pg flag plus gprof(1))?

Or, as a simpler test, you could run your app from strace(1) (assuming
solaris has such a thing) to see if you are spending loads of time in
system calls as Roger hypothesized.

> Also, how would I pass the -D flags? Do I do that when I compile SQLite?

Yes -- those are arguments to GCC that are similar to writing a #define 
compiler directive into your code.  So saying

gcc -DFOO=bar ... 

would be like writing

#define FOO bar

into the code.

The sqlite.org docs describe what they do pretty well iirc.

Eric

--
Eric A. Smith

You can observe a lot by watching.
-- Yogi Berra


Re: [sqlite] page cache vs OS cache

2010-11-20 Thread Pavel Ivanov
I'd say the performance of a local in-process cache is comparable with
the performance of the OS cache only in one case: if your process is the
only process in the system and no other files are read from disk. In
all other (common) cases the OS caches a much larger set of data than
SQLite uses, and naturally it will be harder for the OS to find your
data. Another concern is that if some other process reads or writes a
lot of data, it will evict all your data from the cache and you will
have to read it from disk again. And one more observation: if you rely
on the OS cache, then every time you need your data you have to make a
system call to read it. That is a pretty expensive operation by itself;
besides, the kernel does a lot more housekeeping than your application
would, and it has many different code paths dedicated to different use
cases. If your application doesn't need all those use cases, those
paths just slow it down without giving any benefit.

So considering all that, I'd say keeping your own local cache is always
better than relying on something more general that is intended for
everybody to use (not just you).
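For example (the numbers are only placeholders and the defaults shown are
merely typical; cache_size is measured in pages of the database file):

  sqlite> PRAGMA page_size;
  1024
  sqlite> PRAGMA cache_size;
  2000
  sqlite> PRAGMA cache_size = 20000;
  sqlite> PRAGMA cache_size;
  20000

With 1 KB pages that lets SQLite keep roughly 20 MB of hot pages inside the
process, so repeated reads never leave user space. The same pragma can be
issued from application code right after opening the connection.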


Pavel

On Fri, Nov 19, 2010 at 9:44 PM, Yang  wrote:
> I wonder why we need page cache, since we already have OS file cache.
>
> in other words, if we simply set OS cache to be very large, would it
> have the same effect as page cache?
>
> thanks
> Yang


Re: [sqlite] SQLite really SLOW on Solaris?

2010-11-20 Thread Eric Smith
Lynton Grice wrote:

> Many thanks for your feedback, much appreciated ;-)
> 
> But why would that happen on Solaris and not Debian?

Did you try it on real OSs & not VMs?  Are you sure the host OS is
giving the same compute & disk slices to the VMs?  Any other differences
in those mappings?

Are the VMs running on the same host box?  If not, are they running on
the same hardware and host OS?

Along the lines of Roger's comment, some VM implementations have 
OS clocks that simply don't work right -- they run too fast or too slow 
(I have personally observed differences of up to 3x), because they 
assume they are on real hardware.  Have you tested that?

Did you try compiling it with profiling data turned on to see where the
bottlenecks are (using gcc's -pg flag plus gprof(1))?

Or, as a simpler test, you could run your app from strace(1) (assuming
solaris has such a thing) to see if you are spending loads of time in
system calls as Roger hypothesized.
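For what it's worth, a rough sketch of both approaches (program and file
names are placeholders, and the extra link libraries may differ on your
platform; on Solaris the usual strace equivalent is truss):

  # build with profiling instrumentation, run the workload, read the report
  gcc -pg -O2 -o testapp testapp.c sqlite3.c -lpthread -ldl
  ./testapp
  gprof ./testapp gmon.out > profile.txt

  # count time spent in system calls
  strace -c -f ./testapp      # Linux
  truss -c ./testapp          # Solaris

The -c summaries show per-syscall counts and times, which would reveal
immediately whether the time is going into sleeps or file locking.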

> Also, how would I pass the -D flags? Do I do that when I compile SQLite?

Yes -- those are arguments to GCC that are similar to writing a #define 
compiler directive into your code.  So saying

gcc -DFOO=bar ... 

would be like writing

#define FOO bar

into the code.

The sqlite.org docs describe what they do pretty well iirc.
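For example, a build that picks up usleep() and thread safety could look
like this (which -D flags you actually want depends on your platform; the
full list is documented at http://www.sqlite.org/compile.html):

  gcc -DHAVE_USLEEP=1 -DSQLITE_THREADSAFE=1 -c -fPIC sqlite3.c
  gcc -shared -o libsqlite3.so sqlite3.o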

Eric

--
Eric A. Smith

You can observe a lot by watching.
-- Yogi Berra


Re: [sqlite] SQLite really SLOW on Solaris?

2010-11-20 Thread Lynton Grice
Hi Roger,

Many thanks for your feedback, much appreciated ;-)

But why would that happen on Solaris and not Debian?

Also, how would I pass the -D flags? Do I do that when I compile SQLite?

Thanks again

Lynton

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Roger Binns
Sent: 20 November 2010 07:31 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite really SLOW on Solaris?


On 11/20/2010 07:12 AM, Lynton Grice wrote:
> On Debian it processes a couple hundred messages per second, on Solaris
> using the SAME code it does like ONE message per second.

One possible cause is the busy handler but this only applies if you have
concurrent access to the file and have set a busy timeout.

The default SQLite busy handler keeps trying again and if it fails sleeps,
repeating the process.  It will sleep for longer and longer periods until
the overall busy timeout period has expired.

If you do not provide any -D flags then the call used to do the sleeping has
a granularity of one second - i.e. even if SQLite wants to sleep for 50ms it
will sleep for one second.  If you have the usleep() function and supply
-DHAVE_USLEEP then microsecond resolution sleeps are available.

These are the sleep intervals in milliseconds that sqliteDefaultBusyCallback
tries when USLEEP is available:

  { 1, 2, 5, 10, 15, 20, 25, 25,  25,  50,  50, 100 }

Roger


Re: [sqlite] joining two sequences?

2010-11-20 Thread Jim Morris
Slightly better version:

select distinct f1.key as foo_key, b1.key as bar_key
from bar b1
inner join foo f1 on f1.value = b1.value
where
not exists
(
-- Values for a particular key in foo
select f3.value from foo f3 WHERE f3.key = f1.key
union
-- Values for a particular key in bar
select b3.value from bar b3 WHERE b3.key = b1.key
except
-- Values common to both foo key and bar key
select f2.value from foo f2 inner join bar b2 on b2.value = f2.value
WHERE b2.key = b1.key AND f2.key = f1.key
);
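For what it's worth, against the sample data quoted below (foo keys 1-3, bar
key 4) this should return a single row, since only foo key 2 and bar key 4
carry exactly the same set of values:

  foo_key|bar_key
  2|4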

On 11/19/2010 6:40 PM, Jim Morris wrote:
> This should return the equivalent keys in the two maps.  The basic
> idea is to compare the values of each key in foo (left outer join foo)
> with the values of each key in bar where there are any matching
> values (left outer join bar) and only select those with a complete
> match (inner join).  Not sure this is the most efficient way.
>
> select distinct f1.key as foo_key, b1.key as bar_key
> from bar b1
> inner join foo f1 on f1.value = b1.value
> where
> not exists
> (
> -- Values for a particular key in foo
> select f3.value from foo f3 left outer join bar b3 on b3.value= f3.value
> WHERE f3.key= f1.key
> union
> -- Values for a particular key in bar
> select f3.value from bar b3 left outer join foo f3 on b3.value= f3.value
> WHERE b3.key = b1.key
> except
> -- Values common to both foo key and bar key
> select f2.value from foo f2 inner join bar b2 on b2.value = f2.value
> WHERE b2.key = b1.key AND f2.key= f1.key
> );
>
>
>
> On 11/19/2010 1:03 PM, Petite Abeille wrote:
>> Hello,
>>
>> Given two tables describing sequences of key value pairs, what would be a 
>> reasonable way to join them?
>>
>> For example, assuming two table foo and bar with identical structure:
>>
>> create temporary table foo
>> (
>>   key integer not null,
>>   value   text not null,
>>   constraint  foo_pk primary key( key, value )
>> );
>>
>> create temporary table bar
>> (
>>   key integer not null,
>>   value   text not null,
>>   constraint  bar_pk primary key( key, value )
>> );
>>
>> And a set of sequences in each of the table:
>>
>> insert into foo values( 1, 'a' );
>>
>> insert into foo values( 2, 'a' );
>> insert into foo values( 2, 'b' );
>>
>> insert into foo values( 3, 'a' );
>> insert into foo values( 3, 'b' );
>> insert into foo values( 3, 'c' );
>>
>> insert into bar values( 4, 'a' );
>> insert into bar values( 4, 'b' );
>>
>> What would be a good way to join foo (2,'a'),(2,'b') to bar (4,'a'),(4,'b')?
>> In other words, join the sequences with the same values?
>>
>> Right now, I'm using group_concat to flatten the sequences:
>>
>> select  *
>> from(
>>   select  key,
>>   group_concat( value ) as value
>>   fromfoo
>>
>>   group bykey
>>   )
>> as  foo
>>
>> join(
>>   select  key,
>>   group_concat( value ) as value
>>   frombar
>>
>>   group bykey
>>   )
>> as  bar
>> on  bar.value = foo.value
>>
>> Which results in:
>>
>> key|value|key|value
>> 2|a,b|4|a,b
>>
>> All good, if perhaps clunky.
>>
>> But the documentation for group_concat mentions that the order of the
>> concatenated elements is arbitrary [1], which would perhaps preclude
>> group_concat from being reliably used as a join predicate, no?
>>
>> Could someone think of a nice alternative to group_concat for joining
>> such data structures?
>>
>> Thanks in advance.
>>
>> Cheers,
>>
>> PA.
>>
>> [1] http://www.sqlite.org/lang_aggfunc.html
>>
>>
>>
>>
>>
>>
>>
>>


Re: [sqlite] SQLite really SLOW on Solaris?

2010-11-20 Thread Roger Binns

On 11/20/2010 07:12 AM, Lynton Grice wrote:
> On Debian it processes a couple hundred messages per second, on Solaris
> using the SAME code it does like ONE message per second.

One possible cause is the busy handler but this only applies if you have
concurrent access to the file and have set a busy timeout.

The default SQLite busy handler keeps trying again and if it fails sleeps,
repeating the process.  It will sleep for longer and longer periods until
the overall busy timeout period has expired.

If you do not provide any -D flags then the call used to do the sleeping has
a granularity of one second - i.e. even if SQLite wants to sleep for 50ms it
will sleep for one second.  If you have the usleep() function and supply
-DHAVE_USLEEP then microsecond resolution sleeps are available.

These are the sleep intervals in milliseconds that sqliteDefaultBusyCallback
tries when USLEEP is available:

  { 1, 2, 5, 10, 15, 20, 25, 25,  25,  50,  50, 100 }
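For completeness, arming that busy handler is a single call; a minimal C
sketch (the file name and the 5000 ms value are arbitrary):

  #include "sqlite3.h"

  int main(void) {
      sqlite3 *db;
      if (sqlite3_open("shared.db", &db) != SQLITE_OK) return 1;

      /* Keep retrying busy locks for up to 5 seconds before returning
       * SQLITE_BUSY.  Without HAVE_USLEEP each retry sleeps a whole second. */
      sqlite3_busy_timeout(db, 5000);

      /* ... do the actual work here ... */

      sqlite3_close(db);
      return 0;
  }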

Roger


Re: [sqlite] How to disable and enable constraints in SQLite ?

2010-11-20 Thread Roger Binns

On 11/20/2010 08:33 AM, Chris Wolf wrote:
> I tried building your APSW wrapper.  It built fine, but running "python
> setup.py test" failed:
> 
> ERROR: Verify VFS functionality
> --
> Traceback (most recent call last):
>   File "/Users/cwolf/src/apsw-3.6.23.1-r1/tests.py", line 3526, in testVFS
> vfs=ObfuscatedVFS()
>   File "/Users/cwolf/src/apsw-3.6.23.1-r1/tests.py", line 3521, in __init__
> apsw.VFS.__init__(self, self.vfsname, self.basevfs)
> ValueError: Base vfs implements version 2 of vfs spec, but apsw only
> supports version 1
> 
> I am building apsw-3.6.23.1-r1  against sqlite3-3.7.3 (amalgamation
> pulled down via fetch) on MacOS 10.5.
> 
> Does this mean I won't be able to do virtual tables from Python?

The failing test is for VFS functionality, which deals with SQLite's
interface to the underlying operating system - e.g. file reads and writes(*).
It has no effect on support for virtual tables, which will work fine with any
combination of APSW and SQLite released since virtual tables were introduced.

In any event, you are using an APSW from the beginning of April, which
predates SQLite's introduction of the version 2 VFS.  Try a more recent APSW:

  http://code.google.com/p/apsw/

Unfortunately Google's left and right hands have issues talking to each
other so searching for APSW documentation brings up old pages from a Google
code site I can't delete and ignores the more recent documentation also on
Google code.  The links from the page above are however correct.

(*) And in particular the issue is with functionality in APSW that lets you
"inherit" from an existing VFS, for example if you want to augment or override
a few methods rather than write an entire one from scratch.  When doing this
it is a requirement that the VFS API versions match, hence the message.
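As a quick sanity check you can print the version of the wrapper and of the
SQLite it is running against, for example:

  import apsw
  print "APSW  ", apsw.apswversion()
  print "SQLite", apsw.sqlitelibversion()

If the reported SQLite is much newer than the APSW release, mismatches like
the VFS one above are to be expected.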

Roger


Re: [sqlite] Command line client and Tcl interface behaves differently?

2010-11-20 Thread Fredrik Karlsson
Hi,

Sorry, never mind this post. I actually haven't found the problem yet,
but a "bare minimum" example fed directly to the interpreter through
the command line works correctly:

% package require sqlite3
3.7.2
% set inf [open 1_schema.sql r]
file6
% set sql [read $inf]
[... the contents of the read file is dumped ..]
% close $inf
% sqlite3 db :memory:
% db eval $sql
% puts [db eval {SELECT sqlite_version(), sqlite_source_id()}]
3.7.2 {2010-08-23 18:52:01 42537b60566f288167f1b5864a5435986838e3a3}
% db eval {select * from SQLITE_MASTER} values {parray values}
[... data concerning all tables are dumped... ]

So, the error is somewhere else in the code. Sorry about that. :-/

/Fredrik

-- 
"Life is like a trumpet - if you don't put anything into it, you don't
get anything out of it."


Re: [sqlite] How to disable and enable constraints in SQLite ?

2010-11-20 Thread Chris Wolf
Roger,

I tried building your APSW wrapper.  It built fine, but running "python
setup.py test" failed:

ERROR: Verify VFS functionality
--
Traceback (most recent call last):
  File "/Users/cwolf/src/apsw-3.6.23.1-r1/tests.py", line 3526, in testVFS
vfs=ObfuscatedVFS()
  File "/Users/cwolf/src/apsw-3.6.23.1-r1/tests.py", line 3521, in __init__
apsw.VFS.__init__(self, self.vfsname, self.basevfs)
ValueError: Base vfs implements version 2 of vfs spec, but apsw only
supports version 1



I am building apsw-3.6.23.1-r1  against sqlite3-3.7.3 (amalgamation
pulled down via fetch) on MacOS 10.5.

Does this mean I won't be able to do virtual tables from Python?

Thanks,

-Chris

Roger Binns wrote:
>
> On 11/19/2010 05:52 AM, Andreas Mucha wrote:
>   
>> I have to copy data from some database into an SQLite database.
>> The fastest way for me would be to simply copy all tables.
>> To do so I have to disable all SQLite constraints, copy the data,
>> and finally enable the constraints again.
>>
>> How can this be done ?
>> 
>
> Disabling normal constraints doesn't make any sense.  For example if you
> have a 'not null' constraint then disabling while loading would allow null
> values in the data.
>
> Foreign keys can be disabled with "PRAGMA foreign_keys=OFF;" and you would
> have to do so if you had any.
>
> There are basically two ways of getting at the data.  One would be to
> implement virtual tables where the backend talks to your existing database
> system.  Copies can then be done 'insert into ... select ... from ...' or
> 'create table ... as select ... from ...'.  This approach would also let you
> only copy some of the data into SQLite while letting the rest remain in your
> existing database system and you could transparently use both at the same 
> time.
>
> The second approach is to create a dump - a text file with raw SQL
> statements in it.  This is easy to develop as you keep reading the SQL text
> until you are happy with it.  That text is also a good way of doing backups,
> comparisons between databases etc.  The SQLite shell includes a .dump
> command so you can get some idea of what they should look like.  If you are
> just doing this copy data process once then you'll do great.  If you will be
> repeating the process many times then you'll need to make your dump more 
> robust.
>
> Here is what my dump code outputs when dumping a SQLite database to give you
> an idea:
>
> - Disables foreign keys if they are used (this must be done outside a
> transaction)
> - "BEGIN TRANSACTION"
>
> For each table:
> - "DROP TABLE IF EXISTS name"
> - "CREATE TABLE name ..."
> - INSERT for each row data
> - Creates any indices
> - Creates any triggers
>
> - Creates any views last as they could involve multiple tables
> - Populates the sqlite_sequence table if any tables have autoincrement
> primary keys
> - Runs analyze for any tables that had that before
>
> - "COMMIT TRANSACTION"
> - Enable foreign keys
>
> If you are happy with Python as a language then I'll toot the benefits of my
> APSW wrapper around SQLite.  You can very easily implement virtual tables.
> It also has a builtin shell that produces nicer dumps than the SQLite shell
> (eg there are comments in it) and you can easily add your own commands if
> you want to provide a more interactive experience doing the data importing.
>
>  http://apidoc.apsw.googlecode.com/hg/vtable.html
>  http://apidoc.apsw.googlecode.com/hg/shell.html
>
> Roger


Re: [sqlite] How to disable and enable constraints in SQLite ?

2010-11-20 Thread Roger Binns

On 11/19/2010 05:52 AM, Andreas Mucha wrote:
> I have to copy data from some database into an SQLite database.
> The fastest way for me would be to simply copy all tables.
> To do so I have to disable all SQLite constraints, copy the data,
> and finally enable the constraints again.
> 
> How can this be done ?

Disabling normal constraints doesn't make any sense.  For example if you
have a 'not null' constraint then disabling while loading would allow null
values in the data.

Foreign keys can be disabled with "PRAGMA foreign_keys=OFF;" and you would
have to do so if you had any.

There are basically two ways of getting at the data.  One would be to
implement virtual tables where the backend talks to your existing database
system.  Copies can then be done 'insert into ... select ... from ...' or
'create table ... as select ... from ...'.  This approach would also let you
only copy some of the data into SQLite while letting the rest remain in your
existing database system and you could transparently use both at the same time.

The second approach is to create a dump - a text file with raw SQL
statements in it.  This is easy to develop as you keep reading the SQL text
until you are happy with it.  That text is also a good way of doing backups,
comparisons between databases etc.  The SQLite shell includes a .dump
command so you can get some idea of what they should look like.  If you are
just doing this copy data process once then you'll do great.  If you will be
repeating the process many times then you'll need to make your dump more robust.

Here is what my dump code outputs when dumping a SQLite database to give you
an idea:

- Disables foreign keys if they are used (this must be done outside a
transaction)
- "BEGIN TRANSACTION"

For each table:
- "DROP TABLE IF EXISTS name"
- "CREATE TABLE name ..."
- INSERT for each row data
- Creates any indices
- Creates any triggers

- Creates any views last as they could involve multiple tables
- Populates the sqlite_sequence table if any tables have autoincrement
primary keys
- Runs analyze for any tables that had that before

- "COMMIT TRANSACTION"
- Enable foreign keys
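A stripped-down sketch of what such a dump file can look like (the table and
data here are made up, purely to show the shape):

  PRAGMA foreign_keys=OFF;
  BEGIN TRANSACTION;

  DROP TABLE IF EXISTS customers;
  CREATE TABLE customers(
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
  );
  INSERT INTO customers VALUES(1, 'Alice');
  INSERT INTO customers VALUES(2, 'Bob');
  CREATE INDEX customers_name ON customers(name);

  COMMIT;
  PRAGMA foreign_keys=ON;

Note that the foreign key pragmas sit outside the BEGIN/COMMIT, as the list
above requires.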

If you are happy with Python as a language then I'll toot the benefits of my
APSW wrapper around SQLite.  You can very easily implement virtual tables.
It also has a builtin shell that produces nicer dumps than the SQLite shell
(eg there are comments in it) and you can easily add your own commands if
you want to provide a more interactive experience doing the data importing.

 http://apidoc.apsw.googlecode.com/hg/vtable.html
 http://apidoc.apsw.googlecode.com/hg/shell.html

Roger


Re: [sqlite] Strange Corruption

2010-11-20 Thread Pirmin Walthert
No, this is definitely not the reason in my case as I can reproduce this 
issue on every 3.7.2/3.7.3 machine I've tested after copying the 
database file (and only the database file) to these machines.

On 15.11.2010 15:41, Kirk Clemons wrote:
> Not sure if it helps but I would see this quite frequently when an old 
> journal file would be left behind in the same directory as the backup 
> database. This could be why making a change to the database such as vacuum 
> would prevent the corruption.


[sqlite] SQLite really SLOW on Solaris?

2010-11-20 Thread Lynton Grice
Hi there,

 

I compiled SQLite on Debian (VM) first using the following:

 

/usr/local/bin/gcc -c -fPIC sqlite3.c

/usr/local/bin/gcc -shared -o libsqlite3.so sqlite3.o

 

And then I created my own shared lib using SQLite and when I use the lib on
Debian it is SUPER FAST.

 

Then I installed SQLite on Solaris (VM) using the same compile commands
above. I copied my shared lib code from Debian to Solaris and ran the same
"test code"... and it is SO SLOW... like ridiculously slow.

 

On Debian it processes a couple hundred messages per second, on Solaris
using the SAME code it does like ONE message per second.

 

What could be wrong here? Must I change the compile options on Solaris? 

 

Any ideas?

 

Chat later 

 

Lynton 



Re: [sqlite] Command line client and Tcl interface behaves differently?

2010-11-20 Thread Fredrik Karlsson
On Sat, Nov 20, 2010 at 2:43 PM, Richard Hipp  wrote:
> On Sat, Nov 20, 2010 at 8:25 AM, Fredrik Karlsson wrote:
>
>> Dear list,
>>
>> I am having a silly problem, and need your expertise. I just want to
>> initiate a SQLite database using a schema file in Tcl, but I just get
>> an empty database whatever I do. I asked this question on the Tcl
>> list, and got a reply which I interpret to mean that this is not a
>> problem in my Tcl knowledge - but in my SQLite implementation.
>>
>> Anyway, loading from the same SQL file within the "sqlite3" command
>> line client is no problem
>>
>> Here is what I am doing:
>>
>> proc init_db {dbDir {dbFile db.sqlite3} {force 1} } {
>>
>>        set dbFile [file join [file normalize $dbDir] $dbFile ]
>>        if { $force == 1 && [file exists $dbFile]} {
>>                file delete $dbFile
>>        }
>>        sqlite3 db $dbFile
>>        set schemaFile [file normalize [file join  .. setup 1_schema.sql] ]
>>
>>        if {! [file exists $schemaFile] } {
>>                return -code error "Unable to open schema file $schemaFile"
>>        }
>>        set inf [open $schemaFile r]
>>        set sql [read $inf]
>>        close $inf
>>
>> Add here:   puts $sql
> Let's see what file you are really loading
>
>
>>        db eval $sql
>>
>
> And here:  puts [db eval {SELECT sqlite_version(), sqlite_source_id()}]
>
>
>>
>>        db close
>>        return [file normalize $dbFile ]
>>
>> }
>>
>> Does the Tcl interface and the "sqlite3" binary behave differently
>> when parsing instructions from a file?
>>
>> The SQL code in the "1_schema.sql" file is just table definitions and
>> comments using the /* */ syntax, which loads OK in the command line
>> client, so what could be wrong?
>>
>> Thankful for all the help I could get on this.
>>
>> /Fredrik
>>
>>
>> --
>> "Life is like a trumpet - if you don't put anything into it, you don't
>> get anything out of it."
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org

Thank you Richard for the quick response. This is really code that is
part of a benchmarking framework (which used to work, oddly enough), so
the output is a bit verbose.

-
$ tclsh8.6 run_all_benchmarks.tcl
[Sat Nov 20 14:50:09 CET 2010] [bench] [debug] '::bench::run -errors 1
-iters 10 -match Emu*simple*one* /usr/local/bin/tclsh8.6
./emuquery.benchmark'
[Sat Nov 20 14:50:09 CET 2010] [bench] [info] 'Benchmark
/usr/local/bin/tclsh8.6'
[Sat Nov 20 14:50:09 CET 2010] [bench] [info] 'emuquery.benchmark'
[Sat Nov 20 14:50:09 CET 2010] [bench] [info] 'Sourcing ./emuquery.benchmark'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'Running '
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'Copied 1 files to
/private/var/folders/N8/N8aqQyE1FE8Bb0ONohLfsk+++TM/-Tmp-'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info]
'/private/var/folders/N8/N8aqQyE1FE8Bb0ONohLfsk+++TM/-Tmp-'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'CREATE TABLE utterances ('
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'id INTEGER PRIMARY KEY
AUTOINCREMENT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'name TEXT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'filelength REAL,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'updated_at TEXT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'checksum_algorithm TEXT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'checksum TEXT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'UNIQUE(name) ON CONFLICT FAIL'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] ');'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'CREATE TABLE levels ('
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'id INTEGER PRIMARY KEY
AUTOINCREMENT,'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'name TEXT UNIQUE ON
CONFLICT FAIL'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] ',weight FLOAT UNIQUE'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] ');'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'CREATE TABLE level_level ('
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'parent_id INTEGER
REFERENCES levels(id),'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'child_id INTEGER
REFERENCES levels(id),'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'PRIMARY KEY(parent_id,child_id)'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] ');'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'CREATE TABLE tc_level_level ('
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'ancestor_id INTEGER
REFERENCES levels(id),'
[Sat Nov 20 14:50:10 CET 2010] [bench] [info] 'descendant_id INTEGER
REFERENCES levels(id),'

Re: [sqlite] Command line client and Tcl interface behaves differently?

2010-11-20 Thread Richard Hipp
On Sat, Nov 20, 2010 at 8:25 AM, Fredrik Karlsson wrote:

> Dear list,
>
> I am having a silly problem, and need your expertise. I just want to
> initiate a SQLite database using a schema file in Tcl, but I just get
> an empty database whatever I do. I asked this question on the Tcl
> list, and got a reply which I interpret to mean that this is not a
> problem in my Tcl knowledge - but in my SQLite implementation.
>
> Anyway, loading from the same SQL file within the "sqlite3" command
> line client is no problem
>
> Here is what I am doing:
>
> proc init_db {dbDir {dbFile db.sqlite3} {force 1} } {
>
>set dbFile [file join [file normalize $dbDir] $dbFile ]
>if { $force == 1 && [file exists $dbFile]} {
>file delete $dbFile
>}
>sqlite3 db $dbFile
>set schemaFile [file normalize [file join  .. setup 1_schema.sql] ]
>
>if {! [file exists $schemaFile] } {
>return -code error "Unable to open schema file $schemaFile"
>}
>set inf [open $schemaFile r]
>set sql [read $inf]
>close $inf
>
> Add here:   puts $sql
Let's see what file you are really loading


>db eval $sql
>

And here:  puts [db eval {SELECT sqlite_version(), sqlite_source_id()}]


>
>db close
>return [file normalize $dbFile ]
>
> }
>
> Does the Tcl interface and the "sqlite3" binary behave differently
> when parsing instructions from a file?
>
> The SQL code in the "1_schema.sql" file is just table definitions and
> comments using the /* */ syntax, which loads OK in the command line
> client, so what could be wrong?
>
> Thankful for all the help I could get on this.
>
> /Fredrik
>
>
> --
> "Life is like a trumpet - if you don't put anything into it, you don't
> get anything out of it."



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Command line client and Tcl interface behaves differently?

2010-11-20 Thread Fredrik Karlsson
Dear list,

I am having a silly problem, and need your expertise. I just want to
initiate a SQLite database using a schema file in Tcl, but I just get
an empty database whatever I do. I asked this question on the Tcl
list, and got a reply which I interpret to mean that this is not a
problem in my Tcl knowledge - but in my SQLite implementation.

Anyway, loading from the same SQL file within the "sqlite3" command
line client is no problem

Here is what I am doing:

proc init_db {dbDir {dbFile db.sqlite3} {force 1} } {

set dbFile [file join [file normalize $dbDir] $dbFile ]
if { $force == 1 && [file exists $dbFile]} {
file delete $dbFile
}
sqlite3 db $dbFile
set schemaFile [file normalize [file join  .. setup 1_schema.sql] ]

if {! [file exists $schemaFile] } {
return -code error "Unable to open schema file $schemaFile"
}
set inf [open $schemaFile r]
set sql [read $inf]
close $inf

db eval $sql

db close
return [file normalize $dbFile ]

}

Does the Tcl interface and the "sqlite3" binary behave differently
when parsing instructions from a file?

The SQL code in the "1_schema.sql" file is just table definitions and
comments using the /* */ syntax, which loads OK in the command line
client, so what could be wrong?

Thankful for all the help I could get on this.

/Fredrik


-- 
"Life is like a trumpet - if you don't put anything into it, you don't
get anything out of it."


Re: [sqlite] How to disable and enable constraints in SQLite ?

2010-11-20 Thread Simon Slavin

On 19 Nov 2010, at 1:52pm, Andreas Mucha wrote:

> I have to copy data from some database into an SQLite database.
> The fastest way for me would be to simply copy all tables.
> To do so I have to disable all SQLite constraints, copy the data,
> and finally enable the constraints again.

There's no easy way to do that in SQLite, I'm afraid.  The fastest non-custom 
way to fill a table with data is usually the following:

1) CREATE TABLE with the variable definitions you want it to have
2) BEGIN TRANSACTION
3) INSERT all the data
4) END TRANSACTION
5) CREATE INDEX for each index
6) CREATE TRIGGER for each trigger

You can either write your conversion software to do this, or write your 
conversion software to create a big text file with the appropriate SQL 
commands, and use the '.read' command of the command-line tool to turn it into 
a database file:

http://www.sqlite.org/sqlite.html

Using a long text file is often /far/ easier to debug.
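For instance (the file names are made up), if the conversion program wrote
its output to dump.sql, the database can be built non-interactively with:

  sqlite3 newdatabase.db ".read dump.sql"

or from inside the interactive shell:

  sqlite3 newdatabase.db
  sqlite> .read dump.sql

The BEGIN/END TRANSACTION from steps 2-4 above would simply be lines inside
dump.sql itself, and when something goes wrong you can open the file in a
text editor and see exactly which statement is at fault.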

Simon.


[sqlite] How to disable and enable constraints in SQLite ?

2010-11-20 Thread Andreas Mucha
Hello all,

I have to copy data from some database into an SQLite database.
The fastest way for me would be to simply copy all tables.
To do so I have to disable all SQLite constraints, copy the data,
and finally enable the constraints again.

How can this be done ?

Thanx in advance.
Andreas


Re: [sqlite] page cache vs OS cache

2010-11-20 Thread Max Vlasov
On Sat, Nov 20, 2010 at 5:44 AM, Yang  wrote:

> I wonder why we need page cache, since we already have OS file cache.
>
> in other words, if we simply set OS cache to be very large, would it
> have the same effect as page cache?
>
>
Yang,
sqlite operates at some level of abstraction. For example, it doesn't know
anything about the cost of reading one particular page. A read request
routes through its own VFS interface (virtual file system), where encryption
or compression, for example, could be implemented; the resulting read
request(s) then go to the OS, which may or may not have a hard disk cache
enabled; and at the final stage there may be a RAM disk, hard disk, network
drive and so on. But SQLite surely knows that if it saves a just-read page
in its own cache, it will retrieve it again at the fastest possible speed.
And even if you as a developer know that you're on a desktop OS with the
system cache enabled and the database is on a fast hard disk, the call to
read a page still costs a little more.

Another observation: it is well known that on some systems (Windows, for
example) the system cache is very hard to control. Even if you use some
tricky utility, the system will probably treat your setting only as a hint.
Also, the logic behind the system cache is sometimes very complex or simply
hidden.

So if you really want to control caching, pragma cache_size is a better way
imho. Speaking of resource consumption, one can expect that making the
sqlite cache big forces many pages to be cached twice - once in the system
cache and once in sqlite's - but I'm sure the system cache usually has sane
logic and never devotes very much to one single file on a constant basis.
On some systems, though, a file can be opened with a special flag that
disables the system cache for that particular file. If that were used in
sqlite, it would give the developer a way to rely fully on the sqlite cache
and save some resources.
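For reference, the kind of flag alluded to above looks like this on Linux (a
sketch only - SQLite does not open its files this way by default, and
O_DIRECT reads additionally require properly aligned buffers; Mac OS X has a
similar fcntl(F_NOCACHE)):

  #define _GNU_SOURCE          /* needed for O_DIRECT on Linux */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void) {
      /* Ask the kernel not to buffer this file in its page cache. */
      int fd = open("test.db", O_RDONLY | O_DIRECT);
      if (fd < 0) return 1;
      /* ... aligned reads would go here ... */
      close(fd);
      return 0;
  }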

Max Vlasov,
maxerist.net