Re: [sqlite] SQLite caching

2009-02-23 Thread Jonas Sandman
Isn't it more likely that your database is pulled up into the OS disk cache?
Try rebooting the computer between runs and the cache should be cleared.

On Mon, Feb 23, 2009 at 8:44 AM, manohar s  wrote:
> Hi,
>  I am doing some performance analysis on my SQLite queries. The problem is
> SQLite seems to be caching query results.
> I tried restarting my program, but that is not helping. Only if I don't access
> that database for 2 days does it give proper profile data.
> *Question*
> 1) How can I disable query result caching in SQLite?
>
> Regards,
> Manohar.S
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite caching

2009-02-23 Thread Dan

On Feb 23, 2009, at 2:44 PM, manohar s wrote:

> Hi,
> I am doing some performance analysis on my SQLite queries. The  
> problem is
> SQLite seems to be caching query results.
> I tried restarting my program, but that is not helping. Only if I don't
> access that database for 2 days does it give proper profile data.
> *Question*
> 1) How can I disable query result caching in SQLite?

Assuming you are using SQLite directly, SQLite does not cache query
results. It does cache pages of the database file in memory to reduce
the amount of data loaded from the disk. This cache is just in regular
heap memory, so it is destroyed when the process using sqlite exits.

Probably what you are seeing is that the OS caches recently accessed
file-system sectors. The first time you run the SQLite query data must
be loaded from the magnetic disk. The second and subsequent runs
retrieve data from the OS cache. Much faster.
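The effect Dan describes is easy to observe. Below is a rough, hypothetical sketch using Python's built-in sqlite3 module (not the C API): a fresh connection is opened for each run, so SQLite's own page cache cannot carry over, yet the second run is typically faster because the OS file cache is warm. The file name and table are invented.

```python
import os
import sqlite3
import tempfile
import time

# Build a throwaway database file (path is arbitrary) so the first
# read has a chance of coming from disk rather than any cache.
path = os.path.join(tempfile.mkdtemp(), "profile_test.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(100000)))
con.commit()
con.close()

def timed_query():
    # A brand-new connection starts with an empty SQLite page cache,
    # but the OS file cache persists across connections and processes.
    con = sqlite3.connect(path)
    t0 = time.perf_counter()
    n = con.execute("SELECT count(*) FROM t WHERE x % 7 = 0").fetchone()[0]
    elapsed = time.perf_counter() - t0
    con.close()
    return n, elapsed

n1, t1 = timed_query()  # first run: may have to hit the disk
n2, t2 = timed_query()  # second run: likely served from the OS cache
print(n1, n2)
```

On Linux the OS cache can be dropped between runs with `sync; echo 3 > /proc/sys/vm/drop_caches` (as root); on Windows, rebooting as Jonas suggests is the simplest reliable option.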

Dan.





> Regards,
> Manohar.S
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread Kim Boulton
Hehe, probably a combination of rubbish grep (I used the regex function in a 
text editor) and vacuuming a 4GB table at the same time.

@echo off
setlocal
set starttime=%time%
egrep --count 
"(W[CEF][SZ]|..W[CEF]S|...W[CEF]S|W[3CEF]S[25]..|W3S..|.11[CEF]S.)," 
my-30-million-rows-of-data.txt
set stoptime=%time%
echo Started: %starttime%
echo Ended: %stoptime%

results in:
24561
Started:  9:00:58.82
Ended:  9:01:34.29

36-ish seconds. Obviously the regex needs a bit of work, as there are 
supposed to be around 200,000 matches.

Interesting nonetheless; I'd never used grep before... useful.

k

John Machin wrote:
> On 23/02/2009 5:14 PM, Kim Boulton wrote:
>   
>> Hello,
>>
>> Thanks
>>
>> The grep regex on the text file found around 10,000 lines over 5 minutes 
>> (out of a total possible 200,000 rows), at which time I stopped it, 
>> interesting experiment anyway :-)
>> 
>
> Uh-huh ... so you'd estimate that it would take 5 minutes * (200K rows / 
> 10k rows) = 100 minutes to get through the lot, correct?
>
> I tried an experiment on a 161Mb CSV file with about 1.1M 
> name-and-address-etc rows in it. Because none of the patterns in your 
> query are likely to match my data, I added an extra pattern that would 
> select about 22% of the records (ended up with 225K output rows), 
> putting it at the end to ensure it got no unfair advantage from a regex 
> engine that tested each pattern sequentially.
>
> BTW, I had to use egrep (or grep -E) to get it to work.
>
> Anyway, it took about 6 seconds. Scaling up by number of input records: 
> 6 * 30M / 1M = 180 seconds = 3 minutes. Scaling up by file size: 6 * 500 
> / 161 = 19 seconds. By number of output rows: 6 * 200 / 225 ... forget 
> it. By size of output rows: ... triple forget it.
>
> Conclusion: something went drastically wrong with your experiment. 
> Swapping? Other processes hogging the disk or the CPU? A really duff grep??
>
> Anyway, here's my environment: 2.0 GHz single-core AMD Turion (64 bit 
> but running 32-bit Windows XP SP3), using GNU grep 2.5.3 from the 
> GnuWin32 project; 1 GB memory.
>
> Cheers,
> John
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread John Machin
On 23/02/2009 8:14 PM, Kim Boulton wrote:
> Hehe, probably a combination of rubbish grep (I used the regex function in a 
> text editor) and vacuuming a 4GB table at the same time.

google("scientific method") :-)

> 
> @echo off
> setlocal
> set starttime=%time%
> egrep --count 
> "(W[CEF][SZ]|..W[CEF]S|...W[CEF]S|W[3CEF]S[25]..|W3S..|.11[CEF]S.)," 
> my-30-million-rows-of-data.txt
> set stoptime=%time%
> echo Started: %starttime%
> echo Ended: %stoptime%
> 
> results in:
> 24561
> Started:  9:00:58.82
> Ended:  9:01:34.29
> 
> 36-ish seconds. obviously the regex needs a bit of work as there are 
> supposed to be around 200,000 matches.

Probably a big contributing factor is that my regex was based on your 
getting rid of the commas in the part number. If the above input file is 
in your original format, you need to sprinkle commas about madly; the 
first subpattern would become:
.,.,.,.,W,[CEF],[SZ]

Note that your average record size is 16 to 17 bytes. If you lose 6 
commas, it will be 10 to 11 bytes per record, i.e. the file can be reduced 
from about 500 MB to about 320 MB ... quite a useful saving in processing 
time as well as disk space.

> 
> interesting nonetheless, never used grep before...useful.

Sure is.


Cheers,
John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite caching

2009-02-23 Thread manohar s
Thanks for your quick replies. Although restarting my machine clears the disk
cache, I am trying to find a utility which could do the job without
requiring a restart.

Regards,
Manohar.S

On Mon, Feb 23, 2009 at 1:47 PM, Dan  wrote:

>
> On Feb 23, 2009, at 2:44 PM, manohar s wrote:
>
> > Hi,
> > I am doing some performance analysis on my SQLite queries. The
> > problem is
> > SQLite seems to be caching query results.
> > I tried restarting my program, but that is not helping. Only if I don't
> > access that database for 2 days does it give proper profile data.
> > *Question*
> > 1) How can I disable query result caching in SQLite?
>
> Assuming you are using SQLite directly, SQLite does not cache query
> results. It does cache pages of the database file in memory to reduce
> the amount of data loaded from the disk. This cache is just in regular
> heap memory, so it is destroyed when the process using sqlite exits.
>
> Probably what you are seeing is that the OS caches recently accessed
> file-system sectors. The first time you run the SQLite query data must
> be loaded from the magnetic disk. The second and subsequent runs
> retrieve data from the OS cache. Much faster.
>
> Dan.
>
>
>
>
>
> > Regards,
> > Manohar.S
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
hope is the only thing which keeps us all happy
http://sqlyog.wordpress.com/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite optim

2009-02-23 Thread Jérôme Loyet
Hello everyone,

I'm doing some web log analysis. I decided to use a sqlite database
because the application has to be easily portable to another
workstation or server and there is absolutely no need for multiple clients.
Moreover, I don't want to spend my time dealing with the database.

What I want to do is quite simple. I want to parse the logs from a
high-traffic apache website (load-balanced on 8 front servers) and
integrate them into a database. That's the easy part. In the apache
configuration I added mod_usertrack in order to be able to follow
a user in the logs.

Then I want to create parent/child relations between requests from a
unique user (identified by mod_usertrack). The final goal is to see all
users. Then, per user, we can see the requests issued from a click on a
webpage (manual human interaction --> parent). Finally, for each parent
request (click) I want to have all sub-requests issued from the
parent page (automatic fetches: images, css, javascript, ...).

The overall aim of all this is to analyse the behaviour of visitors on
our website, to extract different populations of users, and to make
statistics.

Here are the tables I use:

CREATE TABLE IF NOT EXISTS requests (
 id integer primary key autoincrement,
 date date not null,
 request varchar not null,
 method varchar not null,
 code integer not null,
 referer varchar,
 response_time integer not null,
 vhost varchar DEFAULT NULL,
 cookie not null REFERENCES cookies(id))

CREATE TABLE IF NOT EXISTS cookies (
 id integer primary key autoincrement,
 cookie varchar not null UNIQUE)

CREATE TABLE IF NOT EXISTS siblings (
 parent not null REFERENCES requests(id),
 child not null REFERENCES requests(id))

I ran a test on 2 hours of logs from the 18th of February (from 10:00AM
to 11:59AM).

The first step is to parse the logs in order to populate the
`requests` and `cookies` tables. It took about 2 minutes to parse and
insert 238,304 cookies and 1,163,728 requests. After this step the
database is 317 MB in size.

Then I have to populate the `siblings` table. I use the following request:

INSERT INTO siblings (parent, child) SELECT a.id, b.id FROM requests a
INNER JOIN requests b ON a.cookie = b.cookie WHERE a.page = b.referer
AND b.date BETWEEN a.date AND a.date + 20

It took about 3 hours and 30 minutes. At the end there are 7,158,001 rows
in the `siblings` table and the database is about 600 MB in size.

My questions:
1- Is there a better way to populate the `siblings` table? (optimize
my sql request)
2- What can I do to optimize the whole process? (play with some PRAGMA
parameters maybe)
3- Is sqlite a good choice for this case? Would mysql or postgresql be a
better choice?

Thanks in advance for your time.

++ Jerome
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite question: group by column with multiple tags?

2009-02-23 Thread Yuzem

Thanks for the answer.

Igor Tandetnik wrote:
> This monstrosity gives the correct answer in your specific example, but 
> it relies on there being exactly two tags per folder.
It can be any number of tags per folder.

Igor Tandetnik wrote:
> 
> Consider normalizing your database. Split into two tables - folders and 
> tags - with a one-to-many relationship between them. Then the solution 
> would be trivial.
How can I do that, considering that there can be any number of tags?


Igor Tandetnik wrote:
> 
> select
> (select count(*) from (select distinct folder from t1)),
> (select count(*) from (
> select replace(rtrim(tags,'abcdefghijklmnopqrstuvwxyz'),',','') 
> from t1
> union
> select replace(ltrim(tags,'abcdefghijklmnopqrstuvwxyz'),',','') 
> from t1));
> 
> Same caveat applies.
Ok, this answers my question, at least for normal columns. Thanks.

How should I store tags in my database so that I am able to count them? Do
you know how other programs like Firefox do this?
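For reference, a minimal sketch (in Python, with invented names) of the two-table split Igor describes: one row per folder, one row per (folder, tag) pair, so any number of tags per folder works and counting becomes a plain aggregate:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE folders(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
-- One row per (folder, tag) pair instead of a comma-separated column.
CREATE TABLE tags(folder_id INTEGER REFERENCES folders(id), tag TEXT);
INSERT INTO folders(name) VALUES ('pics'), ('docs');
INSERT INTO tags VALUES (1, 'holiday'), (1, 'family'), (2, 'work');
""")
folders = con.execute("SELECT count(*) FROM folders").fetchone()[0]
distinct_tags = con.execute("SELECT count(DISTINCT tag) FROM tags").fetchone()[0]
per_folder = con.execute("""
    SELECT f.name, count(t.tag) FROM folders f
    LEFT JOIN tags t ON t.folder_id = f.id
    GROUP BY f.id ORDER BY f.id
""").fetchall()
print(folders, distinct_tags, per_folder)
```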
-- 
View this message in context: 
http://www.nabble.com/Sqlite-question%3A-group-by-column-with-multiple-tags--tp22153722p22159627.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite

2009-02-23 Thread jose isaias cabrera

Funny...

"Fred Williams" wrote...

> Since the dawn of digital computers the CPU has been waiting on the I/O.
> Want to go faster?  Get a faster mass storage device.  Then your CPU usage
> will most likely jump all the way up to 9% - 14%!
>
> You can't believe what a 300 card per minute 80 column card reader does to
> throughput when you use it to replace a 100 card per minute reader!  I 
> know,
> been there, done that! :-)
>
> Sit back and watch the flashing lights.  BUT, keep your hands in your
> pockets!

That is funny.  Ok, I just thought that there might be something I could 
do.  Thanks.


>
> Fred
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]on Behalf Of jose isaias cabrera
> Sent: Sunday, February 22, 2009 1:57 PM
> To: General Discussion of SQLite Database
> Subject: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite
>
>
>
> Greetings!
>
> I have a very small database, well, 62.1 MB (65,150,976 bytes), and when I
> do a search on it, the CPU only goes to 7% and 12% at the most.  Sometimes
> it takes a few seconds to return with the results.  I am not complaining
> about the 5-10 seconds of the returning of the result, but I believe that 
> if
> the CPU would be used more, the result would return in 1-4 seconds.
>
> I have left everything default, so I have not set any PRAGMA settings.
>
> Any ideas on how to make SQLite use the CPU more, since it is sitting there
> with System Idle Process at 93%?
>
> Any help would be greatly appreciated.
>
> thanks,
>
> josé
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite

2009-02-23 Thread jose isaias cabrera

"P Kishor" wrote...

> On Sun, Feb 22, 2009 at 2:14 PM, Fred Williams  
> wrote:
>> Since the dawn of digital computers the CPU has been waiting on the I/O.
>> Want to go faster?  Get a faster mass storage device.  Then your CPU 
>> usage
>> will most likely jump all the way up to 9% - 14%!
>>
>> You can't believe what a 300 card per minute 80 column card reader does 
>> to
>> throughput when you use it to replace a 100 card per minute reader!  I 
>> know,
>> been there, done that! :-)
>>
>> Sit back and watch the flashing lights.  BUT, keep your hands in your
>> pockets!
>
> Very funny and illuminating as well.
>
> Here is another way to look at it. Moving at the speed of light, that
> is, at 186,000 miles per second, electrons take about
> 0.00000000101824698598782 seconds (about a nanosecond) to travel about
> 12 inches, say, the distance between the data on the hard disk platter
> and the CPU. Pretty damn fast, I say.
>
> Except, in that time, my lowly Macbook CPU flipping around wildly at
> 2.4 GHz, has already been waiting, twiddling its thumbs for almost 2.5
> cycles.
>
> Work on improving your db, your application and your hard disk. Or buy
> enough ram to put everything in etherspace, then go get a cup of
> coffee.
>

Yeah, I would say that he was funny.  Thanks.  I get it now. :-)

josé


>> Fred
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org]on Behalf Of jose isaias cabrera
>> Sent: Sunday, February 22, 2009 1:57 PM
>> To: General Discussion of SQLite Database
>> Subject: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite
>>
>>
>>
>> Greetings!
>>
>> I have a very small database, well, 62.1 MB (65,150,976 bytes), and when 
>> I
>> do a search on it, the CPU only goes to 7% and 12% at the most. 
>> Sometimes
>> it takes a few seconds to return with the results.  I am not complaining
>> about the 5-10 seconds of the returning of the result, but I believe that 
>> if
>> the CPU would be used more, the result would return in 1-4 seconds.
>>
>> I have left everything default, so I have not set any PRAGMA settings.
>>
>> Any ideas on how to make SQLite use the CPU more since it is sitting 
>> there
>> with System Idle Process 93%.
>>
>> Any help would be greatly appreciated.
>>
>> thanks,
>>
>> josé
>>
>
>
>
> -- 
> Puneet Kishor http://www.punkish.org/
> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
> Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite

2009-02-23 Thread jose isaias cabrera

"Alexey Pechnikov" wrote...

> Hello!
>
> On Sunday 22 February 2009 22:56:36 jose isaias cabrera wrote:
>> I have left everything default, so I have not set any PRAGMA settings.
>
> Try this:
> pragma cache_size=1;
> pragma page_size=16384;
> vacuum;

Wow, thanks.  That did help. Cool, so there was something that I could do. 
Here is a question: this DB is shared by other folks; do they each need to 
set this when they open it, or, if I set it, will it apply to everyone that 
connects to it?

thanks,

josé 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread Thomas Briggs
   Why do you need the 7 single-column indexes?  Do you ever do a
lookup on a single column?  Bear in mind that typically only 1 index is
used per table in a query, so having seven separate indexes on seven
separate columns means that six usually go unused.

   I'm curious why the UNION is faster than the OR'ed version... can
you post the EXPLAINs of the two?  (Feel free to send them to me
personally... I dunno if the rest of the list cares :)

   -T

On Mon, Feb 23, 2009 at 1:43 AM, Kim Boulton  wrote:
>
> Sorry, I forgot to mention that I was running the query twice, first to
> cache it, second to measure the speed.
>
> Yeah, it's a horrible query, but it produces the fastest results in Mysql.
> Maybe using OR instead works better on Sqlite; working on that one.
>
> The problem I have with indexes in sqlite is that the database balloons 400MB
> in size with every index I add. At the moment it's 4GB with one on each
> column; if I add more indexes across other columns I'm thinking it'll
> get too big to cache.
>
> thanks
>
> Thomas Briggs wrote:
>>For starters, I think that loading the index into the cache in
>> MySQL is biasing your performance measures.  SQLite will automatically
>> load pages of any necessary indexes into memory as part of executing
>> the query, but doing so takes time.  By preloading the index on MySQL,
>> you're removing that time from the amount of time MySQL spends
>> executing that query, so you're not really comparing apples to apples.
>>  Said another way: how long does the query take in MySQL if you don't
>> pre-load the index?
>>
>>Secondly, the page size needs to be set before the database is
>> created, i.e. before you execute the CREATE TABLE statement.
>>
>>Thirdly, that's a pretty horrible query. :)  Collapsing the
>> different queries into fewer queries - even if it isn't a single query
>> - will speed things up.  Each SELECT is a separate query that requires
>> that the table data be read; reducing the number of SELECTs will thus
>> reduce query time.
>>
>>Lastly, your indexes need to reflect how the data is accessed.  For
>> example, if you're going to do "WHERE c5 = x AND c6 = y AND c7 = 7"
>> then you need one index on (c5, c6, c7) - not three individual indexes
>> on c5, c6, and c7.
>>
>>Performance tuning rule #1 - problems are almost always in your use
>> of the database, not the database itself. :)
>>
>>-Tom
>>
>> On Sun, Feb 22, 2009 at 3:48 AM, Kim Boulton  wrote:
>>
>>> Thanks for the hints so far.
>>>
>>> Here is my existing Mysql setup and what I've tried with Sqlite
>>>
>>> *Mysql Table structure:*
>>> CREATE TABLE  `mydb`.`mytable` (
>>>  `c1`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NOT NULL,
>>>  `c2`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NOT NULL,
>>>  `c3`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NOT NULL,
>>>  `c4`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NOT NULL,
>>>  `c5`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NOT NULL,
>>>  `c6`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NULL,
>>>  `c7`
>>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>>> NULL,
>>>  `price` smallint(4) NOT NULL,  KEY `c1` (`c1`),  KEY `c2` (`c2`),  KEY
>>> `c3` (`c3`),  KEY `c4` (`c4`),  KEY `c5` (`c5`),  KEY `c6` (`c6`),  KEY
>>> `c7` (`c7`),  KEY `price` (`price`) ) ENGINE=MyISAM DEFAULT
>>> CHARSET=latin1 PACK_KEYS=1 ROW_FORMAT=COMPRESSED;
>>>
>>> *CSV Import Data e.g.:*
>>> 1,A,P,0,0,X,X,300
>>> 1,A,P,0,0,X,P,
>>> A,A,P,0,0,,,2000
>>> B,3,Y,0,1,X,,300
>>>  approx 30 million row, 500MB csv, text file
>>>
>>> If I compress the table it is approximately 900MB in size with an index
>>> of approximately 550MB, which i can load into memory via LOAD INDEX INTO
>>> CACHE
>>>
>>> *Mysql Query example:*
>>> SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN
>>> ('S','Z')) UNION ALL
>>> SELECT * FROM mytable WHERE (c3 = 'W' AND c4 IN ('C','E','F') AND c5 =
>>> 'S' AND c6 IS NULL) UNION ALL
>>> SELECT * FROM mytable WHERE (c4 = 'W' AND c5 IN ('C','E','F') AND c6 =
>>> 'S' AND c7 IS NULL) UNION ALL
>>> SELECT * FROM mytable WHERE (c1 = 'W' AND c2 IN ('3','C','E','F') AND c3
>>> IN ('2','5') ) UNION ALL
>>> SELECT * FROM mytable 

Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread D. Richard Hipp

On Feb 23, 2009, at 2:42 AM, Kim Boulton wrote:
>
> But OR is a lot slower than using UNION ALL on both Sqlite and Mysql


The optimizer was significantly enhanced for version 3.6.8 in order to  
better handle OR in WHERE clauses.  What version of SQLite did you  
test this with?


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3_open expensive?

2009-02-23 Thread Christoph Schreiber
Hi!
How expensive is sqlite3_open? I'm working on a small 
(one-thread-per-connection) server. Is it smart (in terms of CPU time and 
memory usage) to call sqlite3_open for every connection, or is it better to 
create a pool of pre-opened sqlite connections?
Regards, luky

ps: sqlite3_enable_shared_cache is turned on.
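For what it's worth, sqlite3_open defers much of the real work (such as reading the schema) until first use, so opening is fairly cheap; still, a pool amortizes whatever per-open cost there is. Here is a minimal sketch in Python rather than the C API the question concerns (the class name and pool size are invented):

```python
import queue
import sqlite3

class Pool:
    """Tiny pre-opened connection pool (illustrative, not production code)."""
    def __init__(self, path, size=4):
        self.q = queue.Queue()
        for _ in range(size):
            # check_same_thread=False lets a pooled connection be used by
            # whichever worker thread borrows it; the queue serializes access.
            self.q.put(sqlite3.connect(path, check_same_thread=False))

    def run(self, sql, args=()):
        con = self.q.get()            # borrow a connection (blocks if empty)
        try:
            return con.execute(sql, args).fetchall()
        finally:
            self.q.put(con)           # always return it to the pool

pool = Pool(":memory:")  # note: each ":memory:" connection is a private db
rows = pool.run("SELECT 1 + 1")
print(rows)
```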


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLITE_THREADSAFE=1 and in-memory databases

2009-02-23 Thread Marco Bambini
Hello,

I have compiled sqlite with SQLITE_THREADSAFE=1, and inside my  
application I have 2 concurrent threads that need to open some in-memory  
databases that must be completely independent, like unique files  
on disk. I open the in-memory dbs with: sqlite3_open(":memory:",  
) ... but from the errors I received during app usage I wonder  
if the combination of the compile option and the function used to open the  
db gives me some sort of shared in-memory db. Should I use  
sqlite3_open_v2 with the SQLITE_OPEN_FULLMUTEX flag set?

In other words ... I need to open in-memory dbs in a multithreaded  
environment and they must be unique.
Any help would be really appreciated.
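As a point of comparison, in Python's sqlite3 (which does not enable shared cache) two plain ":memory:" opens are fully independent databases; a quick check:

```python
import sqlite3

# Two plain ":memory:" opens: each connection gets its own private database.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
a.execute("CREATE TABLE t(x)")
a.execute("INSERT INTO t VALUES (1)")
a.commit()

try:
    b.execute("SELECT * FROM t")   # t exists only in a's database
    shared = True
except sqlite3.OperationalError:   # "no such table: t" -> independent dbs
    shared = False
print(shared)
```

Note that SQLITE_OPEN_FULLMUTEX only selects the serialized threading mode; as far as I know it does not control cache sharing, so if the databases appear shared it is worth looking at the sqlite3_enable_shared_cache call instead.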

Thanks.
-- 
Marco Bambini
http://www.sqlabs.com
http://www.creolabs.com/payshield/





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite optim

2009-02-23 Thread Alexey Pechnikov
Hello!

On Monday 23 February 2009 13:09:58 Jérôme Loyet wrote:
> My questions:
> 1- Is there a better way to populate the `siblings` table ? (optimize
> my sql request)

You may use a composite index on (cookie, referer, date) and the REAL 
datatype for dates. 
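A sketch of that suggestion in Python. Note the posted schema has no `page` column even though the INSERT...SELECT references `a.page`, so one is added here; the index name is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE requests (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  date REAL NOT NULL,              -- REAL dates, as suggested
  page VARCHAR,                    -- added: the query references a.page
  referer VARCHAR,
  cookie NOT NULL);
-- Composite index covering the join and filter columns.
CREATE INDEX idx_cookie_referer_date ON requests(cookie, referer, date);
""")
plan = con.execute("""
EXPLAIN QUERY PLAN
SELECT a.id, b.id FROM requests a
INNER JOIN requests b ON a.cookie = b.cookie
WHERE a.page = b.referer AND b.date BETWEEN a.date AND a.date + 20
""").fetchall()
print(plan)  # the inner table should be searched via the composite index
```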

> 2- What can I do to optimize the all process ? (play with some PRAGMA
> parameters maybe)

pragma page_size=4096;
pragma default_cache_size=200000;
pragma cache_size=200000;

(the cache with these options is 200,000 * 4 KB = 800 MB).
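Checking that arithmetic with Python's sqlite3 (200,000 pages of 4 KB each):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA page_size=4096")
con.execute("PRAGMA cache_size=200000")
page = con.execute("PRAGMA page_size").fetchone()[0]
cache = con.execute("PRAGMA cache_size").fetchone()[0]
mib = page * cache / (1024 * 1024)
print(page, cache, round(mib))  # 4096 200000 781 -- roughly the 800 MB figure
```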

> 3- Is sqlite a good choice for this case ? Would mysql or postgresql a
> better choice ?

SQLite is a good choice, but a data-flow parser is much better than SQL 
queries for real-time analysis. You could write a daemon that prepares the 
data for loading into the database and inserts it periodically with 
transactions.

Best regards.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Xcode project for amalgamation

2009-02-23 Thread P Kishor
I am tinkering with (aka learning) Xcode 3.1.2, and am trying to set
up an Xcode project for SQLite amalgamation. What could be simpler,
no?

My Xcode project is throwing errors. Could someone directly (offlist)
email me their Xcode project file so I may learn from it?

Many thanks,

-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
Sent from: Madison Wisconsin United States.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite cache question

2009-02-23 Thread Marcus Grimm
Hi all,

I tried to scan the list and doc pages to better understand
sqlite's concept of caching, but I'm still not really
sure how to change the cache parameters to get the best
performance:

Assuming I change the cache to page_size = 4096 and
cache_size = 10000, sqlite will then use approx. 40 MB of cache memory
per db connection.
per db connection.

I've learned in a recent reply here that sqlite will reload the cache
for every db connection when one of the connections did a write
to the db, to avoid the other connections having stale cache data.
Doesn't this mean that increasing the cache size will actually slow
down database read/write operations in a multi-threaded / multi-connection
application, because now in the above example each connection will
reload 40 MB before going ahead?

Also: I think it is dangerous to add too much cache in a multi-connection
application, since each connection will allocate the cache, and by having e.g. 20
connections running it will already allocate around 1 GB of RAM; am I right?
I've seen much higher cache size recommendations here (like 400 MB) and I'm 
wondering
how that can work in a multi-connection application when I assume a maximum
of 2 GB of RAM (on a Win-32 system).

I know that there is the shared cache mode, but I got the impression that
this option is not very often used or recommended, so I'll skip it for now.

--
In the man pages for the cache pragma I read:
"If you are doing UPDATEs or DELETEs that change many rows
of a database and you do not mind if SQLite uses more memory, you can 
increase... "

So actually the cache is only useful for sql statements that alter the
database and not for simple select statements?
I guess the man page is not really correct, right?
Otherwise I don't see why one would change the cache size, because write
operations wait for disk IO anyhow, and I think that will outweigh
any cache benefit.
--

I'm not saying that I have a performance problem -- sqlite is doing great! I 
just want to set up my application and the default cache sizes in the best 
way, so they still fit when the db file grows in the future.

Thank you

Marcus Grimm

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from SQLite

2009-02-23 Thread Fred Williams
That's the reason in-memory databases are so fast.  If a DB is small enough
you can spool the whole thing into RAM on open and spool it back out on
close.

If you are attempting to use the DB in any kind of multi-user client
environment, large cache sizes are going to play H--- with concurrency.

Fred

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org]on Behalf Of jose isaias cabrera
Sent: Sunday, February 22, 2009 10:41 PM
To: pechni...@sandy.ru; General Discussion of SQLite Database
Subject: Re: [sqlite] Windows XP: How Can I Get Higher CPU usage from
SQLite



"Alexey Pechnikov" wrote...

> Hello!
>
> On Sunday 22 February 2009 22:56:36 jose isaias cabrera wrote:
>> I have left everything default, so I have not set any PRAGMA settings.
>
> Try this:
> pragma cache_size=1;
> pragma page_size=16384;
> vacuum;

Wow, thanks.  That did help. Cool, so there was something that I could do.
Here is a question: this DB is shared by other folks. Do they each need to
set this when they open it, or if I set it, will it apply to everyone that
connects to it?

thanks,

josé
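Editor's note on the question above: PRAGMA cache_size applies only to the connection that sets it, while the page size becomes a property of the database file itself (once the file is created or VACUUMed), so it is seen by every connection. A small sketch using Python's sqlite3 module (file name and values are illustrative):

```python
import os
import sqlite3
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "shared.db")   # arbitrary file name

con1 = sqlite3.connect(path)
con1.execute("PRAGMA page_size = 8192")    # takes effect when the file is first written
con1.execute("PRAGMA cache_size = 10000")  # connection-local setting
con1.execute("CREATE TABLE t (v)")         # fixes the page size in the file
con1.commit()
con1.close()

con2 = sqlite3.connect(path)               # a second user opening the same DB
(page,) = con2.execute("PRAGMA page_size").fetchone()
(cache,) = con2.execute("PRAGMA cache_size").fetchone()
# page is 8192 (stored in the file); cache is back to the default,
# so each connection must set cache_size for itself.
```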



[sqlite] Alpha numeric collation

2009-02-23 Thread John Elrick
I understand this problem can be solved with a custom collation;
however, if at all possible I'd prefer not to reinvent the wheel.  I was
curious whether there are any pre-existing solutions that my Google searches
and reading of the documentation failed to turn up.

The problem: Is there any existing collation solution that respects 
numeric values intermixed with alpha?  As an example, if we have the 
following rows:

1
a
10
4
51
9

traditional alpha-numeric sorting would sort them as follows:

1
10
4
51
9
a

What the customer desires is for the sorting to treat strictly numeric 
values as though they were indeed numerics:

1
4
9
10
51
a

Thanks for any feedback, including "nope, you have to roll your own".


John Elrick
Fenestra Technologies


Re: [sqlite] Xcode project for amalgamation

2009-02-23 Thread William Kyngesburye
On Feb 23, 2009, at 9:15 AM, P Kishor wrote:

> Ok, thanks William. I started again with BSD Dynamic Library template,
> and this time I got only one error... a warning about an unused
> variable 'err' on line 26510 of sqlite3.c. That line reads
>
> 26509> /* if mkdir fails, handle as lock file creation failure */
> 26510>   int err = errno;
>
> (hence, cc-ing this message to the sqlite list)
>
unused vars - harmless.

> But, it did build. Now, of course, since I asked for just a BSD
> Dynamic Library, I got a libsqlite.dylib in the Products folder. I
> didn't get my usual sqlite3 shell binary, so does that mean I have to
> create another Xcode project with the Standard Command Line Tool
> template for that?

No, just make a new target in the project, BSD Shell Tool (and note  
the naming inconsistency - the other template called it "Standard  
Command Line Tool").  The template doesn't limit you to just the kind  
of binary it starts with.

Add the sqlite executable source to this one, and drag in the sqlite  
library from the Products group to link them.  For dependency sanity,  
drag the library target to the executable target also.  This way all  
you need to build is the executable target, and that will  
automatically build the library target.

Something like:

libsqlite
    compile
        sqlite3.c
sqlite
    libsqlite (target dependency)
    compile
        shell.c
    link
        libsqlite.dylib

> Also, is there an equivalent of 'sudo make install' for Xcode that
> actually files all the sqlite3 bits and bobs in the correct places?
>
I don't think so.  There are some variables and steps mentioning  
installation, but I think that's only for: 1) linking (ie the  
install_name of a library) and 2) "installation" into the build  
products folder during compilation.

Xcode is geared towards building Mac software, where the product is a
package, so "installation" is simply dragging the package to its destination.
The unix features seem to be there mainly to support Mac software.

> Of course, I can do all this without any problem from the command
> line, so this is more an exercise in learning Xcode.
>
Though doing it in Xcode means you can use the Xcode debugger, if  
you're interested in that.  You can attach external executables to a  
project for use in the debugger, but they won't have a connection to  
their sources for easy access to fix a problem.

Xcode has its uses for unix-based software, but it's hard to keep it
synchronized with the source, especially on big projects like GRASS
or where sources are added and deleted a lot.  Optional components of
a project can't be automated easily; there are no conditional targets
in Xcode.  Only code that is conditionalized on a macro can be made
optional.

I went Xcode crazy for a while with my frameworks and GRASS and Qgis,  
but quickly gave it up.  Though I just became the maintainer of a new  
Xcode project in the Qgis source, we'll see how that goes...

-
William Kyngesburye 
http://www.kyngchaos.com/

Earth: "Mostly harmless"

- revised entry in the HitchHiker's Guide to the Galaxy




Re: [sqlite] Alpha numeric collation

2009-02-23 Thread John Elrick
Simon Davies wrote:
> If your data is in an integer column then an order by on the data
> column gives what you ask for:
>   
SNIP
> sqlite> select data from tst2 order by case cast( data as integer
> )=data when 1 then cast( data as integer ) else data end;
> 1
> 4
> 9
> 10
> 51
> a

Thanks very much for your assistance, Simon.  The first case may indeed
work; the field is currently varchar, but there is nothing preventing me
from making it an integer.  The second case gives me some insights I
hadn't considered.

Appreciate your time,


John Elrick
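Simon's CAST-based ordering from earlier in the thread is easy to verify; here is a sketch using Python's sqlite3 module (the tst2/data table and column names follow the thread's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tst2 (data)")
con.executemany("INSERT INTO tst2 VALUES (?)",
                [("1",), ("a",), ("10",), ("4",), ("51",), ("9",)])

# Strictly numeric strings sort as integers; everything else sorts as text,
# and integer keys sort before text keys in SQLite.
rows = [r[0] for r in con.execute(
    "SELECT data FROM tst2 ORDER BY "
    "CASE CAST(data AS INTEGER) = data "
    "WHEN 1 THEN CAST(data AS INTEGER) ELSE data END")]
print(rows)  # ['1', '4', '9', '10', '51', 'a']
```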


Re: [sqlite] SQLite caching

2009-02-23 Thread Griggs, Donald
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of manohar s
Sent: Monday, February 23, 2009 4:59 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite caching

Thanks for your quick replies. Although restarting my machine clears the
disk cache, I am trying to find a utility that could do the job
without requiring a restart.

Regards,
Manohar.S 


Greetings, Manohar,

You can do something like close your sqlite application, copy a large
unrelated file to the nul device, then restart your app -- that should
clear the cache of the sqlite data (if the o.s. uses a "least recently
used" cache instead of "least frequently used" cache -- someone more
knowledgeable may be able to say for a particular O.S.)

However, most any well-performing database is going to depend on a good
RAM cache -- either from the operating system or within its own code --
or both.

And it's already been mentioned that what you're seeing is not caching
of your RESULTS but instead caching of portions of the database itself.


[sqlite] Conditional expressions and evaluation

2009-02-23 Thread John Elrick
A clarification question...given the query:

create table x (x_id integer, f varchar);
create table y (y_id integer, x_id integer, f varchar);

insert into x values(1, 'wibble');
insert into y values(1, 1, 'foo');
insert into y values(1, 1, 'bar');

select y.y_id, case when y.f = 'foo' then 'very good' else (select x.f 
from x where x.x_id = y.x_id) end from y

The documentation states that:

"If the SELECT expression does contain variables from the outer query, 
then the SELECT is reevaluated every time it is needed."

Am I correct in assuming that if (using the above example) y.f = 'foo' 
that the sub-select is considered "not needed" and therefore is not 
executed?


John Elrick
Fenestra Technologies

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite caching

2009-02-23 Thread Simon
In all cases, copying a large file onto a ramdisk (on Linux: /dev/shm)
will clear the whole cache (i.e. no space is left for it)... just make sure
you don't fill the RAM (have some swap space; the cache is only held in RAM).

Simon




-- 
When Earth was the only inhabited planet in the Galaxy, it was a
primitive place, militarily speaking.  The only weapon they had ever
invented worth mentioning was a crude and inefficient nuclear-reaction
bomb for which they had not even developed the logical defense. -
Asimov


Re: [sqlite] Conditional expressions and evaluation

2009-02-23 Thread D. Richard Hipp

On Feb 23, 2009, at 1:39 PM, John Elrick wrote:

> A clarification question...given the query:
>
> create table x (x_id integer, f varchar);
> create table y (y_id integer, x_id integer, f varchar);
>
> insert into x values(1, 'wibble');
> insert into y values(1, 1, 'foo');
> insert into y values(1, 1, 'bar');
>
> select y.y_id, case when y.f = 'foo' then 'very good' else (select x.f
> from x where x.x_id = y.x_id) end from y
>
> The documentation states that:
>
> "If the SELECT expression does contain variables from the outer query,
> then the SELECT is reevaluated every time it is needed."
>
> Am I correct in assuming that if (using the above example) y.f = 'foo'
> that the sub-select is considered "not needed" and therefore is not
> executed?
>

I think that is what it does.  It is clearly what it ought to do.  You  
can double check by doing an EXPLAIN and examining the generated code  
to make sure the subquery is skipped completely when y.f=='foo'.
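A quick rerun of the thread's example (Python sqlite3 sketch): the results show the CASE picking the literal for the 'foo' row and the sub-select's value for the other row, though only EXPLAIN can confirm the sub-select is actually skipped rather than evaluated and discarded.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE x (x_id INTEGER, f VARCHAR);
    CREATE TABLE y (y_id INTEGER, x_id INTEGER, f VARCHAR);
    INSERT INTO x VALUES (1, 'wibble');
    INSERT INTO y VALUES (1, 1, 'foo');
    INSERT INTO y VALUES (1, 1, 'bar');
""")

# The 'foo' row takes the THEN branch; the 'bar' row falls through to the
# correlated sub-select, which looks up x.f for the matching x_id.
rows = con.execute(
    "SELECT y.y_id, CASE WHEN y.f = 'foo' THEN 'very good' "
    "ELSE (SELECT x.f FROM x WHERE x.x_id = y.x_id) END FROM y").fetchall()
print(sorted(rows))  # [(1, 'very good'), (1, 'wibble')]
```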



D. Richard Hipp
d...@hwaci.com





Re: [sqlite] sqlite cache question

2009-02-23 Thread Dave Toll
Hi Marcus

My understanding of SQLite caching is that in your scenario, 40MB is the
*maximum* cache memory that will be used per connection - this memory is
not immediately pre-allocated when you open a new connection. Using the
default memory allocator (memsys1) SQLite will allocate from the heap as
and when it needs to, and will simply stop caching if it cannot allocate
any more memory. Using sqlite3_soft_heap_limit() or alternative memory
allocators (memsys3 or memsys5) you can place an upper-bound on the
total memory that SQLite consumes. Have a look at
http://www.sqlite.org/malloc.html for more details.

Personally I can vouch for shared-cache mode, I've found it to be
essential for multiple DB connections on an embedded system with limited
memory.

Cheers,
Dave.


-Original Message-
From: Marcus Grimm [mailto:mgr...@medcom-online.de] 
Sent: 23 February 2009 06:07
To: General Discussion of SQLite Database
Subject: [sqlite] sqlite cache question

Hi all,

I tried to scan the list and doc pages to understand better
the sqlite's concept of caching but I'm still not really
sure how to change the cache parameters to get the best
performance:

Assuming I change the cache via page_size = 4096 and
cache_size = 1 sqlite will then use approx. 40 MB of cache memory
per db connection.

I've learned in a recent reply here that sqlite will reload the cache
for every db connection when one of the connections has written to the
db, to avoid the other connections holding stale cache data.
Doesn't this mean that increasing the cache size will actually slow
down database read/write operations in a multi-threaded /
multi-connection application, because in the above example each
connection will now reload 40MB before going ahead?

Also: I think it is dangerous to add too much cache in a multi-connection
application, since each connection will allocate its own cache; with,
e.g., 20+ connections running you would already be allocating close to
1GB RAM, am I right?
I've seen much higher cache-size recommendations here (like 400MB) and
I'm wondering
how that can work in a multi-connection application when I assume a
maximum
of 2GB RAM (on a Win-32 system).

I know that there is the shared-cache mode, but I got the impression that
this option is not used or recommended very often, so I am skipping it for now.

--
In the man pages for the cache pragma I read:
"If you are doing UPDATEs or DELETEs that change many rows
of a database and you do not mind if SQLite uses more memory, you can
increase... "

So is the cache only useful for SQL statements that alter the
database, and not for simple SELECT statements?
I guess the man page is not quite accurate, right?
Otherwise I don't see why one would change the cache size, because write
operations wait for disk I/O anyhow, and I think that would outweigh
any caching effect.
--

I'm not saying that I have a performance problem -- sqlite is doing great!
I just want to set up my application and the default cache sizes in the
best way, so that they still fit when the db file grows in the future.

Thank you

Marcus Grimm




Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread Kim Boulton

As the topic says, I am new to SQLite. The GUI I'd been using was
compiled with an older version than the 3.6.10 CLI I downloaded, so
things were a bit choppy. I'll stick to using just the downloaded SQLite
CLI, now that I (sort of) know what I'm doing with it.

I'll post the results once I've finished trying all the options, it's 
looking good so far though :-)

k


D. Richard Hipp wrote:
>
> On Feb 23, 2009, at 2:42 AM, Kim Boulton wrote:
>>
>> But OR is a lot slower than using UNION ALL on both Sqlite and Mysql
>
>
> The optimizer was significantly enhanced for version 3.6.8 in order to 
> better handle OR in WHERE clauses.  What version of SQLite did you 
> test this with?
>
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>
>
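The OR-vs-UNION rewrite under discussion can be sketched as follows (Python sqlite3 sketch; the table and data are made up for illustration). One caveat: UNION deduplicates, so the two forms agree only when that is acceptable; UNION ALL avoids the deduplication cost but can return a row twice if both arms match it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (a INTEGER, b INTEGER);
    CREATE INDEX t_a ON t(a);
    CREATE INDEX t_b ON t(b);
    INSERT INTO t VALUES (1, 9);
    INSERT INTO t VALUES (2, 2);
    INSERT INTO t VALUES (1, 2);
    INSERT INTO t VALUES (3, 7);
""")

with_or = con.execute(
    "SELECT a, b FROM t WHERE a = 1 OR b = 2 ORDER BY a, b").fetchall()

# Each arm of the UNION can use its own index; UNION merges and dedupes.
with_union = con.execute(
    "SELECT a, b FROM t WHERE a = 1 "
    "UNION SELECT a, b FROM t WHERE b = 2 ORDER BY a, b").fetchall()
print(with_or == with_union)  # True
```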


Re: [sqlite] Xcode project for amalgamation

2009-02-23 Thread Alexey Pechnikov
Hello!

On Monday 23 February 2009 16:48:09 P Kishor wrote:
> I am tinkering with (aka learning) Xcode 3.1.2, and am trying to set
> up an Xcode project for SQLite amalgamation. What could be simpler,
> no?

Use the full SQLite source tree. The amalgamation will be created
automatically by the build system.

> My Xcode project is throwing errors. Could someone directly (offlist)
> email me their Xcode project file so I may learn from it?

Debian lenny repository: 
deb http://mobigroup.ru/debian/ lenny main contrib non-free
deb-src http://mobigroup.ru/debian/ lenny main contrib non-free

Direct links (will be changed after next build):
http://mobigroup.ru/debian/pool/main/s/sqlite3/sqlite3_3.6.11-
mobigroup.1.diff.gz
http://mobigroup.ru/debian/pool/main/s/sqlite3/sqlite3_3.6.11.orig.tar.gz

You can download orig.tar.gz and patch it with diff.gz.

Best regards.


[sqlite] SAVEPOINT : Seems don't work

2009-02-23 Thread REPKA_Maxime_NeufBox
Hello,

I am quite new to SQLite.

I tried to use the SAVEPOINT command but didn't succeed.

Does this command work? How?
MaxMax14
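Editor's note: SAVEPOINT was added in SQLite 3.6.8, so an older library is one common reason it "doesn't work". A minimal working sequence, sketched with Python's sqlite3 module in autocommit mode so the transaction boundaries are explicit:

```python
import sqlite3

# isolation_level=None: no implicit transactions, we issue them ourselves.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (v INTEGER)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
con.execute("SAVEPOINT sp1")            # checkpoint inside the transaction
con.execute("INSERT INTO t VALUES (2)")
con.execute("ROLLBACK TO sp1")          # undo only the work after the savepoint
con.execute("RELEASE sp1")
con.execute("COMMIT")

rows = con.execute("SELECT v FROM t").fetchall()
print(rows)  # [(1,)] -- the second insert was rolled back
```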


Re: [sqlite] Question about Referential IntegrityRE: Foreign key support

2009-02-23 Thread Yves Goergen
On 12.02.2009 06:23 CE(S)T, Roger Binns wrote:
> It is true that triggers can be used to achieve referential integrity.
> However you don't have to hand craft them.  The front page of the wiki
> links to the document explaining it:
> 
>   http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers
> 
> But the SQLite team has already done all the work for you.  The SQLite
> source includes a program named 'genfkey' that will create the triggers.

Wow, I didn't know that. Maybe because my last visit to that page was some
years ago.

So the next step is probably that the SQLite engine does all that on its
own; how would that be? :-)

-- 
Yves Goergen "LonelyPixel" 
Visit my web laboratory at http://beta.unclassified.de
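For readers who land here before such support is built in: a hand-written version of the kind of trigger genfkey generates might look like this (Python sqlite3 sketch; the table names are illustrative, and only the INSERT-on-child case is covered, not UPDATE or DELETE).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER);

    -- Reject a child row whose parent_id has no matching parent.
    CREATE TRIGGER fki_child_parent
    BEFORE INSERT ON child
    WHEN (SELECT id FROM parent WHERE id = NEW.parent_id) IS NULL
    BEGIN
        SELECT RAISE(ABORT, 'insert on child violates foreign key');
    END;

    INSERT INTO parent VALUES (1);
""")

con.execute("INSERT INTO child VALUES (10, 1)")       # ok, parent exists
try:
    con.execute("INSERT INTO child VALUES (11, 99)")  # no such parent
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```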


Re: [sqlite] Conditional expressions and evaluation

2009-02-23 Thread John Elrick
D. Richard Hipp wrote:
> On Feb 23, 2009, at 1:39 PM, John Elrick wrote:
>
>   
>> A clarification question...given the query:
>>
>> create table x (x_id integer, f varchar);
>> create table y (y_id integer, x_id integer, f varchar);
>>
>> insert into x values(1, 'wibble');
>> insert into y values(1, 1, 'foo');
>> insert into y values(1, 1, 'bar');
>>
>> select y.y_id, case when y.f = 'foo' then 'very good' else (select x.f
>> from x where x.x_id = y.x_id) end from y
>>
>> The documentation states that:
>>
>> "If the SELECT expression does contain variables from the outer query,
>> then the SELECT is reevaluated every time it is needed."
>>
>> Am I correct in assuming that if (using the above example) y.f = 'foo'
>> that the sub-select is considered "not needed" and therefore is not
>> executed?
>>
>> 
>
> I think that is what it does.  It is clearly what it ought to do.  You  
> can double check by doing an EXPLAIN and examining the generated code  
> to make sure the subquery is skipped completely when y.f=='foo'.
>   


EXPLAIN indicates that your assumption is correct.  Thanks for 
clarifying that for me.  It is especially reassuring to have indication 
that the lazy evaluation is intentional and therefore can be relied upon 
in the future.


John Elrick


[sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread anjela patnaik
Hello all,
 
I have a TCL application that writes database records to an Oracle db. This 
database has grown to contain 8000 records. There are 2 columns of varchar with 
4k chars and 1 column of clob which contains 1 MB max. The data in this database 
are scripts.
 
Now, I need to write another application in TK that works offline, but 
retrieves the data in this database and presents it to the user. I am not going 
to be able to access this Oracle db so I was wondering if I could export all 
the data to a SQLite db and then access it from TCL/TK. 
 
Does SQLite handle 4k Varchar fields and clobs? Is there a limit on # of rows? 
Can I create a db on the fly as part of a TCL/TK application ?
 
My other alternative was to offload the Oracle data to an XML/Excel file and 
then write TCL code to parse it.
 
Thank you


  


Re: [sqlite] sqlite cache question

2009-02-23 Thread Marcus Grimm
Thanks Dave,

I guess I'll give the shared cache a try... actually, when
I started I switched it on, but removed it afterward because
I was not sure whether it counts as standard sqlite usage.

I'll also look into the heap_limit function.

Marcus





[sqlite] Removing duplicate records

2009-02-23 Thread Gary O'Brien
Greetings,

I'd like to know if anyone has an elegant solution to
the problem stated below.  I know it could be brute
forced but it seems that there should be an elegant
SQL solution to this problem.

Given the following table and data, I'd like to remove
all items with duplicate types within a container,
keeping the most recent (largest item_id) item of that
type within each container.

Note that the column other_data is included only as a
reminder that the rows aren't necessarily identical.

Any help would be appreciated.

Regards,
Gary O'Brien


DROP TABLE IF EXISTS item;

CREATE TABLE item (
   item_id INTEGER PRIMARY KEY,
   type INTEGER NOT NULL DEFAULT 0,
   other_data INTEGER NOT NULL DEFAULT 0,
   container_id INTEGER NOT NULL
);

INSERT INTO item (type, container_id) VALUES (0, 1);
INSERT INTO item (type, container_id) VALUES (1, 1);
INSERT INTO item (type, container_id) VALUES (0, 2);
INSERT INTO item (type, container_id) VALUES (1, 2);
INSERT INTO item (type, container_id) VALUES (0, 3);
INSERT INTO item (type, container_id) VALUES (1, 3);

INSERT INTO item (type, container_id) VALUES (0, 1);
INSERT INTO item (type, container_id) VALUES (1, 1);
INSERT INTO item (type, container_id) VALUES (0, 2);
INSERT INTO item (type, container_id) VALUES (1, 2);

INSERT INTO item (type, container_id) VALUES (0, 1);
INSERT INTO item (type, container_id) VALUES (1, 1);


Before the delete operation:

SELECT item_id, type, container_id FROM item;

1|0|1
2|1|1
3|0|2
4|1|2
5|0|3
6|1|3
7|0|1
8|1|1
9|0|2
10|1|2
11|0|1
12|1|1

After the delete operation:

SELECT item_id, type, container_id from item;

5|0|3
6|1|3
9|0|2
10|1|2
11|0|1
12|1|1
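One common pattern (a sketch, not necessarily the most elegant answer) keeps the largest item_id per (container_id, type) group and deletes everything else. Run against the data above, it reproduces the desired result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE item (
        item_id INTEGER PRIMARY KEY,
        type INTEGER NOT NULL DEFAULT 0,
        other_data INTEGER NOT NULL DEFAULT 0,
        container_id INTEGER NOT NULL
    )""")
for t, c in [(0, 1), (1, 1), (0, 2), (1, 2), (0, 3), (1, 3),
             (0, 1), (1, 1), (0, 2), (1, 2), (0, 1), (1, 1)]:
    con.execute("INSERT INTO item (type, container_id) VALUES (?, ?)", (t, c))

# Keep only the most recent (max item_id) row of each type per container.
con.execute("""
    DELETE FROM item
    WHERE item_id NOT IN (
        SELECT MAX(item_id) FROM item GROUP BY container_id, type
    )""")

rows = con.execute(
    "SELECT item_id, type, container_id FROM item ORDER BY item_id").fetchall()
print(rows)
# [(5, 0, 3), (6, 1, 3), (9, 0, 2), (10, 1, 2), (11, 0, 1), (12, 1, 1)]
```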


Re: [sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread D. Richard Hipp

On Feb 23, 2009, at 3:29 PM, anjela patnaik wrote:

> Hello all,
>
> I have a TCL application that writes database records to an Oracle  
> db. This database has grown to contain 8000 records. There are 2  
> columns of varchar with 4k chars and 1 column of clob which contains  
> 1 MB max. The data in this database are scripts.
>
> Now, I need to write another application in TK that works offline,  
> but retrieves the data in this database and presents it to the user.  
> I am not going to be able to access this Oracle db so I was  
> wondering if I could export all the data to a SQLite db and then  
> access it from TCL/TK.
>
> Does SQLite handle 4k Varchar fields and clobs? Is there a limit on  
> # of rows? Can I create a db on the fly as part of a TCL/TK  
> application ?

SQLite is, in fact, a TCL extension that escaped into the wild.  It is  
specifically designed to integrate well with Tcl/Tk.

SQLite accepts VARCHAR and BLOB and CLOB fields up to 1GiB in size.
The database can be up to 64 tebibytes in size.  There is no limit on
the number of rows (you will hit the file-size limit first).

SQLite is way faster than Oracle in single-user applications.
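A quick sanity check that 4k-character varchar and megabyte-scale clob-style text round-trip intact (sketched with Python's sqlite3 module rather than Tcl; the column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite ignores the declared lengths; TEXT-affinity columns hold up to ~1 GiB.
con.execute(
    "CREATE TABLE scripts (name VARCHAR(100), descr VARCHAR(4000), body CLOB)")

descr = "d" * 4000       # 4k-character varchar payload
body = "b" * 1_000_000   # ~1 MB clob-style payload
con.execute("INSERT INTO scripts VALUES (?, ?, ?)", ("s1", descr, body))

n_descr, n_body = con.execute(
    "SELECT length(descr), length(body) FROM scripts WHERE name = 's1'"
).fetchone()
print(n_descr, n_body)  # 4000 1000000
```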


>
>
> My other alternative was to offload the Oracle data to an XML/Excel
> file and then write TCL code to parse it.
>

That would be a silly thing to do.


D. Richard Hipp
d...@hwaci.com





Re: [sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread P Kishor
On Mon, Feb 23, 2009 at 2:46 PM, D. Richard Hipp  wrote:
>
..
>
> SQLite is, in fact, a TCL extension that escaped into the wild.  It is
> specifically designed to integrate well with Tcl/Tk.
>
..

Did you ever tell that story anywhere? Would be fun to read it.

-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/


[sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread python
Dr. Hipp,

When you say "SQLite is way faster than Oracle in single-user
applications", do you mean that SQLite can be faster than Oracle even
when Oracle's parallel-processing features are being used? For example,
Oracle's support for parallelization can speed up table loading from an
external data source, certain SQL SELECTs, and certain indexing
operations.

Are there any plans to enhance SQLite to support some of Oracle's
parallel processing or partitioning capabilities?

Thank you,
Malcolm


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread Billy Gray
>
> Are there any plans to enhance SQLite to support some of Oracle's
> parallel processing or partitioning capabilities?
>

Malcolm,

I realized that you're asking Richard, and not the peanut gallery, but I
figured I might as well ask out of curiosity: why do you want to see these
features in SQLite?

Cheers,
Billy




-- 
Billy Gray
wg...@zetetic.net


Re: [sqlite] SAVEPOINT : Seems don't work

2009-02-23 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

REPKA_Maxime_NeufBox wrote:
> I tried to use SAVEPOINT command but didn't succeed
> 
> Does this command work ? how ?

It certainly works in my testing and works in the SQLite team testing -
http://sqlite.org/testing.html

Chances are you haven't got your code correct, haven't called
sqlite3_backup_finish or aren't checking error returns.  There is
complete documentation including example code at:

  http://www.sqlite.org/c3ref/backup_finish.html
  http://www.sqlite.org/backup.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkmjE/EACgkQmOOfHg372QSK8ACdGutrAfFeJiiDCrW36rluhpgj
HroAoJfPsFICIZTZase+x96RNcNVvw8T
=YRlb
-END PGP SIGNATURE-


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread D. Richard Hipp

On Feb 23, 2009, at 3:54 PM, pyt...@bdurham.com wrote:

> Dr. Hipp,
>
> When you say "SQLite is way faster than Oracle in a single-user
> applications" do you mean that SQLite can be faster than Oracle even
> when Oracle's parallel processing features are being used? For example
> Oracle's support for parallelization can speed up table loading from  
> an
> external data source, certain SQL selects, and certain indexing
> operations.

I don't run Oracle and have no way of verifying the following.  But I
conjecture that from a cold start, you can launch an application that
uses SQLite, have it do a dozen or so queries, print out the answer,
and shut down, all before the Oracle server has even booted up to the
point where it will accept connections.  Correct me if I am wrong.

Perhaps Oracle will run a gazillion more transactions per second,  
given enough memory and CPUs, and once you get it up and going.  I  
have no way of knowing.  But then again, that isn't really the point  
of SQLite.


>
>
> Are there any plans to enhance SQLite to support some of Oracle's
> parallel processing or partitioning capabilities?

Remember:  SQLite is not trying to replace Oracle.  SQLite is trying  
to replace fopen().

For people who are using Oracle as a replacement for fopen() (as  
apparently Angela is) they will likely find that SQLite makes a far  
superior replacement.  Or to put it another way, people who are using  
Oracle for a single-user application (low concurrency) will likely  
find that SQLite works much better for them.  It has been my  
experience that old-time Oracle users are incredulous at this  
statement, until they actually see a live demonstration.  So I won't  
try to argue the point.  It is merely my observation.

On the other hand, nobody thinks that SQLite is a suitable database  
when you have 1000 separate connections beating on the database all at  
once.



>
>
> Thank you,
> Malcolm

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] mysql_num_rows equivalent in SQLite

2009-02-23 Thread Roger Binns
bezpla...@jabse.com wrote:
>   Can you tell me how to find the number of rows of an already
> prepared SELECT query.

SQLite doesn't work that way.  It calculates the next result row when
you ask for it, rather than calculating all of them up front.  Finding
out how many rows there are is as much work as calculating them all, so
it is bad form to write code that way.  If you absolutely have to, then
use 'SELECT count(*) FROM (your original query)' to get the count and
then run the original query.  This does the work twice.  Unless you
wrap the whole thing in a transaction, it is also possible for the
number of rows returned by the two queries to differ.
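
To make that concrete, here is the count-then-query approach wrapped in a
transaction so the two results cannot drift apart (a sketch using Python's
stdlib sqlite3; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE users(name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("ann",), ("bob",), ("cho",)])

query = "SELECT name FROM users WHERE name < 'z'"

conn.execute("BEGIN")   # hold one read transaction across both statements
n = conn.execute("SELECT count(*) FROM (%s)" % query).fetchone()[0]
rows = conn.execute(query).fetchall()   # the work is done twice
conn.execute("COMMIT")

print(n, len(rows))  # -> 3 3
```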

Roger


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread Ken
I fully agree with DRH regarding SQLITE and single user performance. If you 
need to replace  fopen then sqlite is a really really great product. Even if 
you have some mild concurrency sqlite still does pretty darned good.

Oracle excels when you have many users that require concurrent database 
changes. 

Oracle performs row level locking and Multi-Versioning on data blocks to 
achieve concurrency.

Oracle can perform parallel queries, but the best use of parallel query is 
for full table scans, where the entire table is read and there are equally 
sized segments for the query slaves to scan. Otherwise parallel query 
probably won't help much. 



--- On Mon, 2/23/09, D. Richard Hipp  wrote:



Re: [sqlite] sqlite3_finalize removes registered custom functions?

2009-02-23 Thread Roger Binns
Anil Madhavapeddy wrote:
> The custom functions are registered against the database handle, but  
> when prepared statements are garbage collected (and hence call  
> sqlite3_finalize() on the statement handle), the custom functions  
> appear to disappear also.

That is certainly not how it works under the hood.  The functions are
stored against the database handle which you can see in the SQLite
source code.

> Any subsequent use of that function in
> other prepared statements result in a MISUSE error.

I recommend you run the whole thing in Valgrind.  My best guess is that
you are finalizing the statements multiple times or perhaps also closing
the database handle.  Or perhaps the GC is freeing stuff it shouldn't.
Also compile the SQLite code with -DSQLITE_DEBUG.

In my Python wrapper (which uses reference counting for GC) the
statements are wrapped by a 'cursor' object.  The existence of a cursor
increments the reference count on the database handle which ensures all
cursors must be GC before the database is.

> The source to the bindings are up at 
> http://github.com/avsm/ocaml-sqlite3-agg/ 
>   with a testcase in tests/test_agg.ml

Looking at the C code, it seems like error codes aren't tracked very
well and instead only the strings are looked at for return to the user.
 The error codes are far more important and the error string could have
corresponded to an earlier error.  I'd recommend you change the various
raise routines to include the error code, especially getting it from the
routine that called raise in the first place.

The error handling around functions is also all shot.  (I am assuming
that caml_raise_with_string doesn't do a longjmp.)  You can register the
same function name with different numbers of arguments (e.g. "foo" with 2
args is distinct from "foo" with 3 args).  You will get errors
unregistering or registering if there are outstanding executing statements
(i.e. statements not yet finalized) using that function.

Using caml_sqlite3_delete_function (at the bottom of sqlite3_stubs.c) as
an example, I see at least the following problems:

- check_db should return on error (unless caml_raise does longjmp)
- Doesn't specify/keep track of nargs, and so unregisters the version of
the function with any number of args
- If rc is not SQLITE_OK then whatever value rc has is discarded
- The if (rc != SQLITE_OK) test should be on a different line than the
raise, so that coverage testing can show that both paths have been taken
- unregister_user_function is called no matter what, which would cause
memory corruption if, for example, the function was in use

If caml_raise_* does use longjmp (I don't believe it does, but if it
does then it changes control flow from what is written to effectively
jumping out of executing functions) then things are different.

If not then I see a whole bunch of problems, especially very sloppy
error handling and a lack of testing.  I strongly recommend the testing
also measure code coverage.  Just the process of testing calls
succeeding and failing will uncover numerous issues.

As a second issue, you need to give up on the idea of being able to free
functions except when the database is closed.  The reason is that you
cannot reliably tell when a function is no longer in use.  For example
it can be replaced by one with the same name and same number of args,
but what about case?  Then throw in some unicode to make life more
interesting.  Unless you duplicate the SQLite code, you can never be
sure.  The API for collations and modules did add a "destructor"
callback, which makes life far easier.  I have asked for the same for
functions on the mailing list and in a ticket, without an answer:

  http://www.sqlite.org/cvstrac/tktview?tn=3141

Roger


Re: [sqlite] Removing duplicate records

2009-02-23 Thread Lawrence Chitty
Gary O'Brien wrote:
> Greetings,
>
> I'd like to know if anyone has an elegant solution to
> the problem stated below.  I know it could be brute
> forced but it seems that there should be an elegant
> SQL solution to this problem.
>
> Given the following table and data, I'd like to remove
> all items with duplicate types within a container,
> keeping the most recent (largest item_id) item of that
> type within each container.
>
> Note that the column other_data is included only as a
> reminder that the rows aren't necessarily identical.
>
> Any help would be appreciated.
>
> Regards,
> Gary O'Brien
>
>
> DROP TABLE IF EXISTS item;
>
> CREATE TABLE item (
>item_id INTEGER PRIMARY KEY,
>type INTEGER NOT NULL DEFAULT 0,
>other_data INTEGER NOT NULL DEFAULT 0,
>container_id INTEGER NOT NULL
> );
>
> INSERT INTO item (type, container_id) VALUES (0, 1);
> INSERT INTO item (type, container_id) VALUES (1, 1);
> INSERT INTO item (type, container_id) VALUES (0, 2);
> INSERT INTO item (type, container_id) VALUES (1, 2);
> INSERT INTO item (type, container_id) VALUES (0, 3);
> INSERT INTO item (type, container_id) VALUES (1, 3);
>
> INSERT INTO item (type, container_id) VALUES (0, 1);
> INSERT INTO item (type, container_id) VALUES (1, 1);
> INSERT INTO item (type, container_id) VALUES (0, 2);
> INSERT INTO item (type, container_id) VALUES (1, 2);
>
> INSERT INTO item (type, container_id) VALUES (0, 1);
> INSERT INTO item (type, container_id) VALUES (1, 1);
>
>
> Before the delete operation:
>
> SELECT item_id, type, container_id FROM item;
>
> 1|0|1
> 2|1|1
> 3|0|2
> 4|1|2
> 5|0|3
> 6|1|3
> 7|0|1
> 8|1|1
> 9|0|2
> 10|1|2
> 11|0|1
> 12|1|1
>
> After the delete operation:
>
> SELECT item_id, type, container_id from item;
>
> 5|0|3
> 6|1|3
> 9|0|2
> 10|1|2
> 11|0|1
> 12|1|1
The following works for me

delete from item where rowid not in (
select rowid from item
group by type, container_id
having max(item_id)
)

Regards
Lawrence Chitty



[sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread Greg Robertson
I have two tables each with two fields:

TableA: Field1, Field2

TableB: Field3, Field4


I would like to set Field2 in TableA to the value in Field4 in TableB
where TableA.Field3=TableB.Field4

Sounds simple enough but I can't figure out how to write the UPDATE
statement for this.

Can someone help?

Thanks

Greg


[sqlite] Variables in Triggers

2009-02-23 Thread David Moorhouse
I'd like to have a condition in a trigger that examines a variable (set
outside the trigger) and acts accordingly.

Are there any SET style statements for environmental variables in SQLite ?

Thanks

D

-
David Moorhouse
Development Director
Moorhouse Works ltd
phone 027 236 5415
www.moorhouse.co.nz
-



Re: [sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread Clark Christensen

I've been meaning to ask this myself.

I've been writing it like

update TableA 
set Field2 = (select Field4 from TableB where Field3 = TableA.Field1)
where Field1 = ?;  --perhaps

But it feels wrong.

 -Clark




- Original Message 
From: Greg Robertson 
To: sqlite-users@sqlite.org
Sent: Monday, February 23, 2009 3:34:15 PM
Subject: [sqlite] UPDATE Field based on matching value in different Table

I have two tables each with two fields:

TableA: Field1, Field2

TableB: Field3, Field4


I would like to set Field2 in TableA to the value in Field4 in TableB
where TableA.Field3=TableB.Field4

Sounds simple enough but I can't figure out how to write the UPDATE
SQLite for this.

Can someone help?

Thanks

Greg


Re: [sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread P Kishor
On Mon, Feb 23, 2009 at 5:34 PM, Greg Robertson  wrote:
> I have two tables each with two fields:
>
> TableA: Field1, Field2
>
> TableB: Field3, Field4
>
>
> I would like to set Field2 in TableA to the value in Field4 in TableB
> where TableA.Field3=TableB.Field4
>
> Sounds simple enough but I can't figure out how to write the UPDATE
> SQLite for this.

You probably meant TableA.Field1

Here is one way

UPDATE TableA
SET Field2 = (SELECT Field4 FROM TableB WHERE Field3 = TableA.Field1)
WHERE EXISTS (SELECT Field4 FROM TableB WHERE Field3 = TableA.Field1);


>
> Can someone help?
>
> Thanks
>
> Greg



-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/


Re: [sqlite] Variables in Triggers

2009-02-23 Thread Igor Tandetnik
"David Moorhouse" 
wrote in message news:vue6q453ln9isiut2m808u76qecq9ao...@4ax.com
> I'd like to have a condition in a trigger that examines a variable
> (set outside the trigger) and acts accordingly.
>
> Are there any SET style statements for environmental variables in
> SQLite ?

Create a one-row-one-column table, put the value there, query the table 
in the trigger. If you have multiple variables, you can perhaps have a 
two-column table storing name/value pairs.
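
Sketched end-to-end (Python's stdlib sqlite3 is used only as a convenient
shell; the 'settings'/'audit' table and trigger names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE settings(name TEXT PRIMARY KEY, value);
    INSERT INTO settings VALUES ('audit', 1);

    CREATE TABLE t(x);
    CREATE TABLE audit_log(x);

    -- The trigger body only runs while the 'variable' is set to 1.
    CREATE TRIGGER audit_t AFTER INSERT ON t
    WHEN (SELECT value FROM settings WHERE name = 'audit') = 1
    BEGIN
        INSERT INTO audit_log VALUES (new.x);
    END;
""")

conn.execute("INSERT INTO t VALUES (1)")                            # logged
conn.execute("UPDATE settings SET value = 0 WHERE name = 'audit'")  # 'unset'
conn.execute("INSERT INTO t VALUES (2)")                            # skipped

print(conn.execute("SELECT x FROM audit_log").fetchall())  # -> [(1,)]
```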

Igor Tandetnik 





Re: [sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread Igor Tandetnik
"Greg Robertson" 
wrote in message
news:151e70a00902231534h37d10202m469a176d704d6...@mail.gmail.com
> I have two tables each with two fields:
>
> TableA: Field1, Field2
>
> TableB: Field3, Field4
>
>
> I would like to set Field2 in TableA to the value in Field4 in TableB
> where TableA.Field3=TableB.Field4

I'm confused which field is in which table. Your description is 
self-contradictory. Anyway, you want something like this (columns 
renamed in an obvious way for clarity):

update TableA set A2 =
(select B2 from TableB where A1=B1)
where A1 in (select B1 from TableB);
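
With these renamed columns, the statement behaves like this (a sketch in
Python's stdlib sqlite3; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA(A1 INTEGER, A2 TEXT)")
conn.execute("CREATE TABLE TableB(B1 INTEGER, B2 TEXT)")
conn.executemany("INSERT INTO TableA VALUES (?, ?)",
                 [(1, None), (2, None), (3, "keep")])
conn.executemany("INSERT INTO TableB VALUES (?, ?)", [(1, "x"), (2, "y")])

# The WHERE ... IN clause stops rows without a match in TableB (A1 = 3
# here) from being overwritten with NULL by the correlated subquery.
conn.execute("""
    UPDATE TableA SET A2 = (SELECT B2 FROM TableB WHERE A1 = B1)
    WHERE A1 IN (SELECT B1 FROM TableB)
""")

print(conn.execute("SELECT A1, A2 FROM TableA ORDER BY A1").fetchall())
# -> [(1, 'x'), (2, 'y'), (3, 'keep')]
```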

Igor Tandetnik 





Re: [sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread Greg Robertson
That did it.

Thanks

Greg

On Mon, Feb 23, 2009 at 7:05 PM, Griggs, Donald
 wrote:
> Hi Greg,
>
> Someone on the list may give a better reply, and I'm sending this to you
> directly, but I think the following will work:
>
> Update tableA
>  set Field2 = ( select Field4 from tableB
>where TableA.Field3=TableB.Field4 );
>
> Regards,
>   Donald
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Greg Robertson
> Sent: Monday, February 23, 2009 6:34 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] UPDATE Field based on matching value in different
> Table
>
> I have two tables each with two fields:
>
> TableA: Field1, Field2
>
> TableB: Field3, Field4
>
>
> I would like to set Field2 in TableA to the value in Field4 in TableB
> where TableA.Field3=TableB.Field4
>
> Sounds simple enough but I can't figure out how to write the UPDATE
> SQLite for this.
>
> Can someone help?
>
> Thanks
>
> Greg


Re: [sqlite] Removing duplicate records

2009-02-23 Thread Igor Tandetnik
"Gary O'Brien"  wrote in message
news:93fda2e5d3cb442884be95e1b84fc...@garysldc13y00e
> Given the following table and data, I'd like to remove
> all items with duplicate types within a container,
> keeping the most recent (largest item_id) item of that
> type within each container.

delete from item where exists (
select 1 from item item2
where item2.container_id = item.container_id
and item2.type = item.type
and item2.item_id > item.item_id);

Igor Tandetnik 





Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread python
Dr. Hipp and others,

Thank you for your replies to my question.

Regards,
Malcolm


Re: [sqlite] UPDATE Field based on matching value in different Table

2009-02-23 Thread Igor Tandetnik
"Greg Robertson" 
wrote in message
news:151e70a00902231728j608612b8n491e84b11c70c...@mail.gmail.com
> That did it.
>
> Thanks
>
> Greg
>
> On Mon, Feb 23, 2009 at 7:05 PM, Griggs, Donald
>  wrote:
>> Hi Greg,
>>
>> Someone on the list may give a better reply, and I'm sending this to
>> you
>> directly, but I think the following will work:
>>
>> Update tableA
>>  set Field2 = ( select Field4 from tableB
>>where TableA.Field3=TableB.Field4 );

If there are any rows in tableA without a matching row in tableB, this 
would set Field2 in all such rows to NULL. This may or may not matter in 
your case, of course.

Igor Tandetnik 





Re: [sqlite] Removing duplicate records

2009-02-23 Thread Igor Tandetnik
"Lawrence Chitty"
 wrote in
message news:49a32a32.6040...@ntlworld.com
> Gary O'Brien wrote:
>> Given the following table and data, I'd like to remove
>> all items with duplicate types within a container,
>> keeping the most recent (largest item_id) item of that
>> type within each container.
>>
> The following works for me
>
> delete from item where rowid not in (
>select rowid from item
>group by type, container_id
>having max(item_id)
> )

This only works by accident. First, the HAVING clause is a no-op: it says 
select only those groups that have max(item_id) != 0, which is all of 
them.

Second, when you have a field in a GROUP BY statement not bound by an 
aggregate function, SQLite would select an arbitrary row from the group 
to get the value from - not necessarily the row on which, say, maximum 
is achieved in some invocation of max() in the same statement. It just 
so happens that in this case, SQLite chooses the largest rowid in each 
group.

A similar statement should work though:

delete from item where item_id not in (
select max(item_id) from item
group by type, container_id
);
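
Run against Gary's original data, this keeps exactly the rows he asked for
(sketched with Python's stdlib sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE item (
    item_id INTEGER PRIMARY KEY,
    type INTEGER NOT NULL DEFAULT 0,
    other_data INTEGER NOT NULL DEFAULT 0,
    container_id INTEGER NOT NULL)""")
conn.executemany("INSERT INTO item (type, container_id) VALUES (?, ?)",
                 [(0, 1), (1, 1), (0, 2), (1, 2), (0, 3), (1, 3),
                  (0, 1), (1, 1), (0, 2), (1, 2), (0, 1), (1, 1)])

# Keep only the largest item_id for each (type, container_id) pair.
conn.execute("""
    DELETE FROM item WHERE item_id NOT IN (
        SELECT max(item_id) FROM item
        GROUP BY type, container_id)
""")

print([r[0] for r in conn.execute("SELECT item_id FROM item ORDER BY item_id")])
# -> [5, 6, 9, 10, 11, 12]
```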

Igor Tandetnik 





Re: [sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread D. Richard Hipp

On Feb 23, 2009, at 3:54 PM, P Kishor wrote:

> On Mon, Feb 23, 2009 at 2:46 PM, D. Richard Hipp   
> wrote:
>>
> ..
>>
>> SQLite is, in fact, a TCL extension that escaped into the wild.  It  
>> is
>> specifically designed to integrate well with Tcl/Tk.
>>
> ..
>
> Did you ever tell that story anywhere? Would be fun to read it.

The story goes like this:  I was working on a big Tcl/Tk+C app.  (The  
problem to be solved was interesting in its own right.  It turned out  
to be NP complete - equivalent to subgraph homomorphism.  But the  
customer wasn't interested in "why not" - they just wanted a  
solution.  We found some good heuristics, but that is another  
story...)  The application was highly graphical (made extensive use of  
the Tcl/Tk canvas widget) and had to run on legacy hardware.  HPUX.  C  
code to do the heavy computation.  Tcl/Tk for control and display.   
Data from an Informix database.  Worked really well.

But... every so often the operators would power-cycle the HPUX  
machine.  And when this happened, the Informix database server would  
not reboot successfully all the time.  When it did not, and the  
operators double-clicked on my application, it (obviously) failed with  
a message:  "Cannot connect to database server".  This was not my  
fault.  But because my application painted the error dialog, I'm the  
one who had to take the support call.  Not a good situation.

So I thought: what if there was a database engine that would read  
and write directly to the disk without going through a server.  Then  
if the machine was healthy enough to bring up X11, I'd never have  
problems accessing the database.  SQLite 1.0 was born shortly  
thereafter.


>
>
> -- 
> Puneet Kishor http://www.punkish.org/
> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
> Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread P Kishor
On Mon, Feb 23, 2009 at 8:01 PM, D. Richard Hipp  wrote:


Richard,

This is fascinating. This may not be the best place for expanding on
this (on the other hand, this may be as good a place as any), but a
few questions arise --

1. Unless you don't want to (or *that* client doesn't want to), it
would be interesting to know who it was that you were working for.

2. What was the big Tcl/Tk and C app?

3. When was this happening?

4. Weren't there any existing solutions that you considered?

5. When did sqlite actually acquire traction in the open source community?

Shucks, this should go on the sqlite.org/history page.

Thanks for the above insight though. Even this much is great to know.


-- 
Puneet Kishor


Re: [sqlite] question on creating/populating table with varchar/clob data types

2009-02-23 Thread python
Dr. Hipp,

> The story goes like this: ...

Great story!

Malcolm


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread python
Hi Billy,

>> Are there any plans to enhance SQLite to support some of Oracle's
>> parallel processing or partitioning capabilities?

> I realized that you're asking Richard, and not the peanut gallery, but 
> I figured I might as well ask out of curiosity: why do you want to 
> see these features in SQLite?

Most computers these days are multi-core. Oracle has done some excellent
work adding support for parallel processing of many database activities.
It would be great to see SQLite be able to exploit the extra processing
power of multiple cores. This is not a request for handling multiple
simultaneous transactions - it is a request to have single transactions
be processed across multiple cores.

Oracle also supports a rich mix of partitioning features. Partitioning
allows one to divide a table and/or index into logical subsets that
allow additional query optimizations. Partitioning is also useful for
quickly dropping a logical subset of data, eg. if you've partitioned
data by month, you can quickly drop your oldest month of data by
dropping its partition vs. performing a massive number of individual
deletes, followed by a vacuum. Finally, partitions can also support
parallelization tasks such as loading large data sets (each partition
can be loaded and optionally indexed independently of others) and for
building partial result sets for SQL selects (each partition can be
queried independently of other partitions).

Another interesting Oracle feature is compression. Oracle's compression
techniques not only compress data, but also speed up many types of
selects.

Thinking-out-loud: I wonder if some of Oracle's parallelization and
partitioning features could be emulated by creating a physical SQLite
database for each logical partition; loading large logical tables
quickly by using a separate process to load each 'partition specific'
SQLite database, and then creating a high-level code library to
translate high-level SQL commands (insert, update, delete, select)
into multiple, 'partition specific' SQLite commands that get executed in
parallel. In the case of parallel selects, the intermediate results
would be cached to partition specific SQLite databases and then unioned
together by a master controlling process to create a logical cursor for
processing.

Is anyone using similar techniques with very large SQLite
tables/databases?
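
For what it's worth, SQLite's ATTACH already provides the raw building
block for the one-database-per-partition idea: a single connection can
address several database files and UNION ALL across them. A minimal sketch
(the partition names and layout are invented; this is only the primitive,
nothing like Oracle's planner-integrated partitioning):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each ATTACH of ':memory:' creates a distinct database; on disk these
# would be separate files, each loadable/indexable by its own process.
conn.execute("ATTACH ':memory:' AS p2009_01")
conn.execute("ATTACH ':memory:' AS p2009_02")

for part, rows in [("p2009_01", [(1, "jan")]),
                   ("p2009_02", [(2, "feb"), (3, "feb")])]:
    conn.execute("CREATE TABLE %s.sales(id INTEGER, note TEXT)" % part)
    conn.executemany("INSERT INTO %s.sales VALUES (?, ?)" % part, rows)

# The 'logical table' is just a UNION ALL over the partitions; dropping a
# month is a DETACH plus deleting the file, no mass DELETE needed.
total = conn.execute("""
    SELECT count(*) FROM (
        SELECT * FROM p2009_01.sales
        UNION ALL
        SELECT * FROM p2009_02.sales)
""").fetchone()[0]
print(total)  # -> 3
```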

Malcolm


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread P Kishor
On Mon, Feb 23, 2009 at 9:34 PM,   wrote:
> Hi Billy,
>
>>> Are there any plans to enhance SQLite to support some of Oracle's
>>> parallel processing or partitioning capabilities?
>
>> I realized that you're asking Richard, and not the peanut gallery, but
>> I figured I might as well ask out of curiosity: why do you want to
>> see these features in SQLite?
>
> Most computers these days are multi-core. ..


One of the things easy to overlook is that SQLite is not PC-exclusive
software. About 10 million copies of SQLite run on the iPhone. Who knows
how many run on other handhelds, embedded platforms, VxWorks, and the
like. SQLite used to fit on a floppy and probably still does.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread python
P Kishor,

>> Most computers these days are multi-core. ..

> One of things easy to overlook is that SQLite is not a PC-exclusive
> software. About 10 million copies of SQLite run on iPhone. Who knows
> how many run on other handhelds, embedded platforms, Vxworks, the
> like. SQLite used to fit on a floppy and probably still does.

Good point!

I lost my perspective on SQLite's intended audience.

Regards,
Malcolm


Re: [sqlite] SAVEPOINT : Seems don't work

2009-02-23 Thread Dan

On Feb 24, 2009, at 3:02 AM, REPKA_Maxime_NeufBox wrote:

> Hello,
>
> I am quite new to use SQLite
>
> I tried to use SAVEPOINT command but didn't succeed
>
> Does this command work ? how ?

We hope so. What happened to indicate it did not succeed? In what
way did the SAVEPOINT command malfunction?

Dan.



[sqlite] a question on the callback function's return values of sqlite3_exec()

2009-02-23 Thread liubin liu

My question is about the callback function's return values with
sqlite3_exec().

When using sqlite3_exec() to do "SELECT * FROM ...", how can I get all the
returned values out through the callback function?

struct olt_info
{

int olt_index;
int olt_logo;
char* olt_line;
// int nmber;
};

int my_callback(void *olt_temp, int argc, char *value[], char *name[])
{
struct olt_info *pdata = NULL;
pdata = (struct olt_info *)olt_temp; 

int jj;
for (int i = 0; i < argc; i++)
jj = printf("%s == %s\n", name[i], value[i]);

pdata->olt_index = (int)atoi(value[0]);
pdata->olt_logo = (int)atoi(value[1]);
pdata->olt_line = value[2];

return 0;
}

It prints the result, but each invocation of the callback overwrites the
same struct, so only one row's values can be read back afterwards.
How can I get all of the values?
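
The callback is invoked once per result row, so a single struct gets
overwritten on every row (and value[2] points at memory SQLite reclaims
after the callback returns). The usual fix is to append a copy of each row
to a growable container reached through the void* argument. The same
accumulate-per-row shape, sketched with Python's stdlib sqlite3 (names
taken from the question, chosen only so it is easy to run):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE olt(olt_index INTEGER, olt_logo INTEGER, olt_line TEXT)")
conn.executemany("INSERT INTO olt VALUES (?, ?, ?)",
                 [(1, 10, "a"), (2, 20, "b"), (3, 30, "c")])

# Equivalent of the C callback: called once per row; append a copy of the
# row to a list instead of overwriting one struct.
results = []
def per_row(index, logo, line):
    results.append({"olt_index": index, "olt_logo": logo, "olt_line": line})

for row in conn.execute("SELECT * FROM olt"):
    per_row(*row)

print(len(results))  # -> 3
```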




[sqlite] How to use wild characters to select rows?

2009-02-23 Thread Pramoda M. A
Dear All,

 

Can anybody please help me with how to use wildcard characters to select rows?

 

With Regards

Pramoda.M.A

KPIT Cummins Infosystems Limited | Bengaluru | Board: +91 80 30783905

 



Re: [sqlite] How to use wild characters to select rows?

2009-02-23 Thread Roger Binns
Pramoda M. A wrote:
> Please can anybody help me how to use wild characters to select rows?

http://www.catb.org/~esr/faqs/smart-questions.html
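
For reference, SQLite selects rows with wildcards via the LIKE operator
('%' matches any run of characters, '_' exactly one) or the GLOB operator
('*' and '?', case-sensitive Unix-glob rules). A quick sketch with
Python's stdlib sqlite3 and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people(name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("Pramoda",), ("Prasad",), ("Roger",)])

# LIKE: '%' = any sequence of characters, '_' = exactly one character.
like = [r[0] for r in conn.execute(
    "SELECT name FROM people WHERE name LIKE 'Pra%' ORDER BY name")]

# GLOB: '*' and '?', and unlike LIKE it is case-sensitive.
glob = [r[0] for r in conn.execute(
    "SELECT name FROM people WHERE name GLOB 'R?ger'")]

print(like, glob)  # -> ['Pramoda', 'Prasad'] ['Roger']
```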

Roger
