Re: [sqlite] Multi table select

2010-08-05 Thread Igor Tandetnik
Guy (Hotmail)  wrote:
> I think that:
> Nevermind.. I figured it out.
> 
> I needed a left join.
> 
> does not cut it as a solution. We newbies want a complete solution to
> learn from, like the complete SQL statement that you used to solve your
> problem.

select table1.id, table2.date
from table1 left join table2 on (table1.id = table2.id);
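For anyone who wants to see the difference between the two joins end to end, here is a minimal, self-contained sketch using Python's sqlite3 module (table names follow the thread; the sample row values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1(id INTEGER PRIMARY KEY);
    CREATE TABLE table2(id INTEGER, date TEXT);
    INSERT INTO table1 VALUES (1), (2), (3);
    INSERT INTO table2 VALUES (1, '2010-08-05');
""")

# Inner join (the original attempt): only rows present in BOTH tables.
inner = conn.execute(
    "SELECT table1.id, table2.date FROM table1, table2 "
    "WHERE table1.id = table2.id").fetchall()

# Left join: every row of table1, with NULL where table2 has no match.
left = conn.execute(
    "SELECT table1.id, table2.date FROM table1 "
    "LEFT JOIN table2 ON (table1.id = table2.id) "
    "ORDER BY table1.id").fetchall()

print(inner)  # [(1, '2010-08-05')]
print(left)   # [(1, '2010-08-05'), (2, None), (3, None)]
```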

Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multi table select

2010-08-05 Thread Guy (Hotmail)
Hi,
I think that:
Nevermind.. I figured it out.

I needed a left join.

does not cut it as a solution. We newbies want a complete solution to
learn from, like the complete SQL statement that you used to solve your
problem.

thanks
Guy

--
From: "taftech" 
Sent: Thursday, August 05, 2010 8:55 PM
To: 
Subject: Re: [sqlite] Multi table select

>
> Nevermind.. I figured it out.
>
> I needed a left join.
>
>
>
> taftech wrote:
>>
>> I have two tables that I want to select data from
>>
>> For example
>> table1 has one column "id" which has 100 entries
>>
>> table2 has two columns, "id" and "date"
>> the "id" columns match
>> table2 has 1000 entries, but only 50 of them match with table1
>>
>> is there a way I can do a single select that will pull all 100 rows
>> from table1 and include the date info for the 50 that are in table2?
>>
>> I tried doing select table1.id,table2.date from table1,table2 where
>> table1.id=table2.id
>>
>> but I realized that only selects the records that exist in both tables
>>
>> Thanks
>>
>
> -- 
> View this message in context: 
> http://old.nabble.com/Multi-table-select-tp29356401p29356402.html
> Sent from the SQLite mailing list archive at Nabble.com.
>


Re: [sqlite] Multi table select

2010-08-05 Thread taftech

Nevermind.. I figured it out.  

I needed a left join.



taftech wrote:
> 
> I have two tables that I want to select data from
> 
> For example 
> table1 has one column "id" which has 100 entries
> 
> table2 has two columns, "id" and "date"
> the "id" columns match
> table2 has 1000 entries, but only 50 of them match with table1
> 
> is there a way I can do a single select that will pull all 100 rows
> from table1 and include the date info for the 50 that are in table2?
> 
> I tried doing select table1.id,table2.date from table1,table2 where
> table1.id=table2.id
> 
> but I realized that only selects the records that exist in both tables
> 
> Thanks
> 

-- 
View this message in context: 
http://old.nabble.com/Multi-table-select-tp29356401p29356402.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Multi table select

2010-08-05 Thread taftech

I have two tables that I want to select data from

For example 
table1 has one column "id" which has 100 entries

table2 has two columns, "id" and "date"
the "id" columns match
table2 has 1000 entries, but only 50 of them match with table1

is there a way I can do a single select that will pull all 100 rows from
table1 and include the date info for the 50 that are in table2?

I tried doing select table1.id,table2.date from table1,table2 where
table1.id=table2.id

but I realized that only selects the records that exist in both tables

Thanks
-- 
View this message in context: 
http://old.nabble.com/Multi-table-select-tp29356401p29356401.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Substring (LIKE "%key%") searches, would FTS3 with suffix-tree tokenizer be the fast way?

2010-08-05 Thread Simon Slavin

On 5 Aug 2010, at 10:03pm, Sam Roberts wrote:

> But do you think the section would make the counting faster? I think
> I'd have to get the row counts like this, which would still do the
> slow full table scan:
> 
>  select section, count(*) from my_table where name like '%e%' group by 
> section;

But 'group by section' can profit from the index on the section column so it 
should be faster.

As with all these things, the suggestion is to try it and see.  You should try 
six or seven different solutions including shuffling columns and indexes before 
you settle on the one that will be in your final code.

Simon.


Re: [sqlite] Substring (LIKE "%key%") searches, would FTS3 with suffix-tree tokenizer be the fast way?

2010-08-05 Thread Sam Roberts
On Thu, Aug 5, 2010 at 1:37 PM, Simon Slavin  wrote:
>
> On 5 Aug 2010, at 8:42pm, Sam Roberts wrote:
>
>> select substr(name,1,1), count(*) from my_table where name like '%e%'
>> group by substr(name,1,1);
>
> If you are constantly going to be using the first character of the name like 
> that, give it a column of its own with its own index.

That's a good idea. I think it would help a lot with row fetching if
section were its own column:

  select * from my_table where section is "g" and name like "%e%"
order by name limit 1 offset 4;

But do you think the section would make the counting faster? I think
I'd have to get the row counts like this, which would still do the
slow full table scan:

  select section, count(*) from my_table where name like '%e%' group by section;

Cheers,
Sam


Re: [sqlite] Substring (LIKE "%key%") searches, would FTS3 with suffix-tree tokenizer be the fast way?

2010-08-05 Thread Simon Slavin

On 5 Aug 2010, at 8:42pm, Sam Roberts wrote:

> select substr(name,1,1), count(*) from my_table where name like '%e%'
> group by substr(name,1,1);

If you are constantly going to be using the first character of the name like 
that, give it a column of its own with its own index.

Simon.


[sqlite] Substring (LIKE "%key%") searches, would FTS3 with suffix-tree tokenizer be the fast way?

2010-08-05 Thread Sam Roberts
I'd appreciate any suggestions on good ways to do this; I'm neither an SQL
nor an SQLite expert, so I might be thinking about it all wrong.

I have something like a (read-only) address book/rolodex, with interactive
searching. As users type into the search box, I need to first know for each
section how many rows match the substring typed so far.  I only display the
rows that are visible on screen.

I have two queries:

(A) I count the rows in a letter group.

If they typed "e":

select substr(name,1,1), count(*) from my_table where name like '%e%'
group by substr(name,1,1);
A|94
B|118
C|131
...

This is too slow, ~3sec, with 2500 rows, and we want to have 1 rows.

Worse, when they type "es", the search is as slow after they type "s" as when
they typed "e", even though the "es" rows are a sub-set of the rows that
matched "e".

FTS3 only searches full terms/words by default, but I think if I built a custom
tokenizer that returned all the suffix trees for a name:

"fu bar" => [ "r", "ar", "bar", " bar", "u bar", "fu bar"]

then I could rewrite query (A) like this:

select substr(name,1,1), count(*) from my_table where name match 'e*'
group by substr(name,1,1);

Is this a reasonable approach? Is there a better way? Has somebody
else done this?
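For illustration, here is a rough sketch of the suffix idea using plain Python and an ordinary indexed side table rather than an actual FTS3 tokenizer (the table and column names are invented): store every suffix of each name, so a substring search becomes a prefix search that the index can serve.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table(ref INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE name_suffix(suffix TEXT, ref INTEGER);
    CREATE INDEX name_suffix_idx ON name_suffix(suffix);
""")

def suffixes(s):
    # "fu bar" -> ["r", "ar", "bar", " bar", "u bar", "fu bar"]
    return [s[i:] for i in range(len(s) - 1, -1, -1)]

names = ["fu bar", "alice", "bob"]
for ref, name in enumerate(names):
    conn.execute("INSERT INTO my_table VALUES (?, ?)", (ref, name))
    conn.executemany("INSERT INTO name_suffix VALUES (?, ?)",
                     [(suf, ref) for suf in suffixes(name)])

# Substring search for 'e' becomes a prefix match on the suffix table;
# a prefix LIKE can use the index when the LIKE optimization applies.
refs = {r[0] for r in conn.execute(
    "SELECT DISTINCT ref FROM name_suffix WHERE suffix LIKE 'e' || '%'")}
print(sorted(refs))  # only "alice" contains 'e'
```

The storage cost is quadratic in name length, which is the usual trade-off of suffix indexing.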



(B) I access specific rows within a letter group.

For visible rows, I fetch them by offset into a letter group, so row 4 in the
"g" section of names containing "e" would be:

select * from my_table where name like "g%" and name like "%e%" order
by name limit 1 offset 4;

The performance for this is OK right now; I think that's because the first LIKE
can use the index, so the linear scan covers only a few hundred rows. Or it
could be that the on-screen display of each row is slower than the DB search. I
think it might become a problem, though.

I'm not sure how I would rewrite this to use FTS3 if it turns out to be too
slow for a larger DB; maybe a tokenizer that puts the first letter of the
name as the first letter of every suffix?


[sqlite] data race seen in sqlite 3.6.16

2010-08-05 Thread venkat_02

Hi,
   I am using version 3.6.18 of SQLite and see the data race below (as shown
by the valgrind/Helgrind tool). I was wondering if this has been captured
anywhere, or whether a fix for it is available in the latest version, 3.7 (I
was considering an upgrade if it included a fix for this). Any info on this
or pointers in the right direction would be much appreciated.

==2454== Possible data race during write of size 4 at 0xfe34fc0 by thread #125
==2454==    at 0xFDEA0D0: pthreadMutexAlloc (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDE9EA3: sqlite3MutexAlloc (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEAA77: unixEnterMutex (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEB577: unixLock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEA3C3: sqlite3OsLock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEF90F: pager_wait_on_lock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEFC57: pager_write_pagelist (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDF16BB: sqlite3PagerCommitPhaseOne (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDC999F: sqlite3BtreeCommitPhaseOne (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFE17D4B: vdbeCommit (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFE18497: sqlite3VdbeHalt (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFE11723: sqlite3VdbeExec (in /usr/binos/lib/libsqlite3.so)
==2454==  This conflicts with a previous write of size 4 by thread #131
==2454==    at 0xFDEA0D0: pthreadMutexAlloc (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDE9EA3: sqlite3MutexAlloc (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEAA77: unixEnterMutex (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEBA7F: unixUnlock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEA3EF: sqlite3OsUnlock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDED6B3: osUnlock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEE333: pager_unlock (in /usr/binos/lib/libsqlite3.so)
==2454==    by 0xFDEE407: pagerUnlockAndRollback (in /usr/binos/lib/libsqlite3.so)


Thanks,
Venkat
-- 
View this message in context: 
http://old.nabble.com/data-race-seen-in-sqlite-3.6.16-tp29356381p29356381.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Check for valid dates?

2010-08-05 Thread Simon Davies
On 5 August 2010 16:28, K Peters  wrote:
> Hi all,
>
> is there a way to check for valid ISO 8601 dates?

Something like:

sqlite> select
case theDate like '%T%' when 1 then
replace( datetime( julianday( theDate ) ),' ','T') else
datetime( julianday( theDate ) )
end like theDate||'%' is not null as status,
theDate
from (
select '2001-12-12 14:10:12' as theDate union
select '2001-1-31' union
select '2002-12-30T14:40' union
select '2002-12-30 14:30' union
select '2002-12-30 14:2' union
select '202-12-30' union
select '2002-12-30T14:30:22.22' union
select '2002-12-30 14:30.22.1z' union
select '2002-12-30T14.30.22.1e' union
select '2002-12-30 14:30:22Z' ) order by status;
0|2001-1-31
0|2002-12-30 14:2
0|2002-12-30 14:30.22.1z
0|2002-12-30T14.30.22.1e
0|202-12-30
1|2001-12-12 14:10:12
1|2002-12-30 14:30
1|2002-12-30 14:30:22Z
1|2002-12-30T14:30:22.22
1|2002-12-30T14:40

The SQLite date functions use a subset of ISO-8601 date and time
formats; see http://www.sqlite.org/lang_datefunc.html

Dates with week numbers and ordinal dates do not appear to be handled as input.
Also, invalid dates (e.g. 30th Feb) are not detected.

>
> Thanks
> Kai

Regards,
Simon


Re: [sqlite] Check for valid dates?

2010-08-05 Thread Simon Slavin

On 5 Aug 2010, at 4:28pm, K Peters wrote:

> is there a way to check for valid ISO 8601 dates?

Use a date library to turn your date into a number (e.g. epoch, Julian day), 
then turn it back into a string again.  If you get back the same string you 
start with, it's valid.
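A sketch of that round-trip check, driven from Python through SQLite's own date functions (this assumes full 'YYYY-MM-DD HH:MM:SS' strings; date-only values would need date() rather than datetime()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def looks_valid(s):
    # Convert to a Julian day number and back; a malformed or impossible
    # date either comes back NULL or comes back normalized to a
    # different string (e.g. Feb 30 rolls over into March).
    out = conn.execute("SELECT datetime(julianday(?))", (s,)).fetchone()[0]
    return out == s

print(looks_valid("2001-12-12 14:10:12"))  # True
print(looks_valid("2001-02-30 00:00:00"))  # False: normalized to March 2
print(looks_valid("2001-1-31 00:00:00"))   # False: month must be two digits
```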

Simon.


[sqlite] Check for valid dates?

2010-08-05 Thread K Peters
Hi all,

is there a way to check for valid ISO 8601 dates?

Thanks
Kai


Re: [sqlite] Docs patch

2010-08-05 Thread Richard Hipp
The mailing list software (Mailman) strips attachments.

On Thu, Aug 5, 2010 at 9:39 AM, Martin Jenkins  wrote:

> Hi,
>
> I sent a patch which fixes a fairly large number of typos in
> the 3.7.0 HTML documentation to the list on the 25th July.
>
> It was sent from an unsubscribed address and apart from a
> message saying it would need to be moderated, it appears
> to have vanished without trace.
>
> Is this the sort of patch that's wanted, or is the moderation
> period likely to take a week or more?
>
> The patch as a context diff is about 88kb so I'm reluctant to
> post it directly to the list.
>
> Martin



-- 
-
D. Richard Hipp
d...@sqlite.org


[sqlite] Docs patch

2010-08-05 Thread Martin Jenkins
Hi,

I sent a patch which fixes a fairly large number of typos in
the 3.7.0 HTML documentation to the list on the 25th July.

It was sent from an unsubscribed address and apart from a
message saying it would need to be moderated, it appears
to have vanished without trace.

Is this the sort of patch that's wanted, or is the moderation
period likely to take a week or more?

The patch as a context diff is about 88kb so I'm reluctant to
post it directly to the list.

Martin


Re: [sqlite] Repairing a Database.

2010-08-05 Thread Simon Slavin

On 4 Aug 2010, at 10:15pm, Kirk Clemons wrote:

> Also, does this mean that I could take a backup copy of my database and 
> import the data that is retrieved from the .dump command on the corrupt 
> database? 

We cannot tell what .dump will get from your old database because it is 
corrupt.  It might miss out lots of records.  It might appear to be doing all 
the records but actually put the same values in each one.  It might dump the 
entire database perfectly.

> If so how would I do this and get past the PRIMARY KEY/existing table errors?

You can edit the file to replace 'INSERT' with 'INSERT OR IGNORE' or some 
variation on that.  See



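One way to do that edit mechanically is a one-line sed pass over the dump file (the file name here is invented, and the tiny stand-in dump is just for illustration):

```shell
# Make a tiny stand-in dump file for illustration.
printf 'INSERT INTO "t" VALUES(1);\n' > dump.sql

# Rewrite every INSERT into INSERT OR IGNORE so restoring over an
# existing database skips rows that collide on the primary key.
sed 's/^INSERT INTO/INSERT OR IGNORE INTO/' dump.sql > dump_ignore.sql

cat dump_ignore.sql
```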
Simon.


Re: [sqlite] EXTERNAL: Process memory space exhausted in 3.7.0

2010-08-05 Thread Black, Michael (IS)
Can you force a WAL checkpoint periodically?  Also...of course test 3.7.0.1
 
PRAGMA database.wal_checkpoint;
 
I assume you're doing your thing in one transaction.  I would've thought WAL 
would notice that and do its auto-checkpoint anyway.  Can one of the experts 
comment on the expected behavior?  Should this guy's process keep growing like 
this with WAL and one huge transaction?
 
Unless you turn off auto checkpoint?
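For reference, a minimal sketch of forcing a checkpoint from Python (note: the FULL argument to wal_checkpoint arrived in a release after 3.7.0; plain "PRAGMA wal_checkpoint;" is the 3.7.0 form, and the file name here is a throwaway):

```python
import os
import sqlite3
import tempfile

# WAL needs a real file, not :memory:.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.commit()

# Force a checkpoint so the -wal file is folded back into the database.
busy, wal_frames, checkpointed = conn.execute(
    "PRAGMA wal_checkpoint(FULL)").fetchone()
print(mode, busy)
```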
 
Max process size for Windows 32-bit binaries is 2GB by default.  You can bump 
that to 3GB with the /3GB switch in boot.ini:
http://www.microsoft.com/whdc/system/platform/server/pae/paemem.mspx
You have to recompile with /LARGEADDRESSAWARE too
 
Just to see if this migrates the problem to 3GB instead of 2GB (or maybe it 
finishes OK?).
 
Does the machine have 
 
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Victor Morales-Duarte
Sent: Wed 8/4/2010 1:19 PM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] Process memory space exhausted in 3.7.0



Hello,



The windows desktop application that I maintain uses sqlite for some of
its storage. The data volume that we must handle has increased
dramatically over the past 6 months and, as was to be expected, update
performance has degraded accordingly. Because of that, I was very quick
to jump onto 3.7 when I read that WAL could be selected with it. The
speed improvements when updating data are indeed very noticeable when
running the application on my laptop's drive (3x faster) although not so
much when running on a fast SSD connected to it via ESATA (only about
20% extra speed);  I guess that the different ratio of improvement was
to be expected given the access characteristics of each. Overall, I have
to say that I believe that WAL was a great addition.



Unfortunately, I've encountered what could potentially be considered a
big problem. When I run a very large update the process space for the
application seems to be exhausted. The way that it manifested itself at
first was that there would be a disc I/O error, but because I test the
application while running perfmon.exe on win xp sp3 to monitor the IO
read bytes/sec, IO write bytes/sec, processor time and virtual bytes I
noticed that the virtual bytes were at the 2GB max process space limit
when the disc I/O error occurred.



In order to rule out the possibility that I was doing something wrong, I
decided to test a similar update using the sqlite3.exe CLI. During the
update, what the application will do is it will iterate over all the
records in a table in a specific order assigning a pair of integers to
two columns (both initially null) of each record, based on domain
specific rules; accordingly, the test with the CLI is the opposite
operation; I take a db file that is about 1.5 GB in size, with over 3.7
million records in the table that needs to be updated and then I proceed
to assign null to one of the columns for all records. After some time of
working, the virtual bytes (as reported by perfmon) hit the max process
space and the disk I/O error is  reported. At that time, the wal file is
over 5.5 GB in size and the shm file is over 10MB in size.



My initial guess is that there is a problem memory mapping files.



I wish that I could make the db available for testing but the data
contained in it cannot be disclosed due to an NDA and the schema is
proprietary information of my employer. First I need to finish a
workaround for this (it seems that by closing and reopening the db
connection, the situation improves somewhat) and then I will write a
small piece of code that will create a dummy database large enough that
the error can be reproduced in it so that I can post it in a reply to
this email.



Thank you!!!



Victor



Re: [sqlite] Vacuum'ing database returns a 'PRIMARY KEY must be unique' error in 3.7.0

2010-08-05 Thread Black, Michael (IS)
What happens if you do this with 3.6.23.1 or 3.7.0.1 ?
 
 
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Scott Crosby
Sent: Wed 8/4/2010 8:14 AM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] Vacuum'ing database returns a 'PRIMARY KEY must be 
unique' error in 3.7.0



I decided to vacuum my places.sqlite database (used by Firefox to
store its history and bookmarks) and got a primary key violation.

The database passes the analyzer and dumper with no errors. select
count(*) from the different tables also shows no errors.

However, a dump and attempted restore DOES find some anomalies, please
see the log below.

Scott


 Log

cro...@dragonlight:~/.mozilla/firefox/default.q0z$ sqlite3 places.sqlite
SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> vacuum;
Error: PRIMARY KEY must be unique
cro...@functor:~/.mozilla/firefox/default.q0z$ /tmp/sqlite3_analyzer
places.sqlite > /dev/null
cro...@functor:~/.mozilla/firefox/default.q0z$ sqlite3 places.sqlite
.dump > /dev/null
cro...@dragonlight:~/.mozilla/firefox/default.q0z$ sqlite3
places.sqlite .dump | sqlite3 /tmp/test.sqlite3
Error: near line 82900: moz_places.hidden may not be NULL
Error: near line 104665: moz_places.hidden may not be NULL

Those two lines consist of:
INSERT INTO "moz_places" VALUES(0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
INSERT INTO "moz_places" VALUES(0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);

From a year-old backup and dates in visited webpages, line #82900 was
created about 18 months ago and altered sometime in the last year.
Adjacent lines before and after show no errors and have consecutive
sequence numbers.


 Schema


CREATE TABLE moz_anno_attributes (id INTEGER PRIMARY KEY,name
VARCHAR(32) UNIQUE NOT NULL);
CREATE TABLE moz_annos (id INTEGER PRIMARY KEY,place_id INTEGER NOT
NULL,anno_attribute_id INTEGER,mime_type VARCHAR(32) DEFAULT
NULL,content LONGVARCHAR, flags INTEGER DEFAULT 0,expiration INTEGER
DEFAULT 0,type INTEGER DEFAULT 0,dateAdded INTEGER DEFAULT
0,lastModified INTEGER DEFAULT 0);
CREATE TABLE moz_bookmarks (id INTEGER PRIMARY KEY,type INTEGER, fk
INTEGER DEFAULT NULL, parent INTEGER, position INTEGER, title
LONGVARCHAR, keyword_id INTEGER, folder_type TEXT, dateAdded INTEGER,
lastModified INTEGER);
CREATE TABLE moz_bookmarks_roots (root_name VARCHAR(16) UNIQUE,
folder_id INTEGER);
CREATE TABLE moz_favicons (id INTEGER PRIMARY KEY, url LONGVARCHAR
UNIQUE, data BLOB, mime_type VARCHAR(32), expiration LONG);
CREATE TABLE moz_historyvisits (id INTEGER PRIMARY KEY, from_visit
INTEGER, place_id INTEGER, visit_date INTEGER, visit_type INTEGER,
session INTEGER);
CREATE TABLE moz_inputhistory (place_id INTEGER NOT NULL, input
LONGVARCHAR NOT NULL, use_count INTEGER, PRIMARY KEY (place_id,
input));
CREATE TABLE moz_items_annos (id INTEGER PRIMARY KEY,item_id INTEGER
NOT NULL,anno_attribute_id INTEGER,mime_type VARCHAR(32) DEFAULT
NULL,content LONGVARCHAR, flags INTEGER DEFAULT 0,expiration INTEGER
DEFAULT 0,type INTEGER DEFAULT 0,dateAdded INTEGER DEFAULT
0,lastModified INTEGER DEFAULT 0);
CREATE TABLE moz_keywords (id INTEGER PRIMARY KEY AUTOINCREMENT,
keyword TEXT UNIQUE);
CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url LONGVARCHAR,
title LONGVARCHAR, rev_host LONGVARCHAR, visit_count INTEGER DEFAULT
0, hidden INTEGER DEFAULT 0 NOT NULL, typed INTEGER DEFAULT 0 NOT
NULL, favicon_id INTEGER, frecency INTEGER DEFAULT -1 NOT NULL,
last_visit_date INTEGER);
CREATE UNIQUE INDEX moz_annos_placeattributeindex ON moz_annos
(place_id, anno_attribute_id);
CREATE INDEX moz_bookmarks_itemindex ON moz_bookmarks (fk,type);
CREATE INDEX moz_bookmarks_itemlastmodifiedindex ON moz_bookmarks (fk,
lastModified);
CREATE INDEX moz_bookmarks_parentindex ON moz_bookmarks (parent,position);
CREATE INDEX moz_historyvisits_dateindex ON moz_historyvisits (visit_date);
CREATE INDEX moz_historyvisits_fromindex ON moz_historyvisits (from_visit);
CREATE INDEX moz_historyvisits_placedateindex ON moz_historyvisits
(place_id, visit_date);
CREATE UNIQUE INDEX moz_items_annos_itemattributeindex ON
moz_items_annos (item_id, anno_attribute_id);
CREATE INDEX moz_places_faviconindex ON moz_places (favicon_id);
CREATE INDEX moz_places_frecencyindex ON moz_places (frecency);
CREATE INDEX moz_places_hostindex ON moz_places (rev_host);
CREATE INDEX moz_places_lastvisitdateindex ON moz_places (last_visit_date);
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
CREATE INDEX moz_places_visitcount ON moz_places (visit_count);
CREATE TRIGGER moz_bookmarks_beforedelete_v1_trigger BEFORE DELETE ON
moz_bookmarks FOR EACH ROW WHEN OLD.keyword_id NOT NULL BEGIN DELETE
FROM moz_keywords WHERE id = OLD.keyword_id AND  NOT EXISTS (SELECT id
FROM moz_bookmarks WHERE keyword_id = OLD.keyword_id AND id <> OLD.id
LIMIT 1); END;

Re: [sqlite] How to update many rows efficiently?

2010-08-05 Thread Benoit Mortgat
2010/8/5 Dominique Pellé :
> Using information in previous reply, I can do it with 2 UPDATE queries
> as follows (but I suspect that there is a better solution).
>
>  UPDATE t1 SET l_nm = (SELECT l_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
>  WHERE ID_PK IN (SELECT ID_FK FROM t2);
>
>  UPDATE t1 SET r_nm = (SELECT r_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
>  WHERE ID_PK IN (SELECT ID_FK FROM t2);

For this kind of statement you can use either:

UPDATE t1 SET l_nm = (SELECT l_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK),
 r_nm = (SELECT r_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
WHERE ID_PK IN (SELECT ID_FK FROM t2);

or

INSERT OR REPLACE INTO t1(ID_PK, l_nm, r_nm)
SELECT t1.ID_PK, t2.l_nm, t2.r_nm
FROM t1 INNER JOIN t2 ON ( t2.ID_FK = t1.ID_PK )

Including t1 in the SELECT statement above is necessary in order not to
add new rows but only to update existing ones, and ID_PK must be declared
as the primary key.
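For anyone following along, a runnable sketch of the single-UPDATE form with toy data (the column values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1(ID_PK INTEGER PRIMARY KEY, l_nm TEXT, r_nm TEXT);
    CREATE TABLE t2(ID_FK INTEGER, l_nm TEXT, r_nm TEXT);
    INSERT INTO t1(ID_PK) VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1, 'L1', 'R1'), (2, 'L2', 'R2');
""")

# Both columns updated in one statement; rows with no match in t2
# (here ID_PK = 3) are left untouched thanks to the WHERE clause.
conn.execute("""
    UPDATE t1 SET
        l_nm = (SELECT l_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK),
        r_nm = (SELECT r_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
    WHERE ID_PK IN (SELECT ID_FK FROM t2)
""")

rows = conn.execute("SELECT * FROM t1 ORDER BY ID_PK").fetchall()
print(rows)  # [(1, 'L1', 'R1'), (2, 'L2', 'R2'), (3, None, None)]
```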

Regards,

-- 
Benoit Mortgat
20, avenue Marcel Paul
69200 Vénissieux, France
+33 6 17 15 41 58
+33 4 27 11 61 23


Re: [sqlite] linux ubuntu 3.7.01 installs sqlite3 3.6.22

2010-08-05 Thread Sylvain Pointeau
I learned something today :-)
I added /usr/local/lib to my /etc/ld.so.conf
then run "sudo ldconfig"
then I could use the latest sqlite3 in my /usr/local/ directory


Re: [sqlite] Repairing a Database.

2010-08-05 Thread Kirk Clemons
Also, does this mean that I could take a backup copy of my database and import 
the data that is retrieved from the .dump command on the corrupt database? 

If so how would I do this and get past the PRIMARY KEY/existing table errors?

~Kirk

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Roger Binns
Sent: Tuesday, August 03, 2010 5:56 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Repairing a Database.

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/02/2010 07:42 AM, Kirk Clemons wrote:
> I have seen some information on repairing a corrupt database and recovering 
> at least some of the data. But I have not had any luck finding out how to do 
> it.

You can use .dump in the shell.  It does a 'select *' on each table
outputting the rows (which will be in rowid order).  If it gets
SQLITE_CORRUPT then it repeats the select, but in reverse rowid order so in
theory you'll get the rows before and after the corruption.

In general you cannot repair or recover a corrupt database because to do so
would mean that information has to be stored redundantly - ie you could use
a duplicate to reconstruct the original.  The one exception is that indices
can be dropped and recreated since they are redundant.

The file format information will be most helpful.  The two documents
describe the same thing but with different styles.

  http://www.sqlite.org/fileformat.html
  http://www.sqlite.org/fileformat2.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkxYup8ACgkQmOOfHg372QTYuwCg2GpSzG5qzltWQliyplKyPU2r
sTAAnROPV+qQcmhouUl9/z6RLDMnTMjm
=fvzY
-END PGP SIGNATURE-


[sqlite] Process memory space exhausted in 3.7.0

2010-08-05 Thread Victor Morales-Duarte
Hello,

 

The windows desktop application that I maintain uses sqlite for some of
its storage. The data volume that we must handle has increased
dramatically over the past 6 months and, as was to be expected, update
performance has degraded accordingly. Because of that, I was very quick
to jump onto 3.7 when I read that WAL could be selected with it. The
speed improvements when updating data are indeed very noticeable when
running the application on my laptop's drive (3x faster) although not so
much when running on a fast SSD connected to it via ESATA (only about
20% extra speed);  I guess that the different ratio of improvement was
to be expected given the access characteristics of each. Overall, I have
to say that I believe that WAL was a great addition.

 

Unfortunately, I've encountered what could potentially be considered a
big problem. When I run a very large update the process space for the
application seems to be exhausted. The way that it manifested itself at
first was that there would be a disc I/O error, but because I test the
application while running perfmon.exe on win xp sp3 to monitor the IO
read bytes/sec, IO write bytes/sec, processor time and virtual bytes I
noticed that the virtual bytes were at the 2GB max process space limit
when the disc I/O error occurred.

 

In order to rule out the possibility that I was doing something wrong, I
decided to test a similar update using the sqlite3.exe CLI. During the
update, what the application will do is it will iterate over all the
records in a table in a specific order assigning a pair of integers to
two columns (both initially null) of each record, based on domain
specific rules; accordingly, the test with the CLI is the opposite
operation; I take a db file that is about 1.5 GB in size, with over 3.7
million records in the table that needs to be updated and then I proceed
to assign null to one of the columns for all records. After some time of
working, the virtual bytes (as reported by perfmon) hit the max process
space and the disk I/O error is  reported. At that time, the wal file is
over 5.5 GB in size and the shm file is over 10MB in size.

 

My initial guess is that there is a problem memory mapping files.

 

I wish that I could make the db available for testing but the data
contained in it cannot be disclosed due to an NDA and the schema is
proprietary information of my employer. First I need to finish a
workaround for this (it seems that by closing and reopening the db
connection, the situation improves somewhat) and then I will write a
small piece of code that will create a dummy database large enough that
the error can be reproduced in it so that I can post it in a reply to
this email. 

 

Thank you!!!

 

Victor



Re: [sqlite] Re Populating a database from a file

2010-08-05 Thread Kirk Clemons
Also, I forgot to mention that binary data will not normally work in CSV
format. It's best to leave the settings at their default and do:

.mode list
.separator |
.output mytable.sql/.txt
.dump mytable
_
.read mytable.sql/.txt

~Kirk
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of john knops
Sent: Wednesday, August 04, 2010 6:57 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Re Populating a database from a file

Thanks for the prompt reply.
What form should the file take? I've tried enclosing the data for each
column in double quotes and separating them with either a comma, tab or
space, but I get the error "expected 5 columns but only found 1". Using | I
get "datatype mismatch".
Also, when I use .dump, does it destroy the table or just copy it to a
text file?


Re: [sqlite] Re Populating a database from a file

2010-08-05 Thread Kirk Clemons
.dump emits the table's CREATE statement together with INSERT statements for 
the data, generated by doing a 'select *'.
You will want to make sure you import or read the file with the same settings 
as the .dump was done in.

.show will show the settings for mode, separator, etc.
Exporting a table to a CSV file might look something like this (note that .dump always emits SQL INSERT statements regardless of mode; for plain CSV rows, use a SELECT * instead of .dump);

.mode csv
.separator ,
.output mytable.csv
.dump mytable

When you are ready to read or import the file, use basically the same 
settings;

.mode csv
.separator ,
.import mytable.csv mytable   (or .read if the file contains SQL statements)
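For anyone driving SQLite from code rather than the shell, the same job as .import can be done with Python's stdlib csv module plus executemany. A minimal sketch; the file and table names here are placeholders, and the table is assumed to already exist with a matching column count:

```python
import csv
import sqlite3

def import_csv(db_path, table, csv_path):
    """Load a CSV file into an existing table, one row per CSV record."""
    con = sqlite3.connect(db_path)
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    placeholders = ",".join("?" * len(rows[0]))
    with con:  # commits on success, rolls back on error
        con.executemany(
            f"INSERT INTO {table} VALUES ({placeholders})", rows)
    con.close()
    return len(rows)
```

Note that csv.reader yields strings; with SQLite's flexible typing they are stored as text unless the column affinity converts them.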

~Kirk


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of john knops
Sent: Wednesday, August 04, 2010 6:57 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Re Populating a database from a file

Thanks for the prompt reply.
What form should the file take? I've tried enclosing the data for each
column in double quotes and separating them with either a comma, tab or
space, but I get the error "expected 5 columns but only found 1". Using | I
get "datatype mismatch".
Also, when I use .dump, does it destroy the table or just copy it to a
text file?


Re: [sqlite] Populating a database from a file

2010-08-05 Thread Kirk Clemons
There is the .read command, which will read in a file containing insert 
statements. This is done through the shell.

~Kirk
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of john knops
Sent: Tuesday, August 03, 2010 6:53 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Populating a database from a file

If I'm using MySQL I can populate a database with "Load data infile 
'/tmp/data.txt' into table table_1"
Is there a similar way of loading data into a sqlite3 database?

Thanks





Re: [sqlite] Repairing a Database.

2010-08-05 Thread Kirk Clemons
Thank you Roger,
I will look this over and see what I can do.

~Kirk

Kirk Clemons
Technical Support Analyst
Chief Architect(r)
6500 N. Mineral Dr.
Coeur d'Alene, Idaho 83815
Phone: (800)482-4433 
   (208)664-4204
 
Professional Software
www.chiefarchitect.com
 
Consumer Software
www.HomeDesignerSoftware.com
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Roger Binns
Sent: Tuesday, August 03, 2010 5:56 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Repairing a Database.


On 08/02/2010 07:42 AM, Kirk Clemons wrote:
> I have seen some information on repairing a corrupt database and recovering 
> at least some of the data. But I have not had any luck finding out how to do 
> it.

You can use .dump in the shell.  It does a 'select *' on each table
outputting the rows (which will be in rowid order).  If it gets
SQLITE_CORRUPT then it repeats the select, but in reverse rowid order so in
theory you'll get the rows before and after the corruption.

In general you cannot repair or recover a corrupt database, because doing so
would require information to be stored redundantly, i.e. a duplicate you
could use to reconstruct the original.  The one exception is indices, which
can be dropped and recreated since they are redundant.
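The two-direction scan Roger describes can be sketched in a few lines of Python. This is only a rough illustration of the idea, not a drop-in recovery tool; `salvage_rows` and its arguments are names invented for the example:

```python
import sqlite3

def salvage_rows(db_path, table):
    """Read as many rows of `table` as possible from a possibly corrupt db:
    scan forward by rowid until an error, then backward to pick up the tail."""
    con = sqlite3.connect(db_path)
    rows = []
    try:
        for row in con.execute(f"SELECT rowid, * FROM {table} ORDER BY rowid"):
            rows.append(row)
    except sqlite3.DatabaseError:
        seen = {r[0] for r in rows}
        try:
            for row in con.execute(
                    f"SELECT rowid, * FROM {table} ORDER BY rowid DESC"):
                if row[0] in seen:
                    break  # reached the part already recovered going forward
                rows.append(row)
        except sqlite3.DatabaseError:
            pass  # corruption hit from this side too; keep what we have
    return rows
```

On a healthy database the forward pass simply returns every row; on a corrupt one you get the rows before and, in theory, after the damaged pages.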

The file format information will be most helpful.  The two documents
describe the same thing but with different styles.

  http://www.sqlite.org/fileformat.html
  http://www.sqlite.org/fileformat2.html

Roger


[sqlite] Vacuum'ing database returns a 'PRIMARY KEY must be unique' error in 3.7.0

2010-08-05 Thread Scott Crosby
I decided to vacuum my places.sqlite database (used by Firefox to
store its history and bookmarks) and got a primary key violation.

The database passes the analyzer and dumper with no errors. select
count(*) from the different tables also shows no errors.

However, a dump and attempted restore DOES find some anomalies, please
see the log below.

Scott


 Log

cro...@dragonlight:~/.mozilla/firefox/default.q0z$ sqlite3 places.sqlite
SQLite version 3.7.0
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> vacuum;
Error: PRIMARY KEY must be unique
cro...@functor:~/.mozilla/firefox/default.q0z$ /tmp/sqlite3_analyzer
places.sqlite > /dev/null
cro...@functor:~/.mozilla/firefox/default.q0z$ sqlite3 places.sqlite
.dump > /dev/null
cro...@dragonlight:~/.mozilla/firefox/default.q0z$ sqlite3
places.sqlite .dump | sqlite3 /tmp/test.sqlite3
Error: near line 82900: moz_places.hidden may not be NULL
Error: near line 104665: moz_places.hidden may not be NULL

Those two lines consist of:
INSERT INTO "moz_places" VALUES(0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
INSERT INTO "moz_places" VALUES(0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);

From a year-old backup and dates in visited webpages, line #82900 was
created about 18 months ago and altered sometime in the last year.
Adjacent lines before and after show no errors and have consecutive
sequence numbers.
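For anyone hitting something similar: PRAGMA integrity_check is worth running alongside the analyzer and dumper, since it walks the whole b-tree and reports 'ok' as a single row on a healthy file. A minimal illustration (the table here mimics the NOT NULL column from moz_places but is otherwise made up); note that on a fresh database the NULL rows from the dump above are rejected at insert time, which is exactly what the restore log shows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE moz_like (id INTEGER PRIMARY KEY, "
            "hidden INTEGER DEFAULT 0 NOT NULL)")

# A healthy database reports a single row containing 'ok'.
assert con.execute("PRAGMA integrity_check").fetchone()[0] == "ok"

# Re-inserting one of the dumped all-NULL rows fails, as in the log above.
try:
    con.execute("INSERT INTO moz_like VALUES (0, NULL)")
except sqlite3.IntegrityError:
    pass  # constraint violation reported, matching the restore errors
```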


 Schema


CREATE TABLE moz_anno_attributes (id INTEGER PRIMARY KEY,name
VARCHAR(32) UNIQUE NOT NULL);
CREATE TABLE moz_annos (id INTEGER PRIMARY KEY,place_id INTEGER NOT
NULL,anno_attribute_id INTEGER,mime_type VARCHAR(32) DEFAULT
NULL,content LONGVARCHAR, flags INTEGER DEFAULT 0,expiration INTEGER
DEFAULT 0,type INTEGER DEFAULT 0,dateAdded INTEGER DEFAULT
0,lastModified INTEGER DEFAULT 0);
CREATE TABLE moz_bookmarks (id INTEGER PRIMARY KEY,type INTEGER, fk
INTEGER DEFAULT NULL, parent INTEGER, position INTEGER, title
LONGVARCHAR, keyword_id INTEGER, folder_type TEXT, dateAdded INTEGER,
lastModified INTEGER);
CREATE TABLE moz_bookmarks_roots (root_name VARCHAR(16) UNIQUE,
folder_id INTEGER);
CREATE TABLE moz_favicons (id INTEGER PRIMARY KEY, url LONGVARCHAR
UNIQUE, data BLOB, mime_type VARCHAR(32), expiration LONG);
CREATE TABLE moz_historyvisits (id INTEGER PRIMARY KEY, from_visit
INTEGER, place_id INTEGER, visit_date INTEGER, visit_type INTEGER,
session INTEGER);
CREATE TABLE moz_inputhistory (place_id INTEGER NOT NULL, input
LONGVARCHAR NOT NULL, use_count INTEGER, PRIMARY KEY (place_id,
input));
CREATE TABLE moz_items_annos (id INTEGER PRIMARY KEY,item_id INTEGER
NOT NULL,anno_attribute_id INTEGER,mime_type VARCHAR(32) DEFAULT
NULL,content LONGVARCHAR, flags INTEGER DEFAULT 0,expiration INTEGER
DEFAULT 0,type INTEGER DEFAULT 0,dateAdded INTEGER DEFAULT
0,lastModified INTEGER DEFAULT 0);
CREATE TABLE moz_keywords (id INTEGER PRIMARY KEY AUTOINCREMENT,
keyword TEXT UNIQUE);
CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url LONGVARCHAR,
title LONGVARCHAR, rev_host LONGVARCHAR, visit_count INTEGER DEFAULT
0, hidden INTEGER DEFAULT 0 NOT NULL, typed INTEGER DEFAULT 0 NOT
NULL, favicon_id INTEGER, frecency INTEGER DEFAULT -1 NOT NULL,
last_visit_date INTEGER);
CREATE UNIQUE INDEX moz_annos_placeattributeindex ON moz_annos
(place_id, anno_attribute_id);
CREATE INDEX moz_bookmarks_itemindex ON moz_bookmarks (fk,type);
CREATE INDEX moz_bookmarks_itemlastmodifiedindex ON moz_bookmarks (fk,
lastModified);
CREATE INDEX moz_bookmarks_parentindex ON moz_bookmarks (parent,position);
CREATE INDEX moz_historyvisits_dateindex ON moz_historyvisits (visit_date);
CREATE INDEX moz_historyvisits_fromindex ON moz_historyvisits (from_visit);
CREATE INDEX moz_historyvisits_placedateindex ON moz_historyvisits
(place_id, visit_date);
CREATE UNIQUE INDEX moz_items_annos_itemattributeindex ON
moz_items_annos (item_id, anno_attribute_id);
CREATE INDEX moz_places_faviconindex ON moz_places (favicon_id);
CREATE INDEX moz_places_frecencyindex ON moz_places (frecency);
CREATE INDEX moz_places_hostindex ON moz_places (rev_host);
CREATE INDEX moz_places_lastvisitdateindex ON moz_places (last_visit_date);
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
CREATE INDEX moz_places_visitcount ON moz_places (visit_count);
CREATE TRIGGER moz_bookmarks_beforedelete_v1_trigger BEFORE DELETE ON
moz_bookmarks FOR EACH ROW WHEN OLD.keyword_id NOT NULL BEGIN DELETE
FROM moz_keywords WHERE id = OLD.keyword_id AND  NOT EXISTS (SELECT id
FROM moz_bookmarks WHERE keyword_id = OLD.keyword_id AND id <> OLD.id
LIMIT 1); END;


Re: [sqlite] How to update many rows efficiently?

2010-08-05 Thread Dominique Pellé
Igor Tandetnik wrote:

> Dominique Pellé  wrote:
>> For example, given 2 tables t1 and t2, both with 2 columns as follows...
>>
>> Table t1:
>>
>>    ID   name
>>    --   
>>    1    
>>    2    NULL
>>    3    NULL
>>    4    
>>    (~1 million records)
>>
>> Table t2:
>>
>>    ID   name
>>    --   
>>    2    
>>    4    
>>    (~500,000 records)
>>
>> ... I would like to update the 'name' column in records of table t1, so that
>> they are the same as the 'name' column in table t2 when the ID columns
>> match.
>
> update t1 set name = (select name from t2 where t1.ID=t2.ID)
> where ID in (select ID from t2);
>
> Or, if t1.ID is a primary key or otherwise has a unique constraint:
>
> insert or replace into t1(ID, name)
> select ID, name from t2;


Thanks Igor.  That helped. I need to brush up my SQL.

I'm still curious: what if I need to update several columns?

Using almost the same example as in previous mail, I'd
like to update columns l_nm and r_nm in table t1 using
information in table t2 as depicted below:

Table t1:

   ID_PK   l_nm   r_nm
   --     
   1      NULL
   2   NULL   NULL
   3   NULL   NULL
   4      NULL
   (~1 million records)

Table t2:

   ID_FK   l_nm  r_nm
   --    
   2     
   4     
   (~500,000 records)


Table t1 (after update):

   ID_PK   l_nm   r_nm
   --  
   1      NULL
   2         <- l_nm & r_nm column updated.
   3   NULL   NULL
   4         <- l_nm & r_nm column updated.
   (~1 million records)

Using information in previous reply, I can do it with 2 UPDATE queries
as follows (but I suspect that there is a better solution).

  UPDATE t1 SET l_nm = (SELECT l_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
  WHERE ID_PK IN (SELECT ID_FK FROM t2);

  UPDATE t1 SET r_nm = (SELECT r_nm FROM t2 WHERE t1.ID_PK = t2.ID_FK)
  WHERE ID_PK IN (SELECT ID_FK FROM t2);
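For the record, the two statements can be folded into one UPDATE with a correlated subquery per column; SQLite accepts this form even though it rejects the multi-column subquery syntax from the linked page. A quick sketch with toy data shaped like the tables above (the placeholder names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (ID_PK INTEGER PRIMARY KEY, l_nm TEXT, r_nm TEXT);
    CREATE TABLE t2 (ID_FK INTEGER PRIMARY KEY, l_nm TEXT, r_nm TEXT);
    INSERT INTO t1 VALUES (1,'a',NULL),(2,NULL,NULL),(3,NULL,NULL),(4,'d',NULL);
    INSERT INTO t2 VALUES (2,'L2','R2'),(4,'L4','R4');
""")
# One UPDATE, one subquery per column to copy, guarded so that rows
# without a match in t2 are left untouched.
con.execute("""
    UPDATE t1 SET
        l_nm = (SELECT l_nm FROM t2 WHERE t2.ID_FK = t1.ID_PK),
        r_nm = (SELECT r_nm FROM t2 WHERE t2.ID_FK = t1.ID_PK)
    WHERE ID_PK IN (SELECT ID_FK FROM t2)
""")
rows = con.execute("SELECT * FROM t1 ORDER BY ID_PK").fetchall()
```

This still runs one subquery per column per matched row, so for very wide updates Igor's INSERT OR REPLACE variant may be faster when t1.ID is a primary key and every column is being replaced.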


I found information about updating multiple columns at...

http://it.toolbox.com/wiki/index.php/How_do_I_update_multiple_fields_with_single_update_statement_using_subqueries%3F

... but SQLite does not accept the kind of queries in above page.

Regards
Dominique


Re: [sqlite] linux ubuntu 3.7.01 installs sqlite3 3.6.22

2010-08-05 Thread Sylvain Pointeau
... my fault: the programs pick up the library from /usr/lib before
/usr/local/lib. It does not behave this way on my Mac OS X. I am now
looking into how to make /usr/local/lib take precedence at run time, even
for /usr/local/bin/sqlite3.


[sqlite] Extension for TCL evaluation by user-defined function inside SQLite

2010-08-05 Thread Alexey Pechnikov
I have a lot of TCL language structures stored in my databases and want to
access them from the sqlite3 shell, so I wrote an extension for this:

.load ./libsqlitetcl.so
-- TCL (cmd, argv)
SELECT TCL('info patchlevel');
8.5.8
SELECT TCL('return $argc','abba', 'baba');
2
SELECT TCL('return $argv','abba', 'baba');
abba baba
SELECT TCL('lindex $argv 0','abba', 'baba');
abba
SELECT TCL('lindex $argv 1','abba', 'baba');
baba
SELECT TCL('dict get [lindex $argv 0] mykey','key 1 mykey 2');
2

-- TCLCMD (cmd, args)
SELECT TCLCMD('dict get', 'key 1 mykey 2', 'mykey');
2
SELECT TCLCMD('dict get', 'key 1 mykey 2', 'key');
1
SELECT TCLCMD('lindex', 'key 1 mykey 2', 0);
key
SELECT TCLCMD('lindex', 'key 1 mykey 2', 2);
mykey

This is useful for testing and shell scripting. See sources here:
http://sqlite.mobigroup.ru/dir?name=ext/tcl
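The same idea (exposing a scripting language to SQL as a user-defined function) is available in Python's sqlite3 module through create_function. A rough analogue of TCL()/TCLCMD() for comparison, with an invented PY() function; eval here is for illustration only and must never see untrusted input:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# PY(expr, *argv): evaluate a Python expression with argv/argc bound,
# mirroring the TCL() call shape above.
def py_eval(expr, *argv):
    return eval(expr, {"argv": list(argv), "argc": len(argv)})

con.create_function("PY", -1, py_eval)  # -1 = variable argument count

print(con.execute("SELECT PY('argc', 'abba', 'baba')").fetchone()[0])     # 2
print(con.execute("SELECT PY('argv[0]', 'abba', 'baba')").fetchone()[0])  # abba
```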

-- 
Best regards, Alexey Pechnikov.
http://pechnikov.tel/


Re: [sqlite] How to update many rows efficiently?

2010-08-05 Thread Benoit Mortgat
On Thu, Aug 5, 2010 at 01:17, Igor Tandetnik  wrote:
>
> Or, if t1.ID is a primary key or otherwise has a unique constraint:
>
> insert or replace into t1(ID, name)
> select ID, name from t2;

This one is different because it would INSERT new rows into t1 for any ID
that exists in t2 but not in t1.


-- 
Benoit Mortgat
20, avenue Marcel Paul
69200 Vénissieux, France
+33 6 17 15 41 58
+33 4 27 11 61 23