[sqlite] record size limit ???

2006-09-28 Thread RohitPatel9999

Hi

Q1. Is there a maximum limit on record size?

Q2. What are the consequences of using such a large record size? I may need
large records in two or three tables.

In two or three of my tables, I may need 64 columns or more, and the
maximum record size may reach approximately 2 MB for some records if the
user fills in all columns fully. I am using only the column types INTEGER,
REAL, and TEXT (as shown in the sample table below); no column uses BLOB.
(SQLite 3.3.4, Win32)
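For what it's worth, SQLite happily stores multi-megabyte rows (the ceiling is a compile-time maximum-length limit on row/string size). A quick empirical check is to insert a row of roughly that size and read it back; a sketch using Python's bundled sqlite3 module rather than the C API, with an invented one-column table standing in for the wide schema below:

```python
import sqlite3

# In-memory database; a file-backed one behaves the same for this test.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

# Build a ~2 MB text value, comparable to a fully populated wide row.
big = "x" * (2 * 1024 * 1024)
con.execute("INSERT INTO t (id, payload) VALUES (1, ?)", (big,))
con.commit()

# Read it back and confirm nothing was truncated.
(length,) = con.execute("SELECT length(payload) FROM t").fetchone()
print(length)  # 2097152
con.close()
```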

Thanks
Rohit

/* CREATE Statement for sample table */
/* Note: VARCHAR is treated as TEXT in SQLite. VARCHAR with a size is used
only so the schema may also work for other databases. */

create table customers (
id INTEGER PRIMARY KEY,
dts REAL,   /* Date Time Stamp (when record was last updated) */
useridwml INTEGER,  /* id of user who modified last (this record) */
szname VARCHAR(32), /* Customer Name */
blockstatus INTEGER,/* Locked or Not */
bactivestatus INTEGER,  /* Active or Not */
bvisibilitystatus INTEGER,  /* Visible or Not */
obal REAL,  /* Opening Balance */
obalcr INTEGER, /* Cr or Dr Balance */
dtobal REAL,/* Date (of Opening Balance) */
szdesc VARCHAR(32), /* Description / Note / Comment */
szsalu VARCHAR(10), /* Salutation -> Mr./Ms./.. */
szfnm VARCHAR(20),  /* First Name */
szmnm VARCHAR(20),  /* Middle Name */
lnm VARCHAR(20),/* Last Name */
/* Bill_To Address */
sza1 VARCHAR(32),   /* Address Line 1 */
sza2 VARCHAR(32),   /* Address Line 2 */ 
sza3 VARCHAR(32),   /* Address Line 3 */
sza4 VARCHAR(32),   /* Address Line 4 */
szcity VARCHAR(20), /* City */
szzip VARCHAR(12),  /* Zip / Postal Code */
szstate VARCHAR(20),/* State / Province */
szcntry VARCHAR(20),/* Country / Region */
/* Ship_To Address */
sz2a1 VARCHAR(32),  /* Address Line 1 */
sz2a2 VARCHAR(32),  /* Address Line 2 */ 
sz2a3 VARCHAR(32),  /* Address Line 3 */
sz2a4 VARCHAR(32),  /* Address Line 4 */
sz2city VARCHAR(20),/* City */
sz2zip VARCHAR(12), /* Zip / Postal Code */
sz2state VARCHAR(20),   /* State / Province */
sz2cntry VARCHAR(20),   /* Country / Region */
szfullnmcontact VARCHAR(32),/* Name of primary contact person */
szph1 VARCHAR(20),  /* Phone Number 1 */
szph2 VARCHAR(20),  /* Phone Number 2 */
szfax1 VARCHAR(20), /* Fax Number 1 */
szemail1 VARCHAR(32),   /* Email Address 1 */
szwww1 VARCHAR(32), /* WebSite Address 1 */
/* Additional Info */
ltypid INTEGER, 
ltermsid INTEGER,   
dcrlmt REAL,
lpricelvlid INTEGER,
lnum INTEGER,   
szresalenum VARCHAR(20),
bdoc1 INTEGER,  
bdoc2 INTEGER,  
bdoc3 INTEGER,  
szrep VARCHAR(20), 
lprefcorrtypid INTEGER,
lTaxCode INTEGER, 
szit VARCHAR(20),
szvat1 VARCHAR(20),
szvat2 VARCHAR(20),
szst VARCHAR(20),
srt VARCHAR(20),
szcex VARCHAR(20),
szlic VARCHAR(20),
szudcf1 VARCHAR(20),
szudcf2 VARCHAR(20),
szudcf3 VARCHAR(20)
);




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: AW: [sqlite] Memory mapped db

2006-09-28 Thread John Stanton

Trevor Talbot wrote:

Michael is referring to a direct map from disk pages to memory pages
(OS notion of pages, not sqlite's), using something like mmap() on
unix or MapViewOfFile() on Windows.  This way memory is directly
backed by the file it refers to, instead of copying the data to
entirely new pages (possibly backed by swap).  It removes one level of
(redundant) cache.

The complications tend to come in when considering I/O control and
locking.  OS pages don't necessarily map to sqlite pages, so there can
be some "odd" boundaries there.  This would be most noticeable when
trying to flush data to disk.  (The typical mmap abstraction requires
you to force dirty OS pages to disk.  Interactions between file maps
(often copy-on-write) and underlying OS caches can be weird.)

You're also bounded by VM space when trying to map large files.  Most
mapping abstractions use "windows" intended to map several sequential
OS pages, and since sqlite randomly accesses pages, it would probably
be too much overhead when trying to handle files larger than the VM
space you're willing to sacrifice to the map.

In the general case I don't see it paying off, but in some specific
cases it could be a win.  I'd be interested to see experiments.




A well reasoned explanation.  We use mmap'ing with great success, but 
not for very large files or databases, for the reasons described above.


You definitely get a performance lift with mmap'd I/O largely because a 
level of buffer shadowing is removed.


On a B-Tree index access I measured a speed improvement of about 40% by 
changing from reads and local caching to mmap'd access.





Re: AW: [sqlite] Memory mapped db

2006-09-28 Thread Trevor Talbot

Michael is referring to a direct map from disk pages to memory pages
(OS notion of pages, not sqlite's), using something like mmap() on
unix or MapViewOfFile() on Windows.  This way memory is directly
backed by the file it refers to, instead of copying the data to
entirely new pages (possibly backed by swap).  It removes one level of
(redundant) cache.

The complications tend to come in when considering I/O control and
locking.  OS pages don't necessarily map to sqlite pages, so there can
be some "odd" boundaries there.  This would be most noticeable when
trying to flush data to disk.  (The typical mmap abstraction requires
you to force dirty OS pages to disk.  Interactions between file maps
(often copy-on-write) and underlying OS caches can be weird.)

You're also bounded by VM space when trying to map large files.  Most
mapping abstractions use "windows" intended to map several sequential
OS pages, and since sqlite randomly accesses pages, it would probably
be too much overhead when trying to handle files larger than the VM
space you're willing to sacrifice to the map.

In the general case I don't see it paying off, but in some specific
cases it could be a win.  I'd be interested to see experiments.
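For anyone wanting to experiment with the idea, the basic map-a-file-into-memory operation described above can be sketched with Python's mmap module (which wraps mmap() on unix and MapViewOfFile() on Windows); the file contents here are invented:

```python
import mmap
import os
import tempfile

# Create a small scratch file to map.
fd, path = tempfile.mkstemp()
os.write(fd, b"SQLite page data would live here")
os.close(fd)

with open(path, "r+b") as f:
    # The OS backs this memory directly with the file's pages;
    # no separate user-space copy of the data is made.
    mm = mmap.mmap(f.fileno(), 0)
    first = mm[:6]          # read the file through memory
    mm[:6] = b"sqlite"      # writes go back to the file through the map
    mm.flush()              # force dirty pages to disk (msync / FlushViewOfFile)
    mm.close()

with open(path, "rb") as f:
    after = f.read(6)
os.remove(path)
print(first, after)  # b'SQLite' b'sqlite'
```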




RE: [sqlite] Problem with .import

2006-09-28 Thread Rich Shepard

On Thu, 28 Sep 2006, Griggs, Donald wrote:


Depending on your needs, if it's awkward to trim away the final delimiter,
you might choose to simply add a dummy field to your table.


Donald,

  No, it's easy enough in joe to replace a space followed by the end-of-line
($) with just the end-of-line. This exercise was with a set of test data,
but I'll make sure that the real data have no trailing spaces when they're
inserted into the database.

  It's the apparent lack of documentation on these subtle points that I
think needs correcting. I wonder now how many more gotchas are waiting to
trap me as a naive user.

  Playing with the table data once it was available also taught me something
I did not anticipate. If the raw data file had the two text strings
delimited by quotes ("xxx"), that's how they were entered in the table, and
it would have apparently taken double quotes in the where clause of the
select statement to find them.

  Specifically, when I tried this:

select * from voting where cat = "nat" and pos = "pro";

nothing was returned, while 'select * from voting' dumped all values to the
screen. Hmmm-m-m.

  I deleted all the inserted records, removed the quotes from the two text
fields in the data file, and re-imported it. Now the first select statement
works. My inference is that the data file quotes became part of the string
as stored in the table, so the select statement would need to be

select * from voting where cat = '"nat"' and pos = '"pro"';

Awkward. Feh!
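Rich's observation is easy to reproduce: .import stores the surrounding double quotes as part of the TEXT value, so an exact-match WHERE must include them. A sketch using Python's sqlite3 module, simulating the imported values by hand:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE voting (cat TEXT, pos TEXT)")

# Simulate what .import did: the quotes in the file become part of the value.
con.execute("INSERT INTO voting VALUES (?, ?)", ('"nat"', '"pro"'))

# Matching the bare word finds nothing...
n_bare = con.execute("SELECT count(*) FROM voting WHERE cat = 'nat'").fetchone()[0]
# ...while a literal that includes the stored quotes matches.
n_quoted = con.execute("SELECT count(*) FROM voting WHERE cat = '\"nat\"'").fetchone()[0]
print(n_bare, n_quoted)  # 0 1
con.close()
```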

Thanks,

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
 Voice: 503-667-4517  Fax: 503-667-8863




RE: [sqlite] Problem with .import

2006-09-28 Thread Griggs, Donald
Regarding Rich Shepard's problem:
   "...tells me that 31 columns are expected, but it found 32."

And tagging onto Dennis Cote's explanation:
   "...I suspect you may have trailing spaces at the ends of your lines."


I agree with Dennis.  

Apparently there are different interpretations of delimited files.
If I dump an Informix file to delimited format, for example, a final
delimiter is placed just before the end-of-line, and the sqlite command-line
utility interprets that as an additional field.

Depending on your needs, if it's awkward to trim away the final delimiter,
you might choose to simply add a dummy field to your table.




Re: [sqlite] Problem with .import -- SOLVED

2006-09-28 Thread Rich Shepard

On Thu, 28 Sep 2006, Dennis Cote wrote:


I suspect you may have trailing spaces at the ends of your lines. The
.import command isn't very smart about things like that. Your separator is
set to one space, not arbitrary whitespace. If there is another separator
after the last field, it assumes there is another field there (which might
be an empty string) as well.


Dennis,

  That's it. I saw the trailing space but assigned no significance to it.
And that's why I wondered about the EOL.

  I tried to find documentation on the .import command (and the other
metacommands), but couldn't. There's nothing about them in Mike Owens' book,
the man page, or a Google search. This must be one of those pieces of
knowledge that are passed from generation to generation only when someone asks.

Many thanks!

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
 Voice: 503-667-4517  Fax: 503-667-8863




[sqlite] Re: Re: Custom functions in GROUP BY?

2006-09-28 Thread Igor Tandetnik

Ron Stevens  wrote:

The problem is that I can't produce a canonical representation of the
entries in my database. Often times some entries are subsets of
others, but considered equal. It's possible for an entry to be a
subset of two larger entries that aren't equal themselves and still
be equal to each of the larger entries.


Since your relation is not transitive, it is not an equivalence relation.
GROUP BY does not make sense for it. If you have A ~ B and A ~ C but B !~ C,
how exactly do you expect these three rows to be grouped?


Igor Tandetnik 






[sqlite] Problem with close

2006-09-28 Thread Onnig Kouyoumdjian
I'm using the sqlite3 dll with the Finisar.SQLite.NET wrapper in our
application written in C#. Currently I'm using version 3.3.7, but I have
been having the problem going back to version 3.3.4 which is when we
started using SQLite.  Most of the time everything works fine, but every
once in a blue moon (or so) when a request to close a connection is
made, SQLITE_ERROR is returned.  I've been able to determine that
SQLITE_ERROR is returned because the internal magic field happened to be
SQLITE_BUSY when I called close.

One of my biggest problems is that I have not yet been able to reproduce
this problem on demand.  I've written a test program that simulates the
actual application but so far I can't make it fail on my development
environment.  

I have not come to any conclusions whether the problem is from my code,
the SQLite.NET wrapper or SQLite.  

One thing that doesn't make sense to me is why SQLite was BUSY when I
called close. I don't get an exception or busy before I call close,
since I would log such a thing and I don't see anything in the log until
the SQLITE_ERROR.

Once the error occurs, I can ignore it and continue to use the database
by opening a new connection and everything works fine.  My problem is
that once the error happens during a close, I'm guessing that a VM is
now hanging around somewhere and is not released.  This causes a problem
for me where at the end of a job that my application runs, I delete the
database because it is no longer needed, but I can't in this case
because the file is locked in some way.  

My application is a Windows service, so the only way that I have found
so far to clear it up is to restart the service.

My application does use two threads for database calls, but the database
is called through a singleton class and I make sure that there is a lock
for all of the database calls including the open and close. So there
should never be any time where two threads would make a call to a
specific database.  I have two databases and it is possible that the two
threads call the different databases at the same time.

I am compiling the sqlite dll myself with VS.NET 2003 making sure it is
compiled with THREADSAFE=1.

I do have a theory that this problem shows up when the hard drive is
under a heavy load, but so far haven't been able to prove it. 

I will take suggestions on a fix or a Band-Aid.  The Band-Aid being: once
the job is done, if I can find a way for the sqlite dll to release the
files, then I can delete them and all will be well.

Thanks,

Onnig






Re: [sqlite] Problem with .import

2006-09-28 Thread Dennis Cote

Rich Shepard wrote:

  When I try to import a data file into an existing table, sqlite's shell
tells me that 31 columns are expected, but it found 32. Now I no longer have
the visual acuity I did when I was a teenager, but no matter how many times
I count the fields, they total 31. Therefore, I don't know why I'm getting
the error message.

  Here's the table:

CREATE TABLE voting (vote_id INTEGER PRIMARY KEY,
 cat TEXT, pos TEXT, pr1 REAL, pr2 REAL,
 pr3 REAL, pr4 REAL, pr5 REAL, pr6 REAL,
 pr7 REAL, pr8 REAL, pr9 REAL, pr10 REAL,
 pr11 REAL, pr12 REAL, pr13 REAL,
 pr14 REAL, pr15 REAL, pr16 REAL,
 pr17 REAL, pr18 REAL, pr19 REAL,
 pr20 REAL, pr21 REAL, pr22 REAL,
 pr23 REAL, pr24 REAL, pr25 REAL,
 pr26 REAL, pr27 REAL, pr28 REAL);

  And here's the first line of data (I set the .separator to " "):

1 "nat" "pro" 0.500 0.333 3.000 0.167 1.000 1.000 6.000 0.111 0.333 0.333
4.000 6.000 6.000 6.000 8.000 0.143 0.200 8.000 0.200 0.125 0.025 5.000
7.000 0.143 0.111 2.000 0.025 6.000

 What the CLI returns is:

sqlite> .import voting.dat voting
voting.dat line 1: expected 31 columns of data but found 32

  Is there a missing end-of-line terminator? What does sqlite see as the
32nd column?

TIA,

Rich


Rich,

It works just fine for me.

SQLite version 3.3.5
Enter ".help" for instructions
sqlite> CREATE TABLE voting (vote_id INTEGER PRIMARY KEY,
  ...>  cat TEXT, pos TEXT, pr1 REAL, pr2 REAL,
  ...>  pr3 REAL, pr4 REAL, pr5 REAL, pr6 REAL,
  ...>  pr7 REAL, pr8 REAL, pr9 REAL, pr10 REAL,
  ...>  pr11 REAL, pr12 REAL, pr13 REAL,
  ...>  pr14 REAL, pr15 REAL, pr16 REAL,
  ...>  pr17 REAL, pr18 REAL, pr19 REAL,
  ...>  pr20 REAL, pr21 REAL, pr22 REAL,
  ...>  pr23 REAL, pr24 REAL, pr25 REAL,
  ...>  pr26 REAL, pr27 REAL, pr28 REAL);
sqlite> .separator " "
sqlite> .import data.txt voting
sqlite> select * from voting;
1 "nat" "pro" 0.5 0.333 3.0 0.167 1.0 1.0 6.0 0.111 0.333 0.333 4.0 6.0 
6.0 6.0

8.0 0.143 0.2 8.0 0.2 0.125 0.025 5.0 7.0 0.143 0.111 2.0 0.025 6.0

Where data.txt contains the line you posted.

1 "nat" "pro" 0.500 0.333 3.000 0.167 1.000 1.000 6.000 0.111 0.333 
0.333 4.000 6.000 6.000 6.000 8.000 0.143 0.200 8.000 0.200 0.125 0.025 
5.000 7.000 0.143 0.111 2.000 0.025 6.000


I suspect you may have trailing spaces at the ends of your lines. The 
.import command isn't very smart about things like that. Your separator 
is set to one space, not arbitrary whitespace. If there is another 
separator after the last field, it assumes there is another field there 
(which might be an empty string) as well.
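One way to avoid the phantom extra column is a pre-pass that strips trailing whitespace from every line before handing the file to .import; a sketch in Python (the sample line is invented):

```python
import os
import tempfile

# A sample data line with a trailing space, like the ones that broke .import.
fd, path = tempfile.mkstemp()
os.write(fd, b'1 "nat" "pro" 0.500 0.333 \n')
os.close(fd)

# Rewrite the file with trailing whitespace removed from every line.
with open(path) as f:
    cleaned = [line.rstrip() + "\n" for line in f]
with open(path, "w") as f:
    f.writelines(cleaned)

with open(path) as f:
    fixed = f.read()
os.remove(path)
print(repr(fixed))  # '1 "nat" "pro" 0.500 0.333\n'
```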


HTH
Dennis Cote





Re: [sqlite] Re: Custom functions in GROUP BY?

2006-09-28 Thread Ron Stevens

The problem is that I can't produce a canonical representation of the
entries in my database. Often times some entries are subsets of others, but
considered equal. It's possible for an entry to be a subset of two larger
entries that aren't equal themselves and still be equal to each of the
larger entries.

On 9/28/06, Igor Tandetnik <[EMAIL PROTECTED]> wrote:


Ron Stevens  wrote:
> I have a custom function that compares two text values and returns 1
> if they're equal based on an algorithm that's not strictly text
> comparison and 0 if they don't match. I want to do a query that
> groups all rows that match using my custom function into a single
> group. Is that possible?

What you really need is a custom function that produces a "canonical
representation" for all the strings in the same equivalence class under
your equivalence relation. That is, a function CR(s) such that for every
s1 and s2, s1 ~ s2 if and only if CR(s1) = CR(s2). Here '~' is your
equivalence relation, '=' is the usual byte-wise comparison. E.g. if '~'
is a case-insensitive comparison, then CR(s) could return s converted to
all lowercase (or all uppercase).

Once you have such a function, you can simply GROUP BY CR(fieldName).

Igor Tandetnik







[sqlite] Problem with .import

2006-09-28 Thread Rich Shepard

  When I try to import a data file into an existing table, sqlite's shell
tells me that 31 columns are expected, but it found 32. Now I no longer have
the visual acuity I did when I was a teenager, but no matter how many times
I count the fields, they total 31. Therefore, I don't know why I'm getting
the error message.

  Here's the table:

CREATE TABLE voting (vote_id INTEGER PRIMARY KEY,
 cat TEXT, pos TEXT, pr1 REAL, pr2 REAL,
 pr3 REAL, pr4 REAL, pr5 REAL, pr6 REAL,
 pr7 REAL, pr8 REAL, pr9 REAL, pr10 REAL,
 pr11 REAL, pr12 REAL, pr13 REAL,
 pr14 REAL, pr15 REAL, pr16 REAL,
 pr17 REAL, pr18 REAL, pr19 REAL,
 pr20 REAL, pr21 REAL, pr22 REAL,
 pr23 REAL, pr24 REAL, pr25 REAL,
 pr26 REAL, pr27 REAL, pr28 REAL);

  And here's the first line of data (I set the .separator to " "):

1 "nat" "pro" 0.500 0.333 3.000 0.167 1.000 1.000 6.000 0.111 0.333 0.333
4.000 6.000 6.000 6.000 8.000 0.143 0.200 8.000 0.200 0.125 0.025 5.000
7.000 0.143 0.111 2.000 0.025 6.000

 What the CLI returns is:

sqlite> .import voting.dat voting
voting.dat line 1: expected 31 columns of data but found 32

  Is there a missing end-of-line terminator? What does sqlite see as the
32nd column?

TIA,

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
 Voice: 503-667-4517  Fax: 503-667-8863




[sqlite] [tclsqlite] problem with copy from tcl

2006-09-28 Thread Michel Salvagniac

Hello,
I wish to insert data into a table from a Tcl program.
I use this command:
"db1 copy ignore mvt sbple92.txt |"
No data is inserted, and "db1 errorcode" returns 0.
The file is OK (number of columns, etc.); if I use sqlite3.exe and
".import sbple92.txt mvt", the data is inserted.
SQLite 3.3.7
Tcl/Tk 8.4.13
Windows XP SP2

Thanks,
Michel





Re: [sqlite] How to get version of an SQLIte file

2006-09-28 Thread lordphoenix
On Thu, 28 Sep 2006 19:36:34 +0200,
Kees Nuyt <[EMAIL PROTECTED]> wrote:

> BTW, I wonder why you ask? It is much faster to test that
> yourself than to ask someone else.
Of course I had made some tests, but I wanted to be sure to have all
available information in order to be able to handle all possibilities
in my software.




[sqlite] Re: Custom functions in GROUP BY?

2006-09-28 Thread Igor Tandetnik

Ron Stevens  wrote:

I have a custom function that compares two text values and returns 1
if they're equal based on an algorithm that's not strictly text
comparison and 0 if they don't match. I want to do a query that
groups all rows that match using my custom function into a single
group. Is that possible?


What you really need is a custom function that produces a "canonical 
representation" for all the strings in the same equivalence class under 
your equivalence relation. That is, a function CR(s) such that for every 
s1 and s2, s1 ~ s2 if and only if CR(s1) = CR(s2). Here '~' is your 
equivalence relation, '=' is the usual byte-wise comparison. E.g. if '~' 
is a case-insensitive comparison, then CR(s) could return s converted to 
all lowercase (or all uppercase).


Once you have such a function, you can simply GROUP BY CR(fieldName).
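Igor's canonical-representation approach can be sketched end to end; here with Python's sqlite3 module and the case-insensitive example, where CR(s) is just lowercasing (table and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT)")
con.executemany("INSERT INTO items VALUES (?)",
                [("Apple",), ("APPLE",), ("banana",), ("Banana",)])

# CR(s): canonical representation. For case-insensitive equivalence,
# the canonical form is simply the lowercased string.
con.create_function("CR", 1, lambda s: s.lower())

rows = con.execute(
    "SELECT CR(name), count(*) FROM items GROUP BY CR(name) ORDER BY 1"
).fetchall()
print(rows)  # [('apple', 2), ('banana', 2)]
con.close()
```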

Igor Tandetnik 






[sqlite] Custom functions in GROUP BY?

2006-09-28 Thread Ron Stevens

I have a custom function that compares two text values and returns 1 if
they're equal based on an algorithm that's not strictly text comparison and
0 if they don't match. I want to do a query that groups all rows that match
using my custom function into a single group. Is that possible?


Re: AW: [sqlite] Memory mapped db

2006-09-28 Thread Thomas . L
On Thu, 28 Sep 2006 15:45:54 +0200, you wrote:

Hi Michael

>-Original Message-
>From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, 28 September 2006 15:37
>To: sqlite-users@sqlite.org
>Subject: Re: [sqlite] Memory mapped db

>That's not really the same. I would have to copy the db into ram after
>opening it, and since the db is too big to fit into memory I would have
>to recreate it dependent on my selects. 
>If sqlite maps the db into memory, the operating system manages the mapping;
>sqlite "just" has to move its view over the file.

I'm working with some "small" SQLite databases, with fewer than 150,000
records (extractions of a bigger CS-DB). The records are stored in
several tables. One table, with 4000-5000 records, is bigger; the rest,
up to that limit, are smaller. I'm using SQLite to do some quick jobs with
the data on a local machine. I read and load the whole result set, opened
by a SQL query, as a virtual table into an in-RAM vector list. Once this
is done, I can read record by record, forward, backward, skip in any
direction, alter, append new records, delete records. I can do
whatever I want.

It seems to me that I am doing what you are asking about. But... I don't
know which development system you are using. I am working with VC++ .NET 2003.
If that's wrong for you, excuse the disturbance   ;-)
If it's OK, spend some time at my HP.

Best Regards
Thomas

www.thlu.de




Re: [sqlite] Memory mapped db

2006-09-28 Thread Kees Nuyt
On Thu, 28 Sep 2006 15:32:03 +0200, you wrote:

>
>Has anyone tested an sqlite which memory-maps the db-file into ram? Is this
>an old (maybe bad) idea :-) ? I've looked over the source, and it seems that
>read and write operations go through a single interface, so it may be
>possible to implement it without too much trouble...
>
>Any comments are welcome :-)
>
> Michael

SQLite already does that; it is called caching, and you can set
both the page size and the number of pages in cache with the
PRAGMA statement.
http://www.sqlite.org/pragma.html#modify

Read about SQLite internals for a more detailed explanation.
http://www.sqlite.org/arch.html

The page cache can be shared by multiple database connections:
http://www.sqlite.org/sharedcache.html
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] Updating a whole column at once

2006-09-28 Thread James W. Walker

On 9/27/06, James W. Walker <[EMAIL PROTECTED]> wrote:


 What is the fastest way to change the values in one column in every
 row?  What I thought of was like so:

 BEGIN TRANSACTION;
 UPDATE MyTable SET TheCol=7 WHERE keyCol=1;
 UPDATE MyTable SET TheCol=8 WHERE keyCol=2;
 ... and so on for each row
 COMMIT;



"Trevor Talbot" <[EMAIL PROTECTED]> wrote:


If the column values are algorithmically related to each other, then
you can have sqlite do it in one pass, possibly using custom functions
to determine relationship.  For the simple example you gave, something
like:

UPDATE MyTable SET TheCol = keyCol + 6;


There's no simple formula, but I suppose I could use a custom 
function to do a table lookup.  That's an interesting idea!
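The table-lookup idea can be sketched like this with Python's sqlite3 module (table and column names follow the example above; the mapping is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (keyCol INTEGER PRIMARY KEY, TheCol INTEGER)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)", [(1, 0), (2, 0), (3, 0)])

# The new values, keyed by keyCol; any mapping works here.
new_values = {1: 7, 2: 8, 3: 42}
con.create_function("lookup", 1, lambda k: new_values[k])

# One statement updates every row in a single pass.
con.execute("UPDATE MyTable SET TheCol = lookup(keyCol)")
rows = con.execute("SELECT TheCol FROM MyTable ORDER BY keyCol").fetchall()
print(rows)  # [(7,), (8,), (42,)]
con.close()
```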

--
  James W. Walker, ScriptPerfection Enterprises, Inc.
  




Re: [sqlite] "contains" function

2006-09-28 Thread Noel Frankinet

Dennis Cote a écrit :

Noel Frankinet wrote:


I would like to create a user function to know if a record is 
contained in a rectangle ?

If have looked to sqlite_create_function
I know that my record has for column (xmin,ymin,xmax,ymax)
I would like to use the new function to write something like
select * from table where ??? contains  (how I pass the selecting 
rectangle)


Is it possible ?
How do I do that ?
Thank you


Noel,

You need to create a predicate function that returns a boolean or 
integer (0 or 1) value based on 6 input values. Conceptually it would 
look like this:


   int point_in_rect(px, py, rxl, rxh, ryl, ryh)

Where px and py are the location of a point, rxl and rxh are the low 
and high limits of the rectangle's x dimension, and ryl and ryh are 
the low and high limits of the rectangle's y dimension.


You would use this function like this:

   create table points(x int, y int, data text);

   select * from points where point_in_rect(x, y, 10, 100, 25, 75);

This should give you an idea of how to implement it using the sqlite 
APIs.


    /* implement point in rect predicate function */

    void point_in_rect(sqlite3_context* ctx, int argc, sqlite3_value** argv)
    {
        int p, rl, rh, in = 0;

        /* check x values first */
        p  = sqlite3_value_int(argv[0]);
        rl = sqlite3_value_int(argv[2]);
        rh = sqlite3_value_int(argv[3]);
        if (p >= rl && p <= rh) {
            /* now check y values */
            p  = sqlite3_value_int(argv[1]);
            rl = sqlite3_value_int(argv[4]);
            rh = sqlite3_value_int(argv[5]);
            if (p >= rl && p <= rh)
                in = 1;
        }
        sqlite3_result_int(ctx, in);
    }


    /* register point in rect predicate */

    sqlite3* db;
    rc = sqlite3_create_function(db, "point_in_rect", 6,
                                 SQLITE_UTF8, NULL, point_in_rect, NULL, NULL);


HTH
Dennis Cote








Very good Dennis,
That's what I was looking for; I was having some trouble wrapping my head
around it.


Thanks a lot
Regards

--
Noël Frankinet
Gistek Software SA
http://www.gistek.net





Re: [sqlite] How to get version of an SQLIte file

2006-09-28 Thread Kees Nuyt
On Thu, 28 Sep 2006 09:35:54 +0200, you wrote:

> Le Thu, 28 Sep 2006 00:01:08 +0200,
> Kees Nuyt <[EMAIL PROTECTED]> a écrit :
>
>> The first 15 bytes of the sqlite database file tell you which
>> file format it is. The file format doesn't change that often, so
>> it can be compatible with several software versions.
>
> Thanks for your answer. Do you know the different available
> values ofthis 15 bytes?

No; for the software version I currently use (3.3.7), the
file signature is the C string "SQLite format 3".

BTW, I wonder why you ask? It is much faster to test that
yourself than to ask someone else.
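The signature is easy to inspect: the first 16 bytes of a format-3 database file are the NUL-terminated string "SQLite format 3". A quick check in Python:

```python
import os
import sqlite3
import tempfile

# Create a database file and force at least one page to be written.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

with open(path, "rb") as f:
    header = f.read(16)
os.remove(path)
print(header)  # b'SQLite format 3\x00'
```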
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] locks and attached databases

2006-09-28 Thread drh
Ran <[EMAIL PROTECTED]> wrote:
> 
> I thought to solve this by attaching another database I will create
> temporarily for each query, and to place the temporary tables in this
> attached temporary database. This way, *I think*, my main database will be
> locked shared (I will only select from it) and the temporary database will
> be locked exclusively. So I will be able to run such complex queries in
> parallel.
> 

Correct.

By splitting up each table of a database into a separate database
file, you can achieve table-level locking in SQLite.
--
D. Richard Hipp   <[EMAIL PROTECTED]>
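The pattern Ran describes and drh confirms can be sketched with Python's sqlite3 module (file names are invented): temporary result tables go in a separately attached scratch file, so exclusive write locks land there rather than on the main database.

```python
import os
import sqlite3
import tempfile

# Two separate database files: the main data and a scratch area.
fd, main_path = tempfile.mkstemp(suffix=".db"); os.close(fd)
fd, scratch_path = tempfile.mkstemp(suffix=".db"); os.close(fd)

# Autocommit mode keeps the example simple.
con = sqlite3.connect(main_path, isolation_level=None)
con.execute("CREATE TABLE data (v INTEGER)")
con.execute("INSERT INTO data VALUES (1)")

# Intermediate results go into the attached scratch file, so the
# exclusive write lock lands on the scratch db, not the main one.
con.execute("ATTACH DATABASE ? AS scratch", (scratch_path,))
con.execute("CREATE TABLE scratch.tmp_results AS SELECT v * 10 AS v FROM data")
(result,) = con.execute("SELECT v FROM scratch.tmp_results").fetchone()
print(result)  # 10

con.close()
os.remove(main_path)
os.remove(scratch_path)
```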





[sqlite] locks and attached databases

2006-09-28 Thread Ran

Hi all,

I am not sure of the way attached database behaves in respect to locks and
could not find an answer in the documentation about it.

I have a very complex query which I simplify by using temporary tables.
However, this means that every time this complex query is executed, the
database is exclusively locked (because I create temporary tables and insert
the intermediate results into them).

This means that when data is read from my database I get exclusive lock, so
no two readers can read in parallel.

I thought to solve this by attaching another database I will create
temporarily for each query, and to place the temporary tables in this
attached temporary database. This way, *I think*, my main database will be
locked shared (I will only select from it) and the temporary database will
be locked exclusively. So I will be able to run such complex queries in
parallel.

Am I correct in my assumption that the attached database will be the only
one to be locked? Or maybe when database is attached the locks of the two
databases are common?

Thanks in advance,

Ran


Re: [sqlite] "contains" function

2006-09-28 Thread Dennis Cote

Noel Frankinet wrote:


I would like to create a user function to know if a record is 
contained in a rectangle ?

If have looked to sqlite_create_function
I know that my record has for column (xmin,ymin,xmax,ymax)
I would like to use the new function to write something like
select * from table where ??? contains  (how I pass the selecting 
rectangle)


Is it possible ?
How do I do that ?
Thank you


Noel,

You need to create a predicate function that returns a boolean or 
integer (0 or 1) value based on 6 input values. Conceptually it would 
look like this:


   int point_in_rect(px, py, rxl, rxh, ryl, ryh)

Where px and py are the location of a point, rxl and rxh are the low and 
high limits of the rectangle's x dimension, and ryl and ryh are the low 
and high limits of the rectangle's y dimension.


You would use this function like this:

   create table points(x int, y int, data text);

   select * from points where point_in_rect(x, y, 10, 100, 25, 75);

This should give you an idea of how to implement it using the sqlite APIs.

    /* implement point in rect predicate function */

    void point_in_rect(sqlite3_context* ctx, int argc, sqlite3_value** argv)
    {
        int p, rl, rh, in = 0;

        /* check x values first */
        p  = sqlite3_value_int(argv[0]);
        rl = sqlite3_value_int(argv[2]);
        rh = sqlite3_value_int(argv[3]);
        if (p >= rl && p <= rh) {
            /* now check y values */
            p  = sqlite3_value_int(argv[1]);
            rl = sqlite3_value_int(argv[4]);
            rh = sqlite3_value_int(argv[5]);
            if (p >= rl && p <= rh)
                in = 1;
        }
        sqlite3_result_int(ctx, in);
    }


    /* register point in rect predicate */

    sqlite3* db;
    rc = sqlite3_create_function(db, "point_in_rect", 6,
                                 SQLITE_UTF8, NULL, point_in_rect, NULL, NULL);


HTH
Dennis Cote





[sqlite] Re: Multiple Updates

2006-09-28 Thread Igor Tandetnik

Chris Gurtler <[EMAIL PROTECTED]> wrote:

Is it possible to do multiple updates of blobs using the bind
variables, I was doing them 1 at a time but it was a little slow.

For example :-

rc = sqlite3_prepare(objects_db, "UPDATE table SET proprietary_data =
? WHERE device_id = ? and instance = ?", -1, &pStmt, 0);

for (i= 0; i <10;i++)
{
   sqlite3_bind_blob(pStmt, 1, proprietary_data, proprietary_data_len,
SQLITE_STATIC);
   sqlite3_bind_int(pStmt, 2, object->device_id);
   sqlite3_bind_int(pStmt, 3, object->objectIdentifier.instance);

   rc = sqlite3_step(pStmt);
}
   if (sqlite3_finalize(pStmt))


You seem to run the exact same query 10 times, binding the exact same 
data each time. What is the point of the exercise?


Also, you need to call sqlite3_reset before you rebind parameters and 
step again. I bet your calls fail in all iterations of the loop except 
the first.


Igor Tandetnik 






[sqlite] Re: "contains" function

2006-09-28 Thread Igor Tandetnik

Noel Frankinet <[EMAIL PROTECTED]> wrote:

I would like to create a user function to know if a record is
contained in a rectangle ?
If have looked to sqlite_create_function
I know that my record has for column (xmin,ymin,xmax,ymax)
I would like to use the new function to write something like
select * from table where ??? contains  (how I pass the selecting
rectangle)

Is it possible ?


Not without modifying SQLite's parser. Is it really that bad to have to 
write "where contains(X, Y)" instead?


Igor Tandetnik 






Re: [sqlite] Memory mapped db

2006-09-28 Thread John Stanton

Michael Wohlwend wrote:

Has anyone tested an sqlite which memory-maps the db-file into ram? Is this
an old (maybe bad) idea :-) ? I've looked over the source and it seems that
read and write operations go through a single interface, so it may be
possible to implement it without too much trouble...

Any comments are welcome :-)

 Michael

It is not a bad idea, but would impose some limitations.  In general 
memory mapping removes one layer of buffer shadowing and is beneficial.





AW: [sqlite] Memory mapped db

2006-09-28 Thread Michael Wohlwend


-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 28 September 2006 15:37
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Memory mapped db


>use the database named :memory:
>for a ram database. In a lot of cases it will be cached by
>the operating system so it ends up being that way anyway!

That's not really the same. I would have to copy the db into ram after
opening it, and since the db is too big to fit into memory I would have
to recreate it depending on my selects. 
If sqlite maps the db into memory, the operating system manages the mapping;
sqlite "just" has to move its view over the file.

 Michael




Re: [sqlite] Memory mapped db

2006-09-28 Thread Jay Sprenkle

use the database named :memory:
for a ram database. In a lot of cases it will be cached by
the operating system so it ends up being that way anyway!

On 9/28/06, Michael Wohlwend <[EMAIL PROTECTED]> wrote:


Has anyone tested an sqlite which memory-maps the db-file into ram? Is this
an old (maybe bad) idea :-) ? I've looked over the source and it seems that
read and write operations go through a single interface, so it may be
possible to implement it without too much trouble...



--
SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com




[sqlite] Memory mapped db

2006-09-28 Thread Michael Wohlwend

Has anyone tested an sqlite which memory-maps the db-file into ram? Is this
an old (maybe bad) idea :-) ? I've looked over the source and it seems that
read and write operations go through a single interface, so it may be
possible to implement it without too much trouble...

Any comments are welcome :-)

 Michael


Re: [sqlite] Multiple Updates

2006-09-28 Thread Trevor Talbot

On 9/28/06, Chris Gurtler <[EMAIL PROTECTED]> wrote:


I wish I could use transactions, but because I'm updating a blob I need
to do it that way. Ill check out the PRAGMA option to see if that helps.


Why does what you showed above mean you can't use transactions?

Also, that PRAGMA will mean a corrupted database if there is a crash
or power failure while you're performing operations on it, so make
sure that's acceptable.




Re: [sqlite] Multiple Updates

2006-09-28 Thread Chris Gurtler

Thanks,

I wish I could use transactions, but because I'm updating a blob I need 
to do it that way. Ill check out the PRAGMA option to see if that helps.


Regards,

Chris


He Shiming wrote:


Hi All,

Is it possible to do multiple updates of blobs using the bind 
variables, I was doing them 1 at a time but it was a little slow.


For example :-

rc = sqlite3_prepare(objects_db, "UPDATE table SET proprietary_data = 
? WHERE device_id = ? and instance = ?", -1, &pStmt, 0);


for (i = 0; i < 10; i++)
{
   sqlite3_bind_blob(pStmt, 1, proprietary_data, 
proprietary_data_len, SQLITE_STATIC);

   sqlite3_bind_int(pStmt, 2, object->device_id);
   sqlite3_bind_int(pStmt, 3, object->objectIdentifier.instance);

   rc = sqlite3_step(pStmt);
}
   if (sqlite3_finalize(pStmt))


Regards,

Chris



There is a pretty simple answer to all these kinds of questions. Use 
transactions. You can see the details about transactions and 
performance here: 
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations . In my 
experience, performance does improve a lot when a transaction is used for 
such an update or insert iteration.


If you don't know it yet, another idea is to use PRAGMA synchronous = 
OFF; . This way, sqlite will rely on operating system disk cache, 
which is an even greater improvement in performance.


Best regards,
He Shiming










Re: [sqlite] Multiple Updates

2006-09-28 Thread He Shiming

Hi All,

Is it possible to do multiple updates of blobs using the bind variables, I 
was doing them 1 at a time but it was a little slow.


For example :-

rc = sqlite3_prepare(objects_db, "UPDATE table SET proprietary_data = ? 
WHERE device_id = ? and instance = ?", -1, &pStmt, 0);


for (i = 0; i < 10; i++)
{
   sqlite3_bind_blob(pStmt, 1, proprietary_data, proprietary_data_len, 
SQLITE_STATIC);

   sqlite3_bind_int(pStmt, 2, object->device_id);
   sqlite3_bind_int(pStmt, 3, object->objectIdentifier.instance);

   rc = sqlite3_step(pStmt);
}
   if (sqlite3_finalize(pStmt))


Regards,

Chris



There is a pretty simple answer to all these kinds of questions. Use 
transactions. You can see the details about transactions and performance 
here: http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations . In my 
experience, performance does improve a lot when a transaction is used for 
such an update or insert iteration.


If you don't know it yet, another idea is to use PRAGMA synchronous = OFF; . 
This way, sqlite will rely on operating system disk cache, which is an even 
greater improvement in performance.


Best regards,
He Shiming 






AW: AW: [sqlite] Performance question

2006-09-28 Thread Michael Wohlwend


-Original Message-
From: Martin Pfeifle [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 26 September 2006 13:35
To: sqlite-users@sqlite.org
Subject: AW: AW: [sqlite] Performance question


>Hi Michael,
>could you please (re)post the exact create inex statements +primary key you
used. For speeding up 
>your query, you need an index on x only but not on id,x. Best Martin

The table looks like:
(blobsize between 100 and 8000 bytes, 25 rows in the table)

Create table t1 (x integer, y integer, flag integer, data blob)
Create index idx on t1 (x,y,flag)
(it doesn't matter whether it is included in the index)

Takes 5ms on my pda, 100 of those need 500ms:
Select data from t1 where x=v1 and y=v1 and flag=f  
Takes 7sec(!) on pda for a rectangle with 60 blobs:
Select data from t1 where (x between xlow and xhigh) and (y between ylow and
yhigh) and flag=f
Lightning fast:
Adding a column xy set to (x << 16 | y) and replacing idx with an index on xy:
Select x,y,data from t1 where xy in (xy1,xy2,...)
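The packed-key trick can be sketched in plain C (assuming both coordinates fit in 16 bits; the helper names are made up for illustration):

```c
/* Pack two 16-bit coordinates into one integer key, mirroring the
 * xy = (x << 16 | y) column above, plus helpers to recover them. */
int pack_xy(int x, int y)
{
    return (x << 16) | (y & 0xFFFF);
}

int unpack_x(int xy) { return (xy >> 16) & 0xFFFF; }
int unpack_y(int xy) { return xy & 0xFFFF; }
```

The query then becomes a set of exact key lookups ("where xy in (...)") instead of two range scans, which is why it can use the index so much more effectively.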

Cheers,
 Michael






Re: [sqlite] How to get version of an SQLIte file

2006-09-28 Thread lordphoenix
On Thu, 28 Sep 2006 00:01:08 +0200,
Kees Nuyt <[EMAIL PROTECTED]> wrote:

> The first 15 bytes of the sqlite database file tell you which
> file format it is. The file format doesn't change that often, so
> it can be compatible with several software versions.

Thanks for your answer. Do you know the different possible values of
these 15 bytes?
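For what it's worth, a version-3 database file should begin with the 15-character ASCII string "SQLite format 3" followed by a NUL byte (16 bytes in total), so a minimal check might look like this (the function name is made up for illustration):

```c
#include <string.h>

/* Return 1 if buf (at least the first 16 bytes of a file) looks like
 * an SQLite 3 database header, 0 otherwise.  The string literal below
 * is 16 bytes long including its terminating NUL. */
int is_sqlite3_header(const unsigned char* buf)
{
    return memcmp(buf, "SQLite format 3", 16) == 0;
}
```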

