Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Jay Sprenkle

On 12/19/06, Laszlo Elteto <[EMAIL PROTECTED]> wrote:

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (i.e. all previous transactions committed
or none). That way it's still transactional (i.e. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.


Have you considered a data warehouse sort of setup?
Write your data to a small cache database that's later uploaded to the larger
'big' database.
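
A minimal sketch of that idea using ATTACH, assuming a local cache file
"cache.db" and a main file "big.db" that both contain a "requests" table
(all names here are illustrative, not from the original post):

    /* Periodically move accumulated rows from the cache database into
    ** the big database in one transaction.  Assumes this connection
    ** was opened on "cache.db". */
    #include <stdio.h>
    #include <sqlite3.h>

    static int transfer(sqlite3 *cache){
      char *zErr = 0;
      int rc = sqlite3_exec(cache,
        "ATTACH 'big.db' AS big;"
        "BEGIN;"
        "INSERT INTO big.requests SELECT * FROM main.requests;"
        "DELETE FROM main.requests;"
        "COMMIT;"
        "DETACH big;",
        0, 0, &zErr);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "transfer failed: %s\n", zErr);
        sqlite3_free(zErr);
      }
      return rc;
    }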

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com




[sqlite] Time zone conversion

2006-12-19 Thread Lloyd
Hi,
  I would like to implement a time zone independent function. I get the
time zone information from the user and plan to convert the datetime to
the specified time zone. To make it clear, here is an example.

Here StartTime is stored as an integer in the database:

select datetime(StartTime,'unixepoch') from mytable;

will give the universal time.

To get the local time, what I do is: let the time zone given by the user
be +0530.

Convert +0530 to seconds: ((5*60+30)*60) = 19800, then

select datetime(StartTime+19800,'unixepoch') from mytable;

But it is not working as I expected (no result is shown).

What could be the reason? Is there any better way for me to achieve the
same result?
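
As a side note, SQLite's date functions also accept relative modifiers, so
the offset does not have to be hand-added in seconds. A minimal sketch
(untested against 3.3.8; the file name "test.db" and the fixed +330 minute
offset for +0530 are illustrative):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print StartTime shifted to a fixed +0530 offset.  The table name
    ** mytable follows the post; the callback prints column 0. */
    static int printRow(void *unused, int nCol, char **azVal, char **azCol){
      printf("%s\n", azVal[0] ? azVal[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;
      sqlite3_exec(db,
        "SELECT datetime(StartTime, 'unixepoch', '+330 minutes')"
        " FROM mytable;",
        printRow, 0, 0);
      sqlite3_close(db);
      return 0;
    }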

Thanks and Regards,
  Lloyd






Re: [sqlite] Need a wince test

2006-12-19 Thread Brodie Thiesfield
There are a few problems with your patch.

+# ifdef _WIN32_WCE
+static HANDLE loadLibraryUtf8(const char *z){
+  WCHAR zWide[MAX_PATH];
+  MultiByteToWideChar(CP_ACP,0,z,-1,zWide,MAX_PATH);
+  return LoadLibrary(zWide);
+}
+#   define SQLITE_OPEN_LIBRARY(A)  loadLibraryUtf8(A)
+#   define SQLITE_FIND_SYMBOL(A,B) GetProcAddressA(A,B)
+# else
+#   define SQLITE_OPEN_LIBRARY(A)  LoadLibrary(A)
+#   define SQLITE_FIND_SYMBOL(A,B) GetProcAddress(A,B)
+# endif

The problem in question is not a Windows CE-only one. It occurs with any
client that builds in Unicode mode. Therefore you need to test for the
_UNICODE define instead of _WINCE. Windows CE compilers will also set
_UNICODE (I believe - Robert?).

CP_ACP is not UTF-8. Use either CP_UTF8 or your own UTF-8 conversion
functions from the OS library. Note also that MultiByteToWideChar may
fail or return ERROR_NO_UNICODE_TRANSLATION (1113L) for UTF-8 conversions.

There is no GetProcAddressA. You need to use GetProcAddress.

The patch will need to be something like the following, which I have
tested and which builds with no errors or warnings in _UNICODE mode. Someone
still needs to test it on Windows CE to be sure.

 # include <windows.h>
 # define SQLITE_LIBRARY_TYPE HANDLE
-# define SQLITE_OPEN_LIBRARY(A)  LoadLibrary(A)
+# ifdef _UNICODE
+static HANDLE loadLibraryUtf8(const char *z){
+  WCHAR zWide[MAX_PATH];
+  DWORD dwLen = MultiByteToWideChar(CP_UTF8,0,z,-1,zWide,MAX_PATH);
+  if (dwLen == 0 || dwLen > MAX_PATH) return NULL;
+  return LoadLibraryW(zWide);
+}
+#   define SQLITE_OPEN_LIBRARY(A)  loadLibraryUtf8(A)
+# else
+#   define SQLITE_OPEN_LIBRARY(A)  LoadLibrary(A)
+# endif
 # define SQLITE_FIND_SYMBOL(A,B) GetProcAddress(A,B)
 # define SQLITE_CLOSE_LIBRARY(A) FreeLibrary(A)

Regards,
Brodie

[EMAIL PROTECTED] wrote:
> Can somebody with access to wince please test patch [3537]
> for me and let me know if it works to fix ticket #2023.
> 
>   http://www.sqlite.org/cvstrac/chngview?cn=3537
>   http://www.sqlite.org/cvstrac/tktview?tn=2023
> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>




[sqlite] Need a wince test

2006-12-19 Thread drh
Can somebody with access to wince please test patch [3537]
for me and let me know if it works to fix ticket #2023.

  http://www.sqlite.org/cvstrac/chngview?cn=3537
  http://www.sqlite.org/cvstrac/tktview?tn=2023

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread drh
Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I DO need Durability, so I don't want to drop that. In fact, I need and want
> normal transactional updates - just not immediately flushed to disk. 

If the information is not flushed to disk, how can it be durable?

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] sqlite internationalization

2006-12-19 Thread Dennis Cote

Rashmi Hiremath wrote:

Can anyone send me the code of C++ API for
SQLite3.3.8.

Check out http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers, in
particular the section on C++.

I would suggest CppSQLite as it provides full access to SQLite with a
very thin interface.


HTH
Dennis Cote




[sqlite] improve performance

2006-12-19 Thread Giuseppe Cannella
I have a big database (8 GB) with a few tables, all necessary indexes, and
many read-only accesses.
I have 4 GB of RAM and a fast disk/network.

Updates and deletes happen once a day and touch only a few records, so I can
stop access during that time.

I am thinking of modifying these parameters; which values should I use?

PRAGMA default_cache_size = ??
PRAGMA case_sensitive_like = 1
PRAGMA page_size = ???
PRAGMA temp_store = MEMORY ???

I don't think loading the tables into memory can improve performance,
because the searches operate only through the indexes.
Any other ideas?

Thanks









RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
Shared cache won't help as my problem is the file flush operations at each
COMMIT, not the reading part. My original test was done in a single-threaded
program and it clearly shows the timing issue is with FlushFileBuffers.

-Original Message-
From: Kees Nuyt [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 12:08 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

On Tue, 19 Dec 2006 13:52:19 -0500, you wrote:

>I've started to use SQLite and it works fine - except for performance. 
>The application gets requests (possibly from many users) and does a few 
>transactions on the database. (eg. create a context for the request; 
>later it may be updated and when the user releases the record is 
>deleted.)
>
>I tried all three sync methods and with FULL, NORMAL and OFF. For 100 
>transactions (request / check / release) I see 800 open/close (in 
>1200msec), 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 
>(NORMAL - 12,000 msec) or 0 (OFF) file flush operations. The latter is 
>really slow but I understand it. (Have to wait for the actual disk
operation to complete).
>
>For this particular application it would NOT be a problem to lose like
>2-5 seconds of transactions. I wonder if it is possible to tell SQLite
>to "hold off" the transactions, ACCUMULATE them until a certain time
>(or if cache memory is exhausted - which is not yet the case as we have
>a modest database), then make a BIG COMMIT (i.e. all previous
>transactions committed or none). That way it's still transactional (i.e.
>no corrupted database - I really don't want to use sync = OFF) but the
>I/O performance wouldn't slow down serving requests.
>
>Has anybody done that already? If yes, where can I find such a modified
>SQLite source?
>I saw that all file I/O related calls are in two files: pager.c and 
>vdbeaux.c so they are already well isolated and relatively easy to 
>understand.
>
>Any help for such "delayed" transaction method in SQLite?
>(I tried to find any relevant message in the email archives but came up
>empty.)
>
>Thanks,
>
>Laszlo Elteto
>CISSP, System Architect
>SafeNet, Inc.

I wonder if shared cache would help you?
http://www.sqlite.org/sharedcache.html

In general, in a high concurrency environment sqlite might not be a suitable
solution.
http://www.sqlite.org/whentouse.html
--
  (  Kees Nuyt
  )
c[_]







RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
The problem with that approach is that SQLite doesn't support nested
transactions. And I do have transactions which sometimes have to be rolled
back. Oh and there are multiple threads involved. I don't think it's really
good to start a transaction in one thread, add a few more from several other
threads, then finish it from yet another thread.

Laszlo Elteto
SafeNet, Inc. 

-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 11:30 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

Laszlo Elteto wrote:
> For this particular application it would NOT be a problem to lose like
> 2-5 seconds of transactions. I wonder if it is possible to tell SQLite
> to "hold off" the transactions, ACCUMULATE them until a certain time
> (or if cache memory is exhausted - which is not yet the case as we
> have a modest database), then make a BIG COMMIT (i.e. all previous
> transactions committed or none). That way it's still transactional
> (i.e. no corrupted database - I really don't want to use sync = OFF)
> but the I/O performance wouldn't slow down serving requests.
>
>   
Laszlo,

You should be able to do this yourself without changing the SQLite source
code.

You can create two functions to wrap the begin and end transaction
operations. You can have your begin function check for an existing
transaction and only open a new one if there isn't one open yet. It records
the start time for the transaction. The close function counts down until all
open transactions are closed. If the current time is more than your limit
after the start time, it actually closes the transaction and flushes the
changes to disk. Pseudo code is below:

begin_transaction
    if transaction open
        increment open count
    else
        open transaction
        set transaction open to true
        set transaction start time
        set open count to 1

end_transaction
    decrement open count
    if open count = 0
        if now - transaction start time > 5 seconds
            close transaction
            set transaction open to false

HTH
Dennis Cote







RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Anderson, James H \(IT\)
Thanks! 

-Original Message-
From: jphillip [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 4:24 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Is there a method for doing bulk insertion?


Issue .help and look for .separator.

Example: for a CSV file with colon (:) separators, issue

  .separator ':'

Alternatively, use an editor to change the existing separator character(s)
in the file to the character you want to use.



On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:

> So I can assume that there's no way to use a delimiter other than a
> comma to import a CSV file? 
> 
> -Original Message-
> From: jphillip [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, December 19, 2006 3:47 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Is there a method for doing bulk insertion?
> 
> 
> 
> .help pretty well sums it up.
> 
> On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:
> 
> > How do I find doc on .import?
> > 
> > Is there a way to specify the delimiter for the CSV file?
> > 
> > Thanks,
> > 
> > jim 
> > 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> > Sent: Monday, December 18, 2006 9:12 AM
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] Is there a method for doing bulk insertion?
> > 
> > "Anderson, James H \(IT\)" <[EMAIL PROTECTED]> wrote:
> > > or do I have to creation a gazillion insert statements?
> > > 
> > 
> > The sqlite3 command-line shell has a ".import" command which
> > can be used to read CSV data.  But the way this works internally
> > is that the command-line shell constructs an INSERT statement,
> > parses each line of the CSV file and binds the values to that
> > INSERT statement, then runs the INSERT statement for each line.
> > So at the end of the day, a bunch of INSERT statements are still
> > getting evaluated - you just don't see them.
> > 
> > On my workstation, an INSERT statement can be parsed, compiled,
> > and evaluated in 25-40 microseconds.  That's about 30,000 rows
> > per second.  How much performance do you need?
> > 
> > --
> > D. Richard Hipp  <[EMAIL PROTECTED]>
> > 

You have to be BRAVE to grow OLD.
There are no old CARELESS pilots or electricians.









RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
I DO need Durability, so I don't want to drop that. In fact, I need and want
normal transactional updates - just not immediately flushed to disk. I've
looked at the source and know there is no simple compile option for what I
want. My question was more like: has anybody already done this? (So I don't have
to do the work myself IF it's already been done.)

Laszlo Elteto
SafeNet, Inc.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 11:15 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I've started to use SQLite and it works fine - except for performance. 
> The application gets requests (possibly from many users) and does a 
> few transactions on the database. (eg. create a context for the 
> request; later it may be updated and when the user releases the record 
> is deleted.)
> 
> I tried all three sync methods and with FULL, NORMAL and OFF. For 100 
> transactions (request / check / release) I see 800 open/close (in 
> 1200msec), 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 
> 400 (NORMAL - 12,000 msec) or 0 (OFF) file flush operations. The 
> latter is really slow but I understand it. (Have to wait for the actual
disk operation to complete).
> 
> For this particular application it would NOT be a problem to lose like
> 2-5 seconds of transactions. I wonder if it is possible to tell SQLite
> to "hold off" the transactions, ACCUMULATE them until a certain time
> (or if cache memory is exhausted - which is not yet the case as we
> have a modest database), then make a BIG COMMIT (i.e. all previous
> transactions committed or none). That way it's still transactional
> (i.e. no corrupted database - I really don't want to use sync = OFF)
> but the I/O performance wouldn't slow down serving requests.
> 

It takes at least two complete rotations of the disk platter to do an atomic
and durable commit.  On a 7200 RPM disk, that means 60 transactions per
second is your speed of light.

Your question boils down to this:  Can you speed up transactions by dropping
the durable property - the D in ACID.  Yes you can.  Actually, most
client/server database engines already do this for you without telling you.
Very few client/server databases are really ACID - they are usually only ACI
when confronted with a power failure.

There is no simple pragma setting or anything like that to drop durability
from SQLite simply because there is no server process hanging around to make
sure that transactions get committed atomically in the background.  You have
to do the background commits yourself.  There are various ways to do this.

One approach would be to write your changes to one or more TEMP tables.
Writes to TEMP tables are always done with synchronous=OFF since TEMP tables
do not need to survive a power loss.  So writes to TEMP tables are fast.
Then have your application periodically transfer the information in TEMP
tables over to the main database.

A second approach would be to overload the OS drivers on the backend of
SQLite to support asynchronous I/O.  Mozilla does this in Firefox in order
to boost performance on NFS.
There is well-commented sample code showing how to do this in the SQLite
source file "test_async.c".  That sample code does not combine multiple
transactions, but you could probably tweak it to make that happen.
--
D. Richard Hipp  <[EMAIL PROTECTED]>








RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread jphillip

Issue .help and look for .separator.

Example: for a CSV file with colon (:) separators, issue

  .separator ':'

Alternatively, use an editor to change the existing separator character(s)
in the file to the character you want to use.



On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:

> So I can assume that there's no way to use a delimiter other than a
> comma to import a CSV file? 
> 
> -Original Message-
> From: jphillip [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, December 19, 2006 3:47 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Is there a method for doing bulk insertion?
> 
> 
> 
> .help pretty well sums it up.
> 
> On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:
> 
> > How do I find doc on .import?
> > 
> > Is there a way to specify the delimiter for the CSV file?
> > 
> > Thanks,
> > 
> > jim 
> > 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> > Sent: Monday, December 18, 2006 9:12 AM
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] Is there a method for doing bulk insertion?
> > 
> > "Anderson, James H \(IT\)" <[EMAIL PROTECTED]> wrote:
> > > or do I have to creation a gazillion insert statements?
> > > 
> > 
> > The sqlite3 command-line shell has a ".import" command which
> > can be used to read CSV data.  But the way this works internally
> > is that the command-line shell constructs an INSERT statement,
> > parses each line of the CSV file and binds the values to that
> > INSERT statement, then runs the INSERT statement for each line.
> > So at the end of the day, a bunch of INSERT statements are still
> > getting evaluated - you just don't see them.
> > 
> > On my workstation, an INSERT statement can be parsed, compiled,
> > and evaluated in 25-40 microseconds.  That's about 30,000 rows
> > per second.  How much performance do you need?
> > 
> > --
> > D. Richard Hipp  <[EMAIL PROTECTED]>
> > 

You have to be BRAVE to grow OLD.
There are no old CARELESS pilots or electricians.





Re: [sqlite] sqlite internationalization

2006-12-19 Thread Rashmi Hiremath
Can anyone send me the code of a C++ API for
SQLite 3.3.8?

Thanks
Rashmi
--- Cory Nelson <[EMAIL PROTECTED]> wrote:

> On 12/15/06, Rashmi Hiremath
> <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> > I would like to know whether SQLite 3.3.8 supports
> > internationalization.
> 
> It will store UTF-8 or UTF-16, but that's it.
> 
> > Thanks
> > Rashmi
> >
> 
> 
> -- 
> Cory Nelson
> http://www.int64.org
> 






RE: [sqlite] Mathematical "power" operator?

2006-12-19 Thread Jesús López
Hi Jeff,

I'm not a Tcl user. I use C# and Robert Simpson's SQLite ADO.NET 2.0
provider to access SQLite databases. 

SQLite allows you to define both scalar and aggregate user defined
functions. I think the way to approach your issue is to define a scalar user
defined function. I don't know how to do it with Tcl, I just know how to do
it with c#.

Regards:

Jesús López


-Original Message-
From: Jeff Godfrey [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 21:45
To: sqlite-users@sqlite.org
Subject: [sqlite] Mathematical "power" operator?

Hi All,

New to SQLite, so bear with me... ;^)

I'm trying to migrate an MS-Access database over to SQLite.  I have a VIEW
created from a SELECT statement that uses the mathematical "power" operator
("^") for both "square root" and "squared" operations.  It seems that SQLite
doesn't support the "^" operator, so I'm trying to find the best way around
that.  I'm using SQLite from Tcl.  I know that I can create my own Tcl-based
replacements for these functions and register them with SQLite via the
"function" method, though that still seems to leave an issue.

I am using a 3rd party tool (SQLite Expert Personal) to create, manipulate,
and generally experiment with my SQLite database.  Obviously, if I create a
Tcl-based function replacement, I can use it from within my code, but it
won't be recognized when I open up the VIEW query that uses it via the 3rd
party tool.

Also, I can fairly easily change the query to get by without the need for
the "squared" function, though the "square root" function would seem to be a
bit more tricky to "code around".  So, a few questions:

1.  Thoughts on my specific issue with the missing mathematical operator?
2.  More generally, do people who "add" functions to SQLite just not use 3rd
party tools to work with their data?

Thanks for any insight...

Jeff Godfrey





RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Anderson, James H \(IT\)
Thanks! 

-Original Message-
From: Jeff Godfrey [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 4:01 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Is there a method for doing bulk insertion?

Take a look at the ".separator" command.  It seems to be what you 
need...

Jeff


- Original Message - 
From: "Anderson, James H (IT)" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, December 19, 2006 2:52 PM
Subject: RE: [sqlite] Is there a method for doing bulk insertion?


So I can assume that there's no way to use a delimiter other than a
comma to import a CSV file?









Re: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Jeff Godfrey
Take a look at the ".separator" command.  It seems to be what you 
need...


Jeff


- Original Message - 
From: "Anderson, James H (IT)" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, December 19, 2006 2:52 PM
Subject: RE: [sqlite] Is there a method for doing bulk insertion?


So I can assume that there's no way to use a delimiter other than a
comma to import a CSV file?





RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Anderson, James H \(IT\)
So I can assume that there's no way to use a delimiter other than a
comma to import a CSV file? 

-Original Message-
From: jphillip [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 3:47 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Is there a method for doing bulk insertion?



.help pretty well sums it up.

On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:

> How do I find doc on .import?
> 
> Is there a way to specify the delimiter for the CSV file?
> 
> Thanks,
> 
> jim 
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 18, 2006 9:12 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Is there a method for doing bulk insertion?
> 
> "Anderson, James H \(IT\)" <[EMAIL PROTECTED]> wrote:
> > or do I have to creation a gazillion insert statements?
> > 
> 
> The sqlite3 command-line shell has a ".import" command which
> can be used to read CSV data.  But the way this works internally
> is that the command-line shell constructs an INSERT statement,
> parses each line of the CSV file and binds the values to that
> INSERT statement, then runs the INSERT statement for each line.
> So at the end of the day, a bunch of INSERT statements are still
> getting evaluated - you just don't see them.
> 
> On my workstation, an INSERT statement can be parsed, compiled,
> and evaluated in 25-40 microseconds.  That's about 30,000 rows
> per second.  How much performance do you need?
> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
> 
> 

You have to be BRAVE to grow OLD.
There are no old CARELESS pilots or electricians.









RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread jphillip


.help pretty well sums it up.

On Tue, 19 Dec 2006, Anderson, James H (IT) wrote:

> How do I find doc on .import?
> 
> Is there a way to specify the delimiter for the CSV file?
> 
> Thanks,
> 
> jim 
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 18, 2006 9:12 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Is there a method for doing bulk insertion?
> 
> "Anderson, James H \(IT\)" <[EMAIL PROTECTED]> wrote:
> > or do I have to creation a gazillion insert statements?
> > 
> 
> The sqlite3 command-line shell has a ".import" command which
> can be used to read CSV data.  But the way this works internally
> is that the command-line shell constructs an INSERT statement,
> parses each line of the CSV file and binds the values to that
> INSERT statement, then runs the INSERT statement for each line.
> So at the end of the day, a bunch of INSERT statements are still
> getting evaluated - you just don't see them.
> 
> On my workstation, an INSERT statement can be parsed, compiled,
> and evaluated in 25-40 microseconds.  That's about 30,000 rows
> per second.  How much performance do you need?
> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
> 
> 

You have to be BRAVE to grow OLD.
There are no old CARELESS pilots or electricians.





[sqlite] Mathematical "power" operator?

2006-12-19 Thread Jeff Godfrey
Hi All,

New to SQLite, so bear with me... ;^)

I'm trying to migrate an MS-Access database over to SQLite.  I have a VIEW 
created from a SELECT statement that uses the mathematical "power" operator 
("^") for both "square root" and "squared" operations.  It seems that SQLite 
doesn't support the "^" operator, so I'm trying to find the best way around 
that.  I'm using SQLite from Tcl.  I know that I can create my own Tcl-based 
replacements for these functions and register them with SQLite via the 
"function" method, though that still seems to leave an issue.

I am using a 3rd party tool (SQLite Expert Personal) to create, manipulate, and 
generally experiment with my SQLite database.  Obviously, if I create a 
Tcl-based function replacement, I can use it from within my code, but it won't 
be recognized when I open up the VIEW query that uses it via the 3rd party tool.

Also, I can fairly easily change the query to get by without the need for the 
"squared" function, though the "square root" function would seem to be a bit 
more tricky to "code around".  So, a few questions:

1.  Thoughts on my specific issue with the missing mathematical operator?
2.  More generally, do people who "add" functions to SQLite just not use 3rd 
party tools to work with their data?

Thanks for any insight...

Jeff Godfrey

[sqlite] Question about SQLite for Symbian

2006-12-19 Thread panslaw
Hi,
I know that SQLite is written in C and I'm sure it's possible to port
it to Symbian OS. But I wonder: is there already a ready-to-use SQLite
library for Symbian OS? It's really important for me to get information
about SQLite and Symbian - thanks in advance for any answers.

RGDS,
slaw





RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Anderson, James H \(IT\)
How do I find doc on .import?

Is there a way to specify the delimiter for the CSV file?

Thanks,

jim 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 18, 2006 9:12 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Is there a method for doing bulk insertion?

"Anderson, James H \(IT\)" <[EMAIL PROTECTED]> wrote:
> or do I have to creation a gazillion insert statements?
> 

The sqlite3 command-line shell has a ".import" command which
can be used to read CSV data.  But the way this works internally
is that the command-line shell constructs an INSERT statement,
parses each line of the CSV file and binds the values to that
INSERT statement, then runs the INSERT statement for each line.
So at the end of the day, a bunch of INSERT statements are still
getting evaluated - you just don't see them.

On my workstation, an INSERT statement can be parsed, compiled,
and evaluated in 25-40 microseconds.  That's about 30,000 rows
per second.  How much performance do you need?

--
D. Richard Hipp  <[EMAIL PROTECTED]>
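
The same bind/step/reset loop the shell uses can be reproduced in
application code and wrapped in a single transaction for speed. A minimal
sketch under assumed names (a table t with one text column; not from the
original posts):

    #include <sqlite3.h>

    /* Insert nRow strings into t(a) inside one transaction, reusing a
    ** single prepared statement. */
    static int bulk_insert(sqlite3 *db, const char **azRow, int nRow){
      sqlite3_stmt *pStmt;
      int i, rc;
      sqlite3_exec(db, "BEGIN", 0, 0, 0);
      rc = sqlite3_prepare(db, "INSERT INTO t(a) VALUES(?)", -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      for(i=0; i<nRow; i++){
        sqlite3_bind_text(pStmt, 1, azRow[i], -1, SQLITE_TRANSIENT);
        sqlite3_step(pStmt);
        sqlite3_reset(pStmt);
      }
      sqlite3_finalize(pStmt);
      return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }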









Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Kees Nuyt
On Tue, 19 Dec 2006 13:52:19 -0500, you wrote:

>I've started to use SQLite and it works fine - except for performance. The
>application gets requests (possibly from many users) and does a few
>transactions on the database. (eg. create a context for the request; later
>it may be updated and when the user releases the record is deleted.)
>
>I tried all three sync methods and with FULL, NORMAL and OFF. For 100
>transactions (request / check / release) I see 800 open/close (in 1200msec),
>5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
>12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
>I understand it. (Have to wait for the actual disk operation to complete).
>
>For this particular application it would NOT be a problem to lose like 2-5
>seconds of transactions. I wonder if it is possible to tell SQLite to "hold
>off" the transactions, ACCUMULATE them until a certain time (or if cache
>memory is exhausted - which is not yet the case as we have a modest
>database), then make a BIG COMMIT (i.e. all previous transactions committed
>or none). That way it's still transactional (i.e. no corrupted database - I
>really don't want to use sync = OFF) but the I/O performance wouldn't slow
>down serving requests.
>
>Has anybody done that already? If yes, where can I find such a modified SQLite
>source?
>I saw that all file I/O related calls are in two files: pager.c and
>vdbeaux.c so they are already well isolated and relatively easy to
>understand.
>
>Any help for such "delayed" transaction method in SQLite?
>(I tried to find any relevant message in the email archives but came up
>empty.)
>
>Thanks,
>
>Laszlo Elteto
>CISSP, System Architect
>SafeNet, Inc.

I wonder if shared cache would help you?
http://www.sqlite.org/sharedcache.html

In general, in a high concurrency environment sqlite might not
be a suitable solution.
http://www.sqlite.org/whentouse.html
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Dennis Cote

Laszlo Elteto wrote:

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (i.e. all previous transactions committed
or none). That way it's still transactional (i.e. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.

Laszlo,

You should be able to do this yourself without changing the SQLite 
source code.


You can create two functions to wrap the begin and end transaction 
operations. You can have your begin function check for an existing 
transaction and only open a new one if there isn't one open yet. It 
records the start time for the transaction. The close function counts 
down until all open transactions are closed. If the current time is more 
than your limit after the start time, it actually closes the transaction 
and flushes the changes to disk. Pseudo code is below:


begin_transaction
    if transaction open
        increment open count
    else
        open transaction
        set transaction open to true
        set transaction start time
        set open count to 1

end_transaction
    decrement open count
    if open count = 0
        if now - transaction start time > 5 seconds
            close transaction
            set transaction open to false

HTH
Dennis Cote
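
One possible C rendering of that pseudo code, assuming a single writer
thread and the 5 second limit from above (an illustrative sketch, not from
the original message):

    #include <time.h>
    #include <sqlite3.h>

    static int transactionOpen = 0;  /* is a real BEGIN outstanding? */
    static int openCount = 0;        /* logical nesting depth */
    static time_t startTime;         /* when the real BEGIN happened */

    void begin_transaction(sqlite3 *db){
      if( transactionOpen ){
        openCount++;
      }else{
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        transactionOpen = 1;
        startTime = time(0);
        openCount = 1;
      }
    }

    void end_transaction(sqlite3 *db){
      if( --openCount==0 && time(0) - startTime > 5 ){
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        transactionOpen = 0;
      }
    }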




Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread drh
Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I've started to use SQLite and it works fine - except for performance. The
> application gets requests (possibly from many users) and does a few
> transactions on the database. (eg. create a context for the request; later
> it may be updated and when the user releases the record is deleted.)
> 
> I tried all three sync methods and with FULL, NORMAL and OFF. For 100
> transactions (request / check / release) I see 800 open/close (in 1200msec),
> 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
> 12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
> I understand it. (Have to wait for the actual disk operation to complete).
> 
> For this particular application it would NOT be a problem to lose like 2-5
> seconds of transactions. I wonder if it is possible to tell SQLite to "hold
> off" the transactions, ACCUMULATE them until a certain time (or if cache
> memory is exhausted - which is not yet the case as we have a modest
> database), then make a BIG COMMIT (i.e. all previous transactions committed
> or none). That way it's still transactional (i.e. no corrupted database - I
> really don't want to use sync = OFF) but the I/O performance wouldn't slow
> down serving requests.
> 

It takes at least two complete rotations of the disk platter
to do an atomic and durable commit.  On a 7200 RPM disk, that
means 60 transactions per second is your speed of light.

Your question boils down to this:  Can you speed up transactions
by dropping the durable property - the D in ACID.  Yes you
can.  Actually, most client/server database engines already
do this for you without telling you. Very few client/server
databases are really ACID - they are usually only ACI when
confronted with a power failure.

There is no simple pragma setting or anything like that to
drop durability from SQLite simply because there is no 
server process hanging around to make sure that transactions
get committed atomically in the background.  You have to
do the background commits yourself.  There are various ways
to do this.

One approach would be to write your changes to one or more
TEMP tables.  Writes to TEMP tables are always done with
synchronous=OFF since TEMP tables do not need to survive
a power loss.  So writes to TEMP tables are fast.  Then
have your application periodically transfer the information
in TEMP tables over to the main database.
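
A minimal sketch of that TEMP-table idea under assumed names (a temp
"pending" staging table mirroring a main "contexts" table; both names are
illustrative):

    #include <sqlite3.h>

    /* Fast path: INSERTs go into temp.pending, which SQLite writes
    ** with synchronous=OFF internally.  flush_pending() is called
    ** every few seconds to make the changes durable. */
    int setup_staging(sqlite3 *db){
      return sqlite3_exec(db,
        "CREATE TEMP TABLE pending(ctx INTEGER, data TEXT);", 0, 0, 0);
    }

    int flush_pending(sqlite3 *db){
      return sqlite3_exec(db,
        "BEGIN;"
        "INSERT INTO contexts SELECT * FROM pending;"
        "DELETE FROM pending;"
        "COMMIT;", 0, 0, 0);
    }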

A second approach would be to overload the OS drivers on
the backend of SQLite to support asynchronous I/O.  Mozilla
does this in Firefox in order to boost performance on NFS.
There is well-commented sample code showing how to do this
in the SQLite source file "test_async.c".  That sample code
does not combine multiple transactions, but you could probably
tweak it to make that happen.
--
D. Richard Hipp  <[EMAIL PROTECTED]>





RE: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

2006-12-19 Thread Jesús López
Richard,

Thanks for your reply.

I have tried CREATE INDEX IX_Post ON Post(CategoryID) and I got the
same results. I did it before posting my first message :-); I guessed the
integer primary key field is in every index, but I was not sure.

Regards:

Jesús López


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 16:48
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

Dennis Cote <[EMAIL PROTECTED]> wrote:
> Jesús López wrote:
> > Given the table:
> >
> > CREATE TABLE Posts (
> > PostID INTEGER PRIMARY KEY,
> > CategoryID INT NOT NULL
> > )
> >
> > And the index:
> >
> > CREATE UNIQUE INDEX IX_Posts (CategoryID, PostID)
> >
> 
> Your create index statement is invalid as posted, it should be
> 
> CREATE UNIQUE INDEX IX_Posts ON Posts(CategoryID, PostID)
> 
> But it looks like you have found a real bug in the optimization code. 
> You should create a ticket at http://www.sqlite.org/cvstrac/tktnew. 
> SQLite should be able to execute these queries without even accessing 
> the Posts table itself since all the required data is available in the 
> index in the correct order.
> 

In SQLite, every index has the INTEGER PRIMARY KEY as an implied
final column.  So an index on 

Post(CategoryID, PostID)

Is really an index like this:

Post(CategoryID, PostID, PostID)

In other words, the extra PostID column is redundant.  If you
omit it all together, you run less risk of confusing the optimizer.
Why don't you try defining the index as

CREATE INDEX IX_Post ON Post(CategoryID)

--
D. Richard Hipp  <[EMAIL PROTECTED]>







RE: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

2006-12-19 Thread Jesús López
Dennis,

Thanks for your reply. 

Sorry for the typo, I wrote it directly on Outlook and I did not check it. 

I will file a bug report (a ticket).

Regards:

Jesús López


-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 16:30
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

Jesús López wrote:
> Given the table:
>
> CREATE TABLE Posts (
> PostID INTEGER PRIMARY KEY,
> CategoryID INT NOT NULL
> )
>
> And the index:
>
> CREATE UNIQUE INDEX IX_Posts (CategoryID, PostID)
>
> The query:
>
> SELECT CategoryID, PostID
> FROM Posts
> ORDER BY CategoryID
> LIMIT 10
>
> Takes about 15 ms to complete.
>
> However the query:
>
> SELECT CategoryID, PostID
> FROM Posts
> ORDER BY CategoryID, PostID
> LIMIT 10
>
> Takes about 1200 ms to complete. Almost one hundred times slower.
>
> This seems to me odd behavior, because I have specified both CategoryID
and
> PostID in the index IX_Posts. SQLite is sorting the rows, and it is not
> taking advantage of the index.
>
> If I define the table like this:
>
> CREATE TABLE Posts (
> PostID LONG PRIMARY KEY,
> CategoryID INT NOT NULL
> )
>
> both queries takes about 15 ms, which is even more amazing.
>
> What is happening here? Why does SQLite behave as if the INTEGER PRIMARY KEY
> field was not in the IX_Posts index?
>
>   
Jesus,

Your create index statement is invalid as posted, it should be

CREATE UNIQUE INDEX IX_Posts ON Posts(CategoryID, PostID)

But it looks like you have found a real bug in the optimization code. 
You should create a ticket at http://www.sqlite.org/cvstrac/tktnew. 
SQLite should be able to execute these queries without even accessing 
the Posts table itself since all the required data is available in the 
index in the correct order.

Dennis Cote







[sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
I've started to use SQLite and it works fine - except for performance. The
application gets requests (possibly from many users) and does a few
transactions on the database. (eg. create a context for the request; later
it may be updated and when the user releases the record is deleted.)

I tried all three sync methods and with FULL, NORMAL and OFF. For 100
transactions (request / check / release) I see 800 open/close (in 1200msec),
5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
I understand it. (Have to wait for the actual disk operation to complete).

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (i.e. all previous transactions committed
or none). That way it's still transactional (i.e. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.

Has anybody done that already? If yes, where can I find such a modified
SQLite source?
I saw that all file I/O related calls are in two files: pager.c and
vdbeaux.c so they are already well isolated and relatively easy to
understand.

Any help for such "delayed" transaction method in SQLite?
(I tried to find any relevant message in the email archives but came up
empty.)

Thanks,

Laszlo Elteto
CISSP, System Architect
SafeNet, Inc.





Re: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

2006-12-19 Thread Dennis Cote

[EMAIL PROTECTED] wrote:

In SQLite, every index has the INTEGER PRIMARY KEY as an implied
final column.  So an index on 


Post(CategoryID, PostID)

Is really an index like this:

Post(CategoryID, PostID, PostID)

In other words, the extra PostID column is redundant.  If you
omit it all together, you run less risk of confusing the optimizer.
Why don't you try defining the index as

CREATE INDEX IX_Post ON Post(CategoryID)


Richard,

Even using the index you suggested, SQLite still does a sort for the 
second query. It doesn't use the index at all. This still seems like a 
bug in the optimizer.


Also, I would have thought that SQLite would automatically optimize away 
the redundant storage of the rowid in an index if it was also storing an 
integer primary key column. The users shouldn't have to know how SQLite 
implements these features internally in order to define their tables and 
indexes without redundant data being stored.


Dennis Cote






Re: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

2006-12-19 Thread drh
Dennis Cote <[EMAIL PROTECTED]> wrote:
> Jesús López wrote:
> > Given the table:
> >
> > CREATE TABLE Posts (
> > PostID INTEGER PRIMARY KEY,
> > CategoryID INT NOT NULL
> > )
> >
> > And the index:
> >
> > CREATE UNIQUE INDEX IX_Posts (CategoryID, PostID)
> >
> 
> Your create index statement is invalid as posted, it should be
> 
> CREATE UNIQUE INDEX IX_Posts ON Posts(CategoryID, PostID)
> 
> But it looks like you have found a real bug in the optimization code. 
> You should create a ticket at http://www.sqlite.org/cvstrac/tktnew. 
> SQLite should be able to execute these queries without even accessing 
> the Posts table itself since all the required data is available in the 
> index in the correct order.
> 

In SQLite, every index has the INTEGER PRIMARY KEY as an implied
final column.  So an index on 

Post(CategoryID, PostID)

Is really an index like this:

Post(CategoryID, PostID, PostID)

In other words, the extra PostID column is redundant.  If you
omit it all together, you run less risk of confusing the optimizer.
Why don't you try defining the index as

CREATE INDEX IX_Post ON Post(CategoryID)
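
One way to check which plan the optimizer actually picks is EXPLAIN QUERY
PLAN. A sketch (assuming that variant is available in your build; table and
index names follow the thread):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print each row of the query plan: a row naming the index means
    ** it is used, while a separate sort step means it is not. */
    static int showPlan(void *unused, int nCol, char **azVal, char **azCol){
      int i;
      for(i=0; i<nCol; i++) printf("%s ", azVal[i] ? azVal[i] : "NULL");
      printf("\n");
      return 0;
    }

    /* After sqlite3_open():
    **   sqlite3_exec(db,
    **     "EXPLAIN QUERY PLAN "
    **     "SELECT CategoryID, PostID FROM Posts "
    **     "ORDER BY CategoryID, PostID LIMIT 10;",
    **     showPlan, 0, 0);
    */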

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] Query Optimizer and INTEGER PRIMARY KEY

2006-12-19 Thread Dennis Cote

Jesús López wrote:

Given the table:

CREATE TABLE Posts (
PostID INTEGER PRIMARY KEY,
CategoryID INT NOT NULL
)

And the index:

CREATE UNIQUE INDEX IX_Posts (CategoryID, PostID)

The query:

SELECT CategoryID, PostID
FROM Posts
ORDER BY CategoryID
LIMIT 10

Takes about 15 ms to complete.

However the query:

SELECT CategoryID, PostID
FROM Posts
ORDER BY CategoryID, PostID
LIMIT 10

Takes about 1200 ms to complete. Almost one hundred times slower.

This seems to me odd behavior, because I have specified both CategoryID and
PostID in the index IX_Posts. SQLite is sorting the rows, and it is not
taking advantage of the index.

If I define the table like this:

CREATE TABLE Posts (
PostID LONG PRIMARY KEY,
CategoryID INT NOT NULL
)

both queries takes about 15 ms, which is even more amazing.

What is happening here? Why does SQLite behave as if the INTEGER PRIMARY KEY
field was not in the IX_Posts index?


Jesus,

Your create index statement is invalid as posted, it should be

   CREATE UNIQUE INDEX IX_Posts ON Posts(CategoryID, PostID)

But it looks like you have found a real bug in the optimization code. 
You should create a ticket at http://www.sqlite.org/cvstrac/tktnew. 
SQLite should be able to execute these queries without even accessing 
the Posts table itself since all the required data is available in the 
index in the correct order.


Dennis Cote





RE: [sqlite] Is there a method for doing bulk insertion?

2006-12-19 Thread Anderson, James H \(IT\)
Thanks. 

-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 18, 2006 8:32 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Is there a method for doing bulk insertion?

On 12/18/06, Anderson, James H (IT) <[EMAIL PROTECTED]>
wrote:
> I was hoping there was the equivalent of Sybase's BCP program. I was
> also hoping something programmatic was available, i.e., not something
> from the command shell. Maybe a little background would help.
>
> I'm planning on using the perl package DBD::SQLite. My department is a
> big sybase user but because of the nature of our workload, we experience
> a lot of contention in both the transaction log and tempdb (the database
> that houses temporary tables). I'm investigating the feasibility of
> transferring data into SQLite, doing all the data manipulations there,
> and then transferring it back to the appropriate sybase tables. I
> suspect this could be a big win for a number of our applications.
>
> But if it can be avoided, I don't want to do a CSV conversion, nor do I
> want to shell out of the code to invoke this.

I created a c++ version for my own use. The source code is downloadable
if that's of any help to you. See my sig line for the address.

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com








Re: [sqlite] disk I/O error writing files mounted via samba

2006-12-19 Thread Guy Hindell

Guy Hindell wrote:

Guy Hindell wrote:

[EMAIL PROTECTED] wrote:

Guy Hindell <[EMAIL PROTECTED]> wrote:
I would like to use sqlite (v3.3.8) on a linux box (fedora core 5) 
to read/write a database file in a directory which is actually on a 
windows share mounted via samba/cifs. I can open the file, and read 
from it, but writing produces "disk I/O error" messages 
(SQLITE_IOERR error code). I can write ordinary files on the share 
(echo "foo" > [share]/file.txt) so it doesn't look like a 
permissions issue. Only one process is ever going to access the 
file so I wouldn't expect locking issues. If I try turning things 
around so that I build/run my sqlite program on windows and access 
a file on a samba share exported from my linux box I can read/write 
without any errors.





Please turn on extended result codes using

  sqlite3_extended_result_codes(db, 1)

Then tell me the detailed error code that results from this
error.  That will help to isolate the problem.
--
D. Richard Hipp  <[EMAIL PROTECTED]>



Hi, sorry for the delay - been away from email for a couple of days.

I have tried adding a call to sqlite3_extended_result_codes() as you 
describe above (called once after the db file is opened on the handle 
returned from sqlite3_open()). Still get simply error code 10 
(SQLITE_IOERR) back from write statements (no upper bits set), but 
now get no error text from sqlite3_get_table() (which I use to front 
all my sqlite3_exec() calls).


What next?

guy

BTW, regarding the other post about file sizes greater than 2GB, no, 
the file is tiny.







OK, a bit more playing shows that the point at which the SQLITE_IOERR 
is produced is in os_unix.c, line ~1586:

    lock.l_type = F_UNLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = PENDING_BYTE;
    lock.l_len = 2L;  assert( PENDING_BYTE+1==RESERVED_BYTE );
    if( fcntl(pFile->h, F_SETLK, &lock)==0 ){
      pLock->locktype = SHARED_LOCK;
    }else{
      rc = SQLITE_IOERR_UNLOCK;  /* This should never happen <-- BUT IT DOES :-( */
    }

Not sure why I don't see this as an extended result, but it seems like 
the cifs mounted filesystem isn't behaving in the expected manner.


guy





Ah! Google provides the answer - seems like the nobrl option is required 
when mounting filesystems with cifs if this sort of locking call is 
going to be made...


http://lists.samba.org/archive/linux-cifs-client/2006-November/001583.html







Re: [sqlite] disk I/O error writing files mounted via samba

2006-12-19 Thread Guy Hindell

Guy Hindell wrote:

[EMAIL PROTECTED] wrote:

Guy Hindell <[EMAIL PROTECTED]> wrote:
 
I would like to use sqlite (v3.3.8) on a linux box (fedora core 5) 
to read/write a database file in a directory which is actually on a 
windows share mounted via samba/cifs. I can open the file, and read 
from it, but writing produces "disk I/O error" messages 
(SQLITE_IOERR error code). I can write ordinary files on the share 
(echo "foo" > [share]/file.txt) so it doesn't look like a 
permissions issue. Only one process is ever going to access the file 
so I wouldn't expect locking issues. If I try turning things around 
so that I build/run my sqlite program on windows and access a file 
on a samba share exported from my linux box I can read/write without 
any errors.





Please turn on extended result codes using

  sqlite3_extended_result_codes(db, 1)

Then tell me the detailed error code that results from this
error.  That will help to isolate the problem.
--
D. Richard Hipp  <[EMAIL PROTECTED]>



Hi, sorry for the delay - been away from email for a couple of days.

I have tried adding a call to sqlite3_extended_result_codes() as you 
describe above (called once after the db file is opened on the handle 
returned from sqlite3_open()). Still get simply error code 10 
(SQLITE_IOERR) back from write statements (no upper bits set), but now 
get no error text from sqlite3_get_table() (which I use to front all 
my sqlite3_exec() calls).


What next?

guy

BTW, regarding the other post about file sizes greater than 2GB, no, 
the file is tiny.






OK, a bit more playing shows that the point at which the SQLITE_IOERR is 
produced is in os_unix.c, line ~1586:

    lock.l_type = F_UNLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = PENDING_BYTE;
    lock.l_len = 2L;  assert( PENDING_BYTE+1==RESERVED_BYTE );
    if( fcntl(pFile->h, F_SETLK, &lock)==0 ){
      pLock->locktype = SHARED_LOCK;
    }else{
      rc = SQLITE_IOERR_UNLOCK;  /* This should never happen <-- BUT IT DOES :-( */
    }

Not sure why I don't see this as an extended result, but it seems like 
the cifs mounted filesystem isn't behaving in the expected manner.


guy



