Re: [sqlite] Registering a custom tokenizer per database rather than per connection

2010-10-08 Thread Drake Wilson
Quoth Travis Orr , on 2010-10-08 13:37:25 -0700:
> Is it possible to register a custom FTS3 tokenizer to be persistent in
> the database so that it is available every time a connection is opened?

Not really.  How would this work?  You open a database, and the
function pointer for the tokenizer comes from... where?  The physical
pointer usually isn't guaranteed to be the same even for the same
application running twice on the same machine.

This would also have an impact on security: opening a database
shouldn't allow it to access arbitrary C code from your program, and
declaring which functions are allowed and which are not is just as
much work as reregistering the tokenizer in the first place.
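For reference, the per-connection registration is cheap.  A minimal
sketch, following the pattern given in the FTS3 documentation (the
wrapper name is hypothetical):

  /* Register a custom FTS3 tokenizer on one connection by passing the
  ** module pointer to the fts3_tokenizer() SQL function as a blob.
  ** Call this once on every sqlite3_open() before using FTS3 tables. */
  int registerTokenizer(
    sqlite3 *db,
    const char *zName,
    const sqlite3_tokenizer_module *p
  ){
    int rc;
    sqlite3_stmt *pStmt;
    const char *zSql = "SELECT fts3_tokenizer(?, ?)";

    rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;

    sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_STATIC);
    sqlite3_bind_blob(pStmt, 2, &p, sizeof(p), SQLITE_STATIC);
    sqlite3_step(pStmt);
    return sqlite3_finalize(pStmt);
  }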

However, it now occurs to me that it may be possible to use the
fts3_tokenizer() function in a trigger, which is probably a bad thing
when writing to untrusted databases.  Hmm.

   ---> Drake Wilson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conditional IF in triggers

2010-10-08 Thread Russell A
Hi Igor
I'm converting an Interbase DB for use in a smaller application, so there are 
many different examples.

I've included one particular example below, where I have converted a single
Interbase trigger (which used IF statements) into 4 separate SQLite triggers.
Any advice appreciated.
Russell.
/* Trigger: TEXTLISTS_B4DEL */
CREATE TRIGGER TEXTLISTS_B4DEL_genericsinfo BEFORE DELETE ON TEXTLISTS
when (old.listtype in ('W', 'C')) and
(select TextList_id from genericsinfo where TextList_id = old.textlist_id Limit 1) is not null
begin
  Select Raise(Fail, 'DEPENDANCYERROR');
end;


/* Trigger: TEXTLISTS_B4DEL */
CREATE TRIGGER TEXTLISTS_B4DEL_PRODLOCATION BEFORE DELETE ON TEXTLISTS
when (old.listtype = 'L') and
(select PRODUCT_ID from PRODUCTS where LOCATION = old.identifier Limit 1) is not null
begin
  Select Raise(Fail, 'DEPENDANCYERROR');
end;

/* Trigger: TEXTLISTS_B4DEL */
CREATE TRIGGER TEXTLISTS_B4DEL_PRODTYPE BEFORE DELETE ON TEXTLISTS
when (old.listtype = 'PT') and
(select PRODUCT_ID from PRODUCTS where PRODTYPE = old.identifier Limit 1) is not null
begin
  Select Raise(Fail, 'DEPENDANCYERROR');
end;


/* Trigger: TEXTLISTS_B4DEL */
CREATE TRIGGER TEXTLISTS_B4DEL_PRODFORM BEFORE DELETE ON TEXTLISTS
when (old.listtype = 'PF') and
(select PRODUCT_ID from PRODUCTS where PROD_FORM = old.identifier Limit 1) is not null
begin
  Select Raise(Fail, 'DEPENDANCYERROR');
end;
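
For comparison: since a trigger's WHEN clause is an arbitrary expression,
the four triggers above could in principle be collapsed into one by OR-ing
the conditions.  An untested sketch, reusing the same table and column
names (EXISTS is equivalent to the "Limit 1 ... is not null" subqueries):

CREATE TRIGGER TEXTLISTS_B4DEL BEFORE DELETE ON TEXTLISTS
when ((old.listtype in ('W', 'C')) and
      exists (select 1 from genericsinfo where TextList_id = old.textlist_id))
  or ((old.listtype = 'L') and
      exists (select 1 from PRODUCTS where LOCATION = old.identifier))
  or ((old.listtype = 'PT') and
      exists (select 1 from PRODUCTS where PRODTYPE = old.identifier))
  or ((old.listtype = 'PF') and
      exists (select 1 from PRODUCTS where PROD_FORM = old.identifier))
begin
  Select Raise(Fail, 'DEPENDANCYERROR');
end;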






--- On Sat, 9/10/10, Igor Tandetnik  wrote:

From: Igor Tandetnik 
Subject: Re: [sqlite] Conditional IF in triggers
To: sqlite-users@sqlite.org
Received: Saturday, 9 October, 2010, 12:21 AM

Russell A  wrote:
> This may be a really dumb question, but I've searched and can't find an 
> answer.
> Do SQLite triggers support any conditional expressions, like IF, or is there 
> only the WHEN statement? If the latter, does that
> mean that multiple conditions must be in separate triggers? Any help
> appreciated.
> Stopgap.

What kind of conditions do you have in mind? Could you give an example? There 
is no IF statement in SQLite, but one can get creative with the WHEN clause on 
the trigger itself, and the WHERE clauses of individual statements within the 
trigger.

Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Nicolas Williams
On Fri, Oct 08, 2010 at 05:49:18PM +0100, Simon Slavin wrote:
> On 8 Oct 2010, at 5:48pm, Stephan Wehner wrote:
> > On Fri, Oct 8, 2010 at 7:14 AM, Michele Pradella
> >  wrote:
> >>  "science fiction?" was a rhetorically question. I'm only wondering
> >> about what is the best and fastest way to DELETE a lot of records from
> >> huge DB. I know and understand physical limit of data moving: anyway for
> >> now I'm trying to split the BIG DELETE in some smaller DELETE to spread
> >> the time used. It's the only way I can figure out at the moment.
> > 
> > Is a soft-delete faster? Then you could add a slow-moving delete
> > (mentioned earlier by Aldes Rossi, for example)
> > for the soft-deleted records.
> 
> Soft-delete ?  Is that having another column which is a '1' if the
> record is supposed to exist and a '0' if it's supposed to be deleted ?

Use NULL to indicate deleted-ness.  But note that you may still have to
update lots of rows and indexes.  Indeed, a true delete might not need
much more I/O (whereas a VACUUM after the DELETE would).
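
A minimal sketch of the soft-delete pattern being discussed, with
hypothetical table and column names:

ALTER TABLE events ADD COLUMN deleted INTEGER NOT NULL DEFAULT 0;

-- The "delete" becomes an update (still touching many rows and indexes):
UPDATE events SET deleted = 1 WHERE DateTime < @cutoff;

-- Every reader must then filter:
SELECT * FROM events WHERE deleted = 0;

-- A background task reclaims the rows later, in small batches:
DELETE FROM events WHERE deleted = 1;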
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Registering a custom tokenizer per database rather than per connection

2010-10-08 Thread Travis Orr
Is it possible to register a custom FTS3 tokenizer to be persistent in
the database so that it is available every time a connection is opened?
Or do I need to reregister the tokenizer every time I open a new
connection to the database?

 

Thanks

 

Travis Orr

 

IVL Audio Inc

#3-6703 Rajpur Place

Victoria BC

V8M 1Z5

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting SQLite 3.7.2 to vxWorks 6.7

2010-10-08 Thread Scott A Mintz
Thank you.  Those changes (modified slightly for 3.7.2) allowed me to 
create a DKM project that compiles sqlite3.c to libSQLite3.a.

However, when I link my main VIP project, I get the following unresolved 
errors:
dld: warning: Undefined symbol 'rtpVerifyAndLock' in file 'partialImage.o'
dld: warning: Undefined symbol 'pgMgrPageFree' in file 'partialImage.o'
dld: warning: Undefined symbol 'pgMgrPageAllocAt' in file 'partialImage.o'

Those are not directly used by SQLite.  But I have a feeling that one or 
more of the file I/O, semaphore, locking, or memory library system calls 
are...

-Scott

sqlite-users-boun...@sqlite.org wrote on 10/08/2010 07:38:45 AM:

> Hello Scott,
> 
> Below is my patch on the latest SQLite 3.7.3. Please note that I have only
> verified it with the GCC 4.1.2 compiler in VxWorks 6.6/6.7/6.8 (I have not
> verified it with my real target machine yet).
> 
> *** sqlite3.c.orig2010-10-08 10:42:22.0 +0800
> --- sqlite3.c2010-10-08 19:24:18.390625000 +0800
> ***
> *** 17,22 
> --- 17,26 
>   ** language. The code for the "sqlite3" command-line shell is also in 
a
>   ** separate file. This file contains only code for the core SQLite
> library.
>   */
> + #if defined(OS_VXWORKS)
> + #include 
> + #endif /* OS_VXWORKS */
> +
>   #define SQLITE_CORE 1
>   #define SQLITE_AMALGAMATION 1
>   #ifndef SQLITE_PRIVATE
> ***
> *** 22795,22801 
> --- 22799,22811 
>   #include 
>   #include 
>   #include 
> +
> + #if defined(OS_VXWORKS) && defined(_WRS_KERNEL)
> + #include 
> + #else
>   #include 
> + #endif /* OS_VXWORKS */
> +
>   #include 
>   #include 
> 
> ***
> *** 24945,24951 
> --- 24955,24965 
>   /*
>** Close a file.
>*/
> + #if (OS_VXWORKS < 600)
>   static int semClose(sqlite3_file *id) {
> + #else
> + static int semClose_native(sqlite3_file *id) {
> + #endif
> if( id ){
>   unixFile *pFile = (unixFile*)id;
>   semUnlock(id, NO_LOCK);
> ***
> *** 25581,25587 
> --- 25595,25607 
>   }
>   return -1;
> }
> +
> + #if defined(OS_VXWORKS) && defined(_WRS_KERNEL)
> +   got = write(id->h, (char *)pBuf, cnt);
> + #else
> got = write(id->h, pBuf, cnt);
> + #endif /* OS_VXWORKS */
> +
>   #endif
> TIMER_END;
> if( got<0 ){
> ***
> *** 26762,26768 
> --- 26782,26792 
> semIoFinder,  /* Finder function name */
> semIoMethods, /* sqlite3_io_methods object name */
> 1,/* shared memory is disabled */
> + #if (OS_VXWORKS < 600)
> semClose, /* xClose method */
> + #else
> +   semClose_native,  /* xClose method */
> + #endif
> semLock,  /* xLock method */
> semUnlock,/* xUnlock method */
> semCheckReservedLock  /* xCheckReservedLock method */
> ***
> *** 27517,27523 
> noLock = eType!=SQLITE_OPEN_MAIN_DB;
> 
> 
> ! #if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE
> struct statfs fsInfo;
> if( fstatfs(fd, &fsInfo) == -1 ){
>   ((unixFile*)pFile)->lastErrno = errno;
> --- 27541,27547 
> noLock = eType!=SQLITE_OPEN_MAIN_DB;
> 
> 
> ! #if (defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE) &&
> !defined(OS_VXWORKS)
> struct statfs fsInfo;
> if( fstatfs(fd, &fsInfo) == -1 ){
>   ((unixFile*)pFile)->lastErrno = errno;
> ***
> *** 27530,27536 
> }
>   #endif
> 
> ! #if SQLITE_ENABLE_LOCKING_STYLE
>   #if SQLITE_PREFER_PROXY_LOCKING
> isAutoProxy = 1;
>   #endif
> --- 27554,27560 
> }
>   #endif
> 
> ! #if SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORKS
>   #if SQLITE_PREFER_PROXY_LOCKING
> isAutoProxy = 1;
>   #endif
> ***
> *** 27793,27799 
> ** tests repeatable.
> */
> memset(zBuf, 0, nBuf);
> ! #if !defined(SQLITE_TEST)
> {
>   int pid, fd;
>   fd = open("/dev/urandom", O_RDONLY);
> --- 27817,27823 
> ** tests repeatable.
> */
> memset(zBuf, 0, nBuf);
> ! #if !defined(SQLITE_TEST) && !defined(OS_VXWORKS)
> {
>   int pid, fd;
>   fd = open("/dev/urandom", O_RDONLY);
> 
> I've used the definitions below:
> 
> EXTRA_DEFINE+=-DOS_VXWORKS_660=660
> EXTRA_DEFINE+=-DOS_VXWORKS_670=670
> EXTRA_DEFINE+=-DOS_VXWORKS_680=680
> EXTRA_DEFINE+=-DOS_VXWORKS=OS_VXWORKS_670
> EXTRA_DEFINE+=-DSQLITE_HOMEGROWN_RECURSIVE_MUTEX
> EXTRA_DEFINE+=-DSQLITE_ENABLE_LOCKING_STYLE=1
> EXTRA_DEFINE+=-DSQLITE_OMIT_LOAD_EXTENSION
> 
> 
> Best Regards,
> 
> Huang Zhihua
> 
> 2010/10/8 Scott A Mintz 
> 
> > I searched through all the mail archives to see if I could find someone
> > that has ported SQLite to vxWorks in kernel mode.  Apparently, there are a
> > few folks attempting it.  And fewer succeeding at it.
> >
> > I found an article published by ZhiHua Huang where he describes the mods
> > he made to port SQLite 3.6.23.1 to 

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Stephan Wehner
On Fri, Oct 8, 2010 at 9:49 AM, Simon Slavin  wrote:
>
> On 8 Oct 2010, at 5:48pm, Stephan Wehner wrote:
>
>> On Fri, Oct 8, 2010 at 7:14 AM, Michele Pradella
>>  wrote:
>>>  "science fiction?" was a rhetorically question. I'm only wondering
>>> about what is the best and fastest way to DELETE a lot of records from
>>> huge DB. I know and understand physical limit of data moving: anyway for
>>> now I'm trying to split the BIG DELETE in some smaller DELETE to spread
>>> the time used. It's the only way I can figure out at the moment.
>>
>> Is a soft-delete faster? Then you could add a slow-moving delete
>> (mentioned earlier by Aldes Rossi, for example)
>> for the soft-deleted records.
>
> Soft-delete ?  Is that having another column which is a '1' if the record is 
> supposed to exist and a '0' if it's supposed to be deleted ?
>

Yes, that's what I meant. The column is usually called 'deleted'; 1 means
deleted, 0 means not deleted.

Stephan

> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Stephan Wehner

-> http://stephan.sugarmotor.org (blog and homepage)
-> http://loggingit.com
-> http://www.thrackle.org
-> http://www.buckmaster.ca
-> http://www.trafficlife.com
-> http://stephansmap.org -- http://blog.stephansmap.org
-> http://twitter.com/stephanwehner / @stephanwehner
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Simon Slavin

On 8 Oct 2010, at 5:48pm, Stephan Wehner wrote:

> On Fri, Oct 8, 2010 at 7:14 AM, Michele Pradella
>  wrote:
>>  "science fiction?" was a rhetorically question. I'm only wondering
>> about what is the best and fastest way to DELETE a lot of records from
>> huge DB. I know and understand physical limit of data moving: anyway for
>> now I'm trying to split the BIG DELETE in some smaller DELETE to spread
>> the time used. It's the only way I can figure out at the moment.
> 
> Is a soft-delete faster? Then you could add a slow-moving delete
> (mentioned earlier by Aldes Rossi, for example)
> for the soft-deleted records.

Soft-delete ?  Is that having another column which is a '1' if the record is 
supposed to exist and a '0' if it's supposed to be deleted ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Stephan Wehner
On Fri, Oct 8, 2010 at 7:14 AM, Michele Pradella
 wrote:
>  "science fiction?" was a rhetorically question. I'm only wondering
> about what is the best and fastest way to DELETE a lot of records from
> huge DB. I know and understand physical limit of data moving: anyway for
> now I'm trying to split the BIG DELETE in some smaller DELETE to spread
> the time used. It's the only way I can figure out at the moment.
>

Is a soft-delete faster? Then you could add a slow-moving delete
(mentioned earlier by Aldes Rossi, for example)
for the soft-deleted records.

Stephan

> On 08/10/2010 15.55, Jay A. Kreibich wrote:
>> On Fri, Oct 08, 2010 at 09:09:09AM +0200, Michele Pradella scratched on the 
>> wall:
>>>    I was thinking this too, but I'm keeping it as a last resort: my hope is I
>>> can delete 5 million records in a few seconds. Science fiction? :)
>>    Science fiction of the worst B-grade sort.
>>
>>    Think about the numbers.  You're talking about updating a significant
>>    chunk of a multi-gigabyte file.  The WAL file tells you the changes
>>    amount to ~600MB of writes.  That's a whole CD's worth of data.  These
>>    days that might not be much for storage, but it is still a lot of
>>    data to move around.  Even if your storage system has a continuous,
>>    sustained write ability of 20MB/sec, that's a half minute.  How fast
>>    can your disk copy 600MB worth of data?
>>
>>    But you're not just writing.  You're doing a lot of reads from all
>>    over the file in an attempt to figure out what to modify and write.
>>    Both the reads and the writes (the integration, at least) are
>>    scattered and small, so you're not going to get anywhere near the
>>    sustained performance levels.  10x slower would be extremely good.
>>
>>    Or think of it in more physical numbers... If you're using a single
>>    vanilla disk, it likely spins at 7200 RPMs.  If it takes five minutes
>>    to update 5,000,000 records, that's an average of almost 140 records
>>    per disk revolution.  That's pretty good, considering everything else
>>    that is going on!
>>
>>
>>
>>    The only possible way to manipulate that much data in a "few seconds"
>>    is to load up on RAM, get a real operating system, and throw the
>>    whole database into memory.  Or spend many, many, many thousands of
>>    dollars on a very wide disk array with a very large battery-backed
>>    cache and a huge pipe between your host and the array.
>>
>>    Big storage is cheap.  Fast storage is not.  Don't confuse the two.
>>
>>     -j
>>
>>
>
>
> --
> Selea s.r.l.
>
>
>        Michele Pradella R
>
>
>        SELEA s.r.l.
>
> Via Aldo Moro 69
> Italy - 46019 Cicognara (MN)
> Tel +39 0375 889091
> Fax +39 0375 889080
> *michele.prade...@selea.com* 
> *http://www.selea.com*
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Stephan Wehner

-> http://stephan.sugarmotor.org (blog and homepage)
-> http://loggingit.com
-> http://www.thrackle.org
-> http://www.buckmaster.ca
-> http://www.trafficlife.com
-> http://stephansmap.org -- http://blog.stephansmap.org
-> http://twitter.com/stephanwehner / @stephanwehner
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

2010-10-08 Thread Dan Kennedy

On Oct 8, 2010, at 9:44 PM, Raj, Praveen wrote:

> Hello,
>
> I debugged the SQLite functions and here is what I found:
>
> The call to "mmap" in the function "unixShmMap" is causing the issue.
> void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE,
>  MAP_SHARED, pShmNode->h, iRegion*szRegion);
>
> It is setting the previous memory region/regions to zero while  
> mapping the new ones. The mmap call internally uses the QNX API mmap64()
> to map the required memory region. Not sure what is happening
> here. I just need to dig into the memory mapping to find what's happening
> and hopefully find a solution.
>
> Dan - Do you have any idea on why this could be happening?

Sounds like a bug in QNX to me.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

2010-10-08 Thread Black, Michael (IS)
According to the QNX mmap page
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/m/mmap.html
 
MAP_NOINIT
When specified, the POSIX requirement that the memory be zeroed is relaxed. The 
physical memory being used for this allocation must have been previously freed 
with UNMAP_INIT_OPTIONAL for this flag to have any effect. 


This flag was added in the QNX Neutrino Core OS 6.3.2.  

Interesting that this claims it's a POSIX requirement, but I don't think
many others do this.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Raj, Praveen
Sent: Fri 10/8/2010 9:44 AM
To: General Discussion of SQLite Database
Subject: EXTERNAL:Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS



Hello,

I debugged the SQLite functions and here is what I found:

The call to "mmap" in the function "unixShmMap" is causing the issue.
void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE,
  MAP_SHARED, pShmNode->h, iRegion*szRegion);

It is setting the previous memory region/regions to zero while mapping the new 
ones. The mmap call internally uses the QNX API mmap64() to map the required memory
region. Not sure what is happening here. I just need to dig into the memory
mapping to find what's happening and hopefully find a solution.

Dan - Do you have any idea on why this could be happening?

Thanks,
Praveen


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan Kennedy
Sent: Wednesday, October 06, 2010 9:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS


On Oct 6, 2010, at 9:39 PM, Raj, Praveen wrote:

> Hi Dan,
>
> For debugging purposes we use QNX Momentics, which is the IDE
> provided by QNX. I believe that to a certain extent
> we should be able to debug the issue using the IDE. In what situations
> would the shared memory block be zeroed?
> I believe it should be zeroed when the db is closed or when the WAL file
> is checkpointed. Any other scenarios?

Following a checkpoint when the writer wraps around to
the start of the log file.

> Please provide pointers on what needs to be checked along with
> "unixShmMap()" when the problem occurs.

There is only one call to sqlite3OsShmMap() in wal.c. Try
to stop the debugger when this function is called with iPage==1.

At this point there should be a pointer to the first 32KB of
mapped memory in pWal->apWiData[0]. It should not be full of
zeroes as it is in the dumps that you sent. Check if this is
the case. More specifically - 100 bytes or so into the buffer
should not contain all zeroes. It should contain a bunch of
32-bit non-zero frame numbers.

Then check if it is still the case after the call to
sqlite3OsShmMap() has returned. Either way, you're looking
for the point where that buffer pWal->apWiData[0] is
overwritten with zeroes.

Dan.



> Thanks,
> Praveen
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org
> ] On Behalf Of Dan Kennedy
> Sent: Wednesday, October 06, 2010 7:37 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX
> OS
>
>
> On Oct 6, 2010, at 8:15 PM, Raj, Praveen wrote:
>
>> Hello,
>>
>> I'm new to SQLite, so not sure what is the behavior of these failed
>> tests on Unix/Linux. Are they successfully passing all the WAL
>> journal mode tests on Unix?
>>
>> What could be happening when the shared memory (-shm file) is
>> increased from 32k to 64k?
>> Is a new 64k memory chunk allocated, the old 32k of data copied over,
>> and then the 32k purged?
>
> At some point in the transaction, the first 32KB of shared-memory
> are being zeroed. I'm not sure why this would happen.
>
> The WAL file looks Ok. And the header in the shared-memory looks
> Ok. But most of the first 32KB of shared-memory has been zeroed
> (it is likely that the whole thing was zeroed and then the header
> rewritten).
>
> Maybe something about running the unixShmMap() function on QNX to
> mmap() the second 32KB of space (we use a separate mapping
> for each 32KB block) is zeroing the first. Somehow. Do you have
> a good debugger for the platform?
>
> Dan.
>
>
>
>
>> Any insights on whats happening here would be of great help.
>>
>> Thanks,
>> Praveen
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of Raj, Praveen
>> Sent: Sunday, October 03, 2010 7:42 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX
>> OS
>>
>> Hi,
>>
>> Yes the files sizes of "test.db", "test.db-wal" and "test.db-shm"
>> are 1024, 4333512 and 65536 bytes respectively as specified below.
>>
>> Most of the test scripts are failing 

Re: [sqlite] Compiling dll 3.7.3 issue was [BUG] JOIN subquery in FROM with FTS3 table

2010-10-08 Thread Shane Harrelson
You'll get this if your .DEF file includes any APIs that are not
compiled into your build.
A few compile-time options that control which APIs exist are:

SQLITE_OMIT_LOAD_EXTENSION=1
SQLITE_ENABLE_COLUMN_METADATA=1
SQLITE_ENABLE_STAT2=1
SQLITE_ENABLE_FTS3=1
SQLITE_ENABLE_RTREE=1

Depending on how you're compiling, you have a couple of options.

1) enable RTREE extensions in your build
2) remove the RTREE APIs from your .DEF file
3) don't use the .DEF (see below)

All of the external SQLite APIs are prefixed with the define
SQLITE_API.  This is usually not defined to any value.  You could try
changing this to use the export declspec directive.  Something like
the following should work:

#define SQLITE_API __declspec(dllexport)

HTH.
-Shane


On Fri, Oct 8, 2010 at 10:03 AM, Shopsland gmail  wrote:
> Hi,
>
> Thanks Dan, Richard and Max for the (ultra) fast answer and Dan and
> Richard for the fix, because, as Max says, the bug is fixed now in
> 3.7.3.
>
> But I have a problem compiling sqlite with VC++. I was able to compile
> 3.7.2 without issues.
>
> With 3.7.3 The linker says:
>
> 'error LNK2001: unresolved external symbol sqlite3_rtree_geometry_callback'
>
> Keep up the good work! :-)
>
> Jochi Martinez
> www.bfreenews.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Involving in sqlite development

2010-10-08 Thread Paweł Hajdan, Jr.
On Fri, Oct 8, 2010 at 15:28, sjtirtha  wrote:

> 1. set up a development environment for SQLite (I'm using Ubuntu)
>    Do you have any preference for which editor or IDE I should use?


It's very easy. Download the sources, compile the "standard" way
(./configure; make). Any editor is fine.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS

2010-10-08 Thread Raj, Praveen
Hello,

I debugged the SQLite functions and here is what I found:

The call to "mmap" in the function "unixShmMap" is causing the issue.
void *pMem = mmap(0, szRegion, PROT_READ|PROT_WRITE,
  MAP_SHARED, pShmNode->h, iRegion*szRegion);

It is setting the previous memory region/regions to zero while mapping the new 
ones. The mmap call internally uses the QNX API mmap64() to map the required memory
region. Not sure what is happening here. I just need to dig into the memory
mapping to find what's happening and hopefully find a solution.

Dan - Do you have any idea on why this could be happening?

Thanks,
Praveen


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan Kennedy
Sent: Wednesday, October 06, 2010 9:05 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX OS


On Oct 6, 2010, at 9:39 PM, Raj, Praveen wrote:

> Hi Dan,
>
> For debugging purposes we use QNX Momentics, which is the IDE
> provided by QNX. I believe that to a certain extent
> we should be able to debug the issue using the IDE. In what situations
> would the shared memory block be zeroed?
> I believe it should be zeroed when the db is closed or when the WAL file
> is checkpointed. Any other scenarios?

Following a checkpoint when the writer wraps around to
the start of the log file.

> Please provide pointers on what needs to be checked along with
> "unixShmMap()" when the problem occurs.

There is only one call to sqlite3OsShmMap() in wal.c. Try
to stop the debugger when this function is called with iPage==1.

At this point there should be a pointer to the first 32KB of
mapped memory in pWal->apWiData[0]. It should not be full of
zeroes as it is in the dumps that you sent. Check if this is
the case. More specifically - 100 bytes or so into the buffer
should not contain all zeroes. It should contain a bunch of
32-bit non-zero frame numbers.

Then check if it is still the case after the call to
sqlite3OsShmMap() has returned. Either way, you're looking
for the point where that buffer pWal->apWiData[0] is
overwritten with zeroes.

Dan.



> Thanks,
> Praveen
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org
> ] On Behalf Of Dan Kennedy
> Sent: Wednesday, October 06, 2010 7:37 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX
> OS
>
>
> On Oct 6, 2010, at 8:15 PM, Raj, Praveen wrote:
>
>> Hello,
>>
>> I'm new to SQLite, so not sure what is the behavior of these failed
>> tests on Unix/Linux. Are they successfully passing all the WAL
>> journal mode tests on Unix?
>>
>> What could be happening when the shared memory (-shm file) is
>> increased from 32k to 64k?
>> Is a new 64k memory chunk allocated, the old 32k of data copied over,
>> and then the 32k purged?
>
> At some point in the transaction, the first 32KB of shared-memory
> are being zeroed. I'm not sure why this would happen.
>
> The WAL file looks Ok. And the header in the shared-memory looks
> Ok. But most of the first 32KB of shared-memory has been zeroed
> (it is likely that the whole thing was zeroed and then the header
> rewritten).
>
> Maybe something about running the unixShmMap() function on QNX to
> mmap() the second 32KB of space (we use a separate mapping
> for each 32KB block) is zeroing the first. Somehow. Do you have
> a good debugger for the platform?
>
> Dan.
>
>
>
>
>> Any insights on whats happening here would be of great help.
>>
>> Thanks,
>> Praveen
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org
>> ] On Behalf Of Raj, Praveen
>> Sent: Sunday, October 03, 2010 7:42 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX
>> OS
>>
>> Hi,
>>
>> Yes the files sizes of "test.db", "test.db-wal" and "test.db-shm"
>> are 1024, 4333512 and 65536 bytes respectively as specified below.
>>
>> Most of the test scripts are failing when the -shm file size grows
>> from 32kb to 64kb (though the "test.db-wal"
>> file size varies for different test scripts).
>> But there are cases in WAL tests where the size increases to 64kb,
>> but the problem is not seen.
>> Is there any problem while allocating/increasing shared memory, or
>> are the WAL page indexes getting corrupted when the shared memory
>> increases?
>>
>> Please suggest some tips to find out the root cause for this issue.
>>
>> Thanks,
>> Praveen
>> 
>> From: sqlite-users-boun...@sqlite.org [sqlite-users-
>> boun...@sqlite.org] On Behalf Of Dan Kennedy [danielk1...@gmail.com]
>> Sent: Saturday, October 02, 2010 2:39 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] TestFixture 3.7.2 - Some WAL tests fail on QNX
>> OS
>>
>> On Oct 1, 2010, at 2:20 PM, Raj, Praveen wrote:
>>
>>> Hi,
>>>
>>> I'm 

Re: [sqlite] Involving in sqlite development

2010-10-08 Thread Gabor Grothendieck
On Thu, Oct 7, 2010 at 5:26 PM, sjtirtha  wrote:
> Hi,
>
> I'm interested in getting involved in sqlite development.
> How can I start?
>

If I can add to this question: is there a posted wishlist,
todo list, or roadmap?

Something like this:
http://www.h2database.com/html/roadmap.html

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  "science fiction?" was a rhetorically question. I'm only wondering 
about what is the best and fastest way to DELETE a lot of records from 
huge DB. I know and understand physical limit of data moving: anyway for 
now I'm trying to split the BIG DELETE in some smaller DELETE to spread 
the time used. It's the only way I can figure out at the moment.

On 08/10/2010 15.55, Jay A. Kreibich wrote:
> On Fri, Oct 08, 2010 at 09:09:09AM +0200, Michele Pradella scratched on the 
> wall:
>>I was thinking this too, but I'm keeping it as a last resort: my hope is I
>> can delete 5 million records in a few seconds. Science fiction? :)
>Science fiction of the worst B-grade sort.
>
>Think about the numbers.  You're talking about updating a significant
>chunk of a multi-gigabyte file.  The WAL file tells you the changes
>amount to ~600MB of writes.  That's a whole CD's worth of data.  These
>days that might not be much for storage, but it is still a lot of
>data to move around.  Even if your storage system has a continuous,
>sustained write ability of 20MB/sec, that's a half minute.  How fast
>can your disk copy 600MB worth of data?
>
>But you're not just writing.  You're doing a lot of reads from all
>over the file in an attempt to figure out what to modify and write.
>Both the reads and the writes (the integration, at least) are
>scattered and small, so you're not going to get anywhere near the
>sustained performance levels.  10x slower would be extremely good.
>
>Or think of it in more physical numbers... If you're using a single
>vanilla disk, it likely spins at 7200 RPMs.  If it takes five minutes
>to update 5,000,000 records, that's an average of almost 140 records
>per disk revolution.  That's pretty good, considering everything else
>that is going on!
>
>
>
>The only possible way to manipulate that much data in a "few seconds"
>is to load up on RAM, get a real operating system, and throw the
>whole database into memory.  Or spend many, many, many thousands of
>dollars on a very wide disk array with a very large battery-backed
>cache and a huge pipe between your host and the array.
>
>Big storage is cheap.  Fast storage is not.  Don't confuse the two.
>
> -j
>
>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Involving in sqlite development

2010-10-08 Thread Simon Slavin

On 8 Oct 2010, at 2:28pm, sjtirtha wrote:

> I do not really have C programming experience. But I have 8 years of
> programming experience in
> several languages: PHP, JavaScript, Java, C++, ABAP
> And I'm willing to learn C.

Of all of those, if you can run Apache, the easiest one for you to start with 
is probably one of the PHP interfaces.

> Concrete tasks that I have in mind:
> 1. set up a development environment for SQLite (I'm using Ubuntu)
>    Do you have any preference for which editor or IDE I should use?

Try this:

http://php.net/manual/en/book.sqlite3.php

> 2. Start looking for bugs that are easy to fix, but that nobody has time to fix
>    Could you give me some hints about which bugs are easy to fix for a
> starter like me?
> 
> P.S. I just saw, there is developer mailing list for sqlite. Should I
> continue my query there?

The PS is a good way to get an answer to question 2, but you should get plenty
of practice as a user before starting to patch SQLite, since there are many
aspects to it that may look weird until you get used to them.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Compiling dll 3.7.3 issue was [BUG] JOIN subquery in FROM with FTS3 table

2010-10-08 Thread Shopsland gmail
Hi,

Thanks Dan, Richard and Max for the (ultra) fast answer and Dan and
Richard for the fix, because, as Max says, the bug is fixed now in
3.7.3.

But I have a problem compiling sqlite with VC++. I was able to compile
3.7.2 without issues.

With 3.7.3 The linker says:

'error LNK2001: unresolved external symbol sqlite3_rtree_geometry_callback'

Keep up the good work! :-)

Jochi Martinez
www.bfreenews.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Jay A. Kreibich
On Fri, Oct 08, 2010 at 09:09:09AM +0200, Michele Pradella scratched on the 
wall:
>   I was thinking this too, but I'm keeping it as a last resort: my hope is I
> can delete 5 million records in a few seconds. Science fiction? :)

  Science fiction of the worst B-grade sort.

  Think about the numbers.  You're talking about updating a significant
  chunk of a multi-gigabyte file.  The WAL file tells you the changes
  amount to ~600MB of writes.  That's a whole CD's worth of data.  These
  days that might not be much for storage, but it is still a lot of
  data to move around.  Even if your storage system has a continuous,
  sustained write ability of 20MB/sec, that's a half minute.  How fast
  can your disk copy 600MB worth of data?

  But you're not just writing.  You're doing a lot of reads from all
  over the file in an attempt to figure out what to modify and write.
  Both the reads and the writes (the integration, at least) are
  scattered and small, so you're not going to get anywhere near the
  sustained performance levels.  10x slower would be extremely good.

  Or think of it in more physical numbers... If you're using a single
  vanilla disk, it likely spins at 7200 RPMs.  If it takes five minutes
  to update 5,000,000 records, that's an average of almost 140 records
  per disk revolution.  That's pretty good, considering everything else
  that is going on!



  The only possible way to manipulate that much data in a "few seconds"
  is to load up on RAM, get a real operating system, and throw the
  whole database into memory.  Or spend many, many, many thousands of
  dollars on a very wide disk array with a very large battery-backed
  cache and a huge pipe between your host and the array.

  Big storage is cheap.  Fast storage is not.  Don't confuse the two.

   -j


-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Involving in sqlite development

2010-10-08 Thread P Kishor
why don't you start with...

On Fri, Oct 8, 2010 at 8:28 AM, sjtirtha  wrote:
> Hi,
>
> Here is what I want to do:
> 1. I want to learn more about SQL Database implementation


the above. You will be quite busy doing the above. When you are good
at the above, you can definitely contribute back by helping other
newcomers on the list.


> 2. During the learn I want to give back what I get to the community
>
> I do not really have C programming experience. But I have 8 years of
> programming experience in
> several languages: PHP, JavaScript, Java, C++, ABAP
> And I'm willing to learn C.
>
> Concrete tasks that I have in mind:
> 1. set up a development environment for SQLite (I'm using Ubuntu)
>    Do you have any preference for which editor or IDE I should use?
> 2. Start looking for bugs that are easy to fix, but that nobody has time to fix
>    Could you give me some hints about which bugs are easy to fix for a
> starter like me?
>
> P.S. I just saw, there is developer mailing list for sqlite. Should I
> continue my query there?
>
> Regards,
> Steve
>
>
>
> On Thu, Oct 7, 2010 at 4:26 PM, sjtirtha  wrote:
>> Hi,
>>
>> I'm interested in getting involved in sqlite development.
>> How can I start?
>
>
> You really need to explain further and more clearly what you want to
> do before anyone will be able to guide you.
>
>
>>
>> Regards
>> Steve
>> ___
>> sqlite-users mailing list
>> sqlite-users at sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Involving in sqlite development

2010-10-08 Thread sjtirtha
Hi,

Here is what I want to do:
1. I want to learn more about SQL Database implementation
2. During the learn I want to give back what I get to the community

I do not really have C programming experience. But I have 8 years of
programming experience in
several languages: PHP, JavaScript, Java, C++, ABAP
And I'm willing to learn C.

Concrete tasks that I have in mind:
1. set up a development environment for SQLite (I'm using Ubuntu)
Do you have any preference for which editor or IDE I should use?
2. Start looking for bugs that are easy to fix, but that nobody has time to fix
Could you give me some hints about which bugs are easy to fix for a
starter like me?

P.S. I just saw, there is developer mailing list for sqlite. Should I
continue my query there?

Regards,
Steve



On Thu, Oct 7, 2010 at 4:26 PM, sjtirtha  wrote:
> Hi,
>
> I'm interested in getting involved in sqlite development.
> How can I start?


You really need to explain further and more clearly what you want to
do before anyone will be able to guide you.


>
> Regards
> Steve
> ___
> sqlite-users mailing list
> sqlite-users at sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conditional IF in triggers

2010-10-08 Thread Igor Tandetnik
Russell A  wrote:
> This may be a really dumb question, but I've searched and can't find an 
> answer.
> Do SQLite triggers support any conditional expressions, like IF, or is there 
> only the WHEN statement? If the latter, does that
> mean that multiple conditions must be in separate triggers? Any help
> appreciated.
> Stopgap.

What kind of conditions do you have in mind? Could you give an example? There 
is no IF statement in SQLite, but one can get creative with the WHEN clause on 
the trigger itself, and the WHERE clauses of individual statements within the 
trigger.

Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Max Vlasov
On Thu, Oct 7, 2010 at 8:05 PM, Michele Pradella  wrote:

>  Hi all, I have a question about how to speed up a DELETE statement.
> I have a DB of about 3GB: the DB has about 23 million records.
> The DB is indexed by a DateTime column (a 64-bit integer), and
> suppose you want to delete all records before a date.
> Now I'm using a syntax like this (I tried all the statements with the
> sqlite shell):
> suppose you use __int64 DateValue = the date limit you want to delete
>
> DELETE FROM table_name WHERE DateTime < DateValue;
> The statement is trying to delete about 5 million records and it takes
> about 4-5 minutes.
> Is there a way to speed up the DELETE?
>



I think there is some optimization possible in this case, but I don't know
whether it exists in the optimizer currently.
It could work like this: if sqlite first deletes entries from the index
B-tree while appending rowids to an automatic index based on the rowid of the
main table (it is part of the index, obviously), the next delete from the
main B-tree can use this index to delete faster. I tried to determine the
existence of such an optimization from the read and write counts, and it
seems that DELETE does a separate lookup in the main table for every index
entry it finds.
Does something like this exist in sqlite, and if not, would it be worth
implementing?

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Tcl bindins problem, once again

2010-10-08 Thread Richard Hipp
On Fri, Oct 8, 2010 at 2:55 AM, Paweł Salawa  wrote:

>  Hi,
>
> I mentioned the problem before but it seems to be ignored for now. I'd
> really appreciate some response, at least confirmation or denial that
> I'm right about the problem.
>
>
You are correct - duplicate column names mean that the TCL interface can
only access the latter of the two columns.  We do not have any plans to
address this in the near future.  Please make your column names distinct
when using the TCL interface.
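
Applied to the query quoted below, that means adding "AS" aliases; a
sketch with hypothetical alias names:

select a1.txt AS txt_french, a2.txt AS txt_english
  from a a1
  join a a2
 using (ref)
 where a1.lang = 'french'
   and a2.lang = 'english';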


> Quoting from previous thread:
>
> --- QUOTE ---
> select a1.txt, a2.txt
>   from a a1
>   join a a2
>using (ref)
>  where a1.lang = 'french'
>and
>a2.lang = 'english';
> --- END OF QUOTE ---
>
> Note, that the "SELECT" is not known to application - it's custom select
> typed by end-user, so I don't deciede about "AS" aliases for columns.
>
> --- QUOTE ---
> We can either use:
> db eval $query arr {
>   parray arr
> }
>
> But then we would have arr(*) = a.txt a.txt but only one array entry:
> arr(a.txt).
>
> Other way is to use:
>
> set results [db eval $query]
> foreach cellValue $results {
>   puts $cellValue
> }
>
> But then we lose informations about column names in results.
>
> One of possible solutions would be to add some prefix or suffix to array
> indexes.
>
> Other one would be to provide column names in results of [db eval $query],
> using for example flag, like: [db eval $query -withColumnNames] or
> something
> like that. The results would be: [list $columnList $tableDataList]
> --- END OF QUOTE ---
>
> Currently I have implemented a workaround that mixes the two calls. The first is "db eval
> $query arr {...}" to collect the list of result columns, and the second is "set
> results [db eval $query]" to get all the values.
> It works, but it causes an additional query execution. It's a workaround,
> not a solution.
> Shouldn't it be fixed?
>
> Regards,
> Googie
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Black, Michael (IS)
Rather than doing batch deletes, why don't you add a date/time constraint to
your selects?

Then, you can have a thread which does a lazy delete on the old data.  Or, you
can create a trigger on inserts which deletes anything older than N days or
such.

Or... if all you want is an interruptible process, do your delete from a select
statement one record at a time, as sketched below.

It should take about the same amount of time but your users wouldn't notice it.

You're never going to get through 600MB of data and 5 million records in
"seconds".
 
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Michele Pradella
Sent: Fri 10/8/2010 4:06 AM
To: General Discussion of SQLite Database
Subject: EXTERNAL:Re: [sqlite] Speed up DELETE of a lot of records




  I tried 32 and it takes the same time. Moreover, the RAM used is about 500MB.
Even with "PRAGMA synchronous = OFF;" the operation time is the same.
I think the only way for now is to do more DELETEs with fewer elements,
but it's not the best way.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting SQLite 3.7.2 to vxWorks 6.7

2010-10-08 Thread ZhiHua Huang
Hello Scott,

Below is my patch on the latest SQLite 3.7.3. Please note that I have only
verified it with the GCC 4.1.2 compiler in VxWorks 6.6/6.7/6.8 (I have not
verified it with my real target machine yet).

*** sqlite3.c.orig2010-10-08 10:42:22.0 +0800
--- sqlite3.c2010-10-08 19:24:18.390625000 +0800
***
*** 17,22 
--- 17,26 
  ** language. The code for the "sqlite3" command-line shell is also in a
  ** separate file. This file contains only code for the core SQLite
library.
  */
+ #if defined(OS_VXWORKS)
+ #include 
+ #endif /* OS_VXWORKS */
+
  #define SQLITE_CORE 1
  #define SQLITE_AMALGAMATION 1
  #ifndef SQLITE_PRIVATE
***
*** 22795,22801 
--- 22799,22811 
  #include 
  #include 
  #include 
+
+ #if defined(OS_VXWORKS) && defined(_WRS_KERNEL)
+ #include 
+ #else
  #include 
+ #endif /* OS_VXWORKS */
+
  #include 
  #include 

***
*** 24945,24951 
--- 24955,24965 
  /*
   ** Close a file.
   */
+ #if (OS_VXWORKS < 600)
  static int semClose(sqlite3_file *id) {
+ #else
+ static int semClose_native(sqlite3_file *id) {
+ #endif
if( id ){
  unixFile *pFile = (unixFile*)id;
  semUnlock(id, NO_LOCK);
***
*** 25581,25587 
--- 25595,25607 
  }
  return -1;
}
+
+ #if defined(OS_VXWORKS) && defined(_WRS_KERNEL)
+   got = write(id->h, (char *)pBuf, cnt);
+ #else
got = write(id->h, pBuf, cnt);
+ #endif /* OS_VXWORKS */
+
  #endif
TIMER_END;
if( got<0 ){
***
*** 26762,26768 
--- 26782,26792 
semIoFinder,  /* Finder function name */
semIoMethods, /* sqlite3_io_methods object name */
1,/* shared memory is disabled */
+ #if (OS_VXWORKS < 600)
semClose, /* xClose method */
+ #else
+   semClose_native,  /* xClose method */
+ #endif
semLock,  /* xLock method */
semUnlock,/* xUnlock method */
semCheckReservedLock  /* xCheckReservedLock method */
***
*** 27517,27523 
noLock = eType!=SQLITE_OPEN_MAIN_DB;


! #if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE
struct statfs fsInfo;
if( fstatfs(fd, &fsInfo) == -1 ){
  ((unixFile*)pFile)->lastErrno = errno;
--- 27541,27547 
noLock = eType!=SQLITE_OPEN_MAIN_DB;


! #if (defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE) &&
!defined(OS_VXWORKS)
struct statfs fsInfo;
if( fstatfs(fd, &fsInfo) == -1 ){
  ((unixFile*)pFile)->lastErrno = errno;
***
*** 27530,27536 
}
  #endif

! #if SQLITE_ENABLE_LOCKING_STYLE
  #if SQLITE_PREFER_PROXY_LOCKING
isAutoProxy = 1;
  #endif
--- 27554,27560 
}
  #endif

! #if SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORKS
  #if SQLITE_PREFER_PROXY_LOCKING
isAutoProxy = 1;
  #endif
***
*** 27793,27799 
** tests repeatable.
*/
memset(zBuf, 0, nBuf);
! #if !defined(SQLITE_TEST)
{
  int pid, fd;
  fd = open("/dev/urandom", O_RDONLY);
--- 27817,27823 
** tests repeatable.
*/
memset(zBuf, 0, nBuf);
! #if !defined(SQLITE_TEST) && !defined(OS_VXWORKS)
{
  int pid, fd;
  fd = open("/dev/urandom", O_RDONLY);

I've used the definitions below:

EXTRA_DEFINE+=-DOS_VXWORKS_660=660
EXTRA_DEFINE+=-DOS_VXWORKS_670=670
EXTRA_DEFINE+=-DOS_VXWORKS_680=680
EXTRA_DEFINE+=-DOS_VXWORKS=OS_VXWORKS_670
EXTRA_DEFINE+=-DSQLITE_HOMEGROWN_RECURSIVE_MUTEX
EXTRA_DEFINE+=-DSQLITE_ENABLE_LOCKING_STYLE=1
EXTRA_DEFINE+=-DSQLITE_OMIT_LOAD_EXTENSION


Best Regards,

Huang Zhihua

2010/10/8 Scott A Mintz 

> I searched through all the mail archives to see if I could find someone
> that has ported SQLite to vxWorks in kernel mode.  Apparently, there are a
> few folks attempting it.  And fewer succeeding at it.
>
> I found an article published by ZhiHua Huang where he describes the mods
> he made to port SQLite 3.6.23.1 to vxWorks 6.5
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg51531.html
>
> Using that as a starting point, I modified my files but I get the
> following errors:
> "C:/CCViews/Mintz_NetlinxUCS_L7x/NetLinxUCS/SQLite3_LIB/sqlite3.c", line
> 27262: error (dcc:1633): parse error  near 'struct'
> "C:/CCViews/Mintz_NetlinxUCS_L7x/NetLinxUCS/SQLite3_LIB/sqlite3.c", line
> 27262: error (dcc:1206): syntax error
> "C:/CCViews/Mintz_NetlinxUCS_L7x/NetLinxUCS/SQLite3_LIB/sqlite3.c", line
> 27262: fatal error (dcc:1340): can't recover from earlier errors
>
> This is the code it's complaining about.  The "struct statfs fsInfo;" line
> is line 27262.
>
> #ifdef FD_CLOEXEC
>  fcntl(fd, F_SETFD, fcntl(fd, F_GETFD, 0) | FD_CLOEXEC);
> #endif
>
>  noLock = eType!=SQLITE_OPEN_MAIN_DB;
>
>
> #if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE
>  struct statfs fsInfo;
>  if( fstatfs(fd, &fsInfo) == -1 ){
>((unixFile*)pFile)->lastErrno = errno;
>if( dirfd>=0 ) close(dirfd); /* silently leak 

Re: [sqlite] Conditional IF in triggers

2010-10-08 Thread Brian P Curley
On a field-by-field basis:

CASE..WHEN [condition 1]..THEN [result 1]..WHEN [condition n]..THEN 
[result n]..ELSE..END
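
For example, inside a trigger body or any UPDATE (hypothetical table and
column names):

UPDATE stock
   SET status = CASE WHEN qty = 0  THEN 'out'
                     WHEN qty < 10 THEN 'low'
                     ELSE 'ok'
                END;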

- Original Message - 
From: "Russell A" 
To: 
Sent: Friday, October 08, 2010 01:24 a
Subject: [sqlite] Conditional IF in triggers


This may be a really dumb question, but I've searched and can't find an 
answer.
Do SQLite triggers support any conditional expressions, like IF, or is there 
only the WHEN statement? If the latter, does that mean that multiple 
conditions must be in separate triggers?
Any help appreciated.
Stopgap.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  I tried 32 and it takes the same time. Moreover, the RAM used is about 500MB.
Even with "PRAGMA synchronous = OFF;" the operation time is the same.
I think the only way for now is to do more DELETEs with fewer elements,
but it's not the best way.


On 08/10/2010 10.41, Marcus Grimm wrote:
> Michele Pradella wrote:
>>ok I'll try with 3.7.3
>> DELETE is a little bit faster, and the -wal is reintegrated when I close
>> the connection.
>> Changing cache_size (I tried 1) DELETE takes the same time to complete.
> 1 doesn't sound very big; I used to define it to e.g. 32 instead when
> working with a GB-size DB (mainly for index creation and "PRAGMA
> integrity_check;")
>
> Maybe playing with temporarily setting PRAGMA synchronous = OFF; would be
> useful,
> although I think the wal mode should already avoid too much syncing.
>
> Marcus
>
>
>> It was my fault: to close the shell connection I used Ctrl+C, but
>> this leaves the -wal file. If I close with .quit, the -wal file is
>> reintegrated.
>> I thought that Ctrl+C was like a ".quit" but it's not.
>> Anyway, if I close the DB connection with Ctrl+C and then reopen the
>> connection and close it with .quit, the -wal file is not reintegrated.
>>
>> On 08/10/2010 9.56, Michele Pradella wrote:
>>> I'll try increasing the cache size, and I'll try the operation on my Db with
>>> 3.7.3; anyway, I already ported the fix for the WAL issue from a recent
>>> snapshot. I'll try and let you know.
>>>
>>> On 08/10/2010 9.52, Marcus Grimm wrote:
 Michele Pradella wrote:
>  As I explained in a previous email, I think that recreating the index is
> the slowest operation I can do on my DB.
> Anyway, in my first email I asked another question about the -wal file.
> Trying to DELETE the (5 million) records with the shell SQLITE interface,
> I can see the -wal grow to 600MB. I cannot understand why the -wal
> is no longer reintegrated, and even when I close the connection (closing
> the shell) the -wal file still exists.
> Trying for example the statement "create table new as select * from
> current where condition = keep;", the -wal file grows to 1.5GB and
> then the same: after closing the shell the -wal remains.
>
> Moreover, the operation above ("create new") terminates with "Error:
> disk I/O error".
> The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
> it is internal.
 You may try the new 3.7.3 version; 3.7.2 doesn't operate
 very well on win32 when doing huge transactions in wal mode.

 Second, when running sqlite with a gigabyte-sized database it is useful
 to heavily increase the cache size; not sure if that helps for delete
 statements though, but it does in particular when creating indices.

 Marcus

> On 07/10/2010 20.38, Petite Abeille wrote:
>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>
>> I have a DB of about 3GB: the DB has about 23 million records. [..]
>> the statement is trying to delete about 5 million records and it takes
>> about 4-5 minutes.
>> Is there a way to speed up the DELETE?
>> Considering that you want to delete about a quarter of the records, 
>> perhaps it would be more efficient to recreate that table altogether, no?
>>
>> Pseudocode:
>>
>> create table new as select * from current where condition = keep;
>> create index on new;
>> drop table current;
>> alter table rename new to current;
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  DELETE on the PrimaryKey instead of the DateTime index takes the same time.

On 08/10/2010 10.30, Michele Pradella wrote:
>ok I'll try with 3.7.3
> DELETE is a little bit faster, and the -wal is reintegrated when I close
> the connection.
> Changing cache_size (I tried 1) DELETE takes the same time to complete.
> It was my fault: to close the shell connection I used Ctrl+C, but
> this leaves the -wal file. If I close with .quit, the -wal file is
> reintegrated.
> I thought that Ctrl+C was like a ".quit" but it's not.
> Anyway, if I close the DB connection with Ctrl+C and then reopen the
> connection and close it with .quit, the -wal file is not reintegrated.
>
> On 08/10/2010 9.56, Michele Pradella wrote:
>> I'll try increasing the cache size, and I'll try the operation on my Db with
>> 3.7.3; anyway, I already ported the fix for the WAL issue from a recent
>> snapshot. I'll try and let you know.
>>
>> On 08/10/2010 9.52, Marcus Grimm wrote:
>>> Michele Pradella wrote:
  As I explained in a previous email, I think that recreating the index is
 the slowest operation I can do on my DB.
 Anyway, in my first email I asked another question about the -wal file.
 Trying to DELETE the (5 million) records with the shell SQLITE interface,
 I can see the -wal grow to 600MB. I cannot understand why the -wal
 is no longer reintegrated, and even when I close the connection (closing
 the shell) the -wal file still exists.
 Trying for example the statement "create table new as select * from
 current where condition = keep;", the -wal file grows to 1.5GB and
 then the same: after closing the shell the -wal remains.

 Moreover, the operation above ("create new") terminates with "Error:
 disk I/O error".
 The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
 it is internal.
>>> You may try with the new 3.7.3 version; 3.7.2 doesn't operate
>>> very well on win32 when doing huge transactions in WAL mode.
>>>
>>> Second, when running SQLite with a gigabyte-sized database it is useful
>>> to heavily increase the cache size; I'm not sure if that helps for DELETE
>>> statements, but it does in particular when creating indices.
>>>
>>> Marcus
>>>
 On 07/10/2010 20.38, Petite Abeille wrote:
> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>
>> I have a DB of about 3GB: the DB has about 23 million records. [..]
>> the statement is trying to delete about 5 million records and it takes
>> about 4-5 minutes.
>> Is there a way to speed up the DELETE?
> Considering that you want to delete about a quarter of the records, 
> perhaps it would be more efficient to recreate that table altogether, no?
>
> Pseudocode:
>
> create table new as select * from current where condition = keep;
> create index on new;
> drop table current;
> alter table rename new to current;
>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Marcus Grimm

Michele Pradella wrote:
> OK, I'll try with 3.7.3.
> DELETE is a little bit faster, and the -wal is reintegrated when I close
> the connection.
> Changing cache_size (I tried 1), DELETE takes the same time to complete.

1 doesn't sound very big; I used to set it to e.g. 32 instead when
working with a GB-sized DB (mainly for index creation and "PRAGMA
integrity_check;").

Maybe temporarily setting PRAGMA synchronous = OFF; would be useful,
although I think WAL mode should already avoid too much syncing.
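For reference, a minimal sketch of how these two settings can be applied in the
sqlite3 shell around a big delete; the cache value and the cutoff literal below
are made-up examples, not figures from this thread:

PRAGMA cache_size = 100000;    -- page-cache size in pages; the default is far smaller
PRAGMA synchronous = OFF;      -- temporary: fewer fsyncs at the cost of durability
BEGIN;
DELETE FROM table_name WHERE DateTime < 1286000000000000;
COMMIT;
PRAGMA synchronous = FULL;     -- restore the default once the delete is done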

Marcus


> It was my fault: to close the shell connection I used Ctrl+C, but
> this leaves the -wal file. If I close with .quit, the -wal file is
> reintegrated.
> I thought that Ctrl+C was like ".quit", but it's not.
> Anyway, if I close the DB connection with Ctrl+C and then reopen the
> connection and close it with .quit, the -wal file is not reintegrated.
> 
> On 08/10/2010 9.56, Michele Pradella wrote:
>> I'll try to increase the cache size, and I'll try the operations on my DB with
>> 3.7.3; anyway, I already ported the fix for the WAL issue from a recent
>> snapshot. I'll try and let you know.
>>
>> On 08/10/2010 9.52, Marcus Grimm wrote:
>>> Michele Pradella wrote:
 As I explained in a previous email, I think that recreating the index is
 the slowest operation I can do on my DB.
 Anyway, in my first email I asked another question, about the -wal file.
 Trying to DELETE the (5 million) records with the SQLite shell interface
 I can see the -wal grow to 600MB. I cannot understand why the -wal
 is no longer reintegrated, and even when I close the connection (closing
 the shell) the -wal file still exists.
 Trying for example the statement "create table new as select * from
 current where condition = keep;", the -wal file grows to 1.5GB and
 then the same: after closing the shell the -wal remains.

 Moreover, the operation above ("create new") terminates with "Error:
 disk I/O error".
 The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
 it is internal.
>>> You may try with the new 3.7.3 version; 3.7.2 doesn't operate
>>> very well on win32 when doing huge transactions in WAL mode.
>>>
>>> Second, when running SQLite with a gigabyte-sized database it is useful
>>> to heavily increase the cache size; I'm not sure if that helps for DELETE
>>> statements, but it does in particular when creating indices.
>>>
>>> Marcus
>>>
 On 07/10/2010 20.38, Petite Abeille wrote:
> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>
>> I have a DB of about 3GB: the DB has about 23 million records. [..]
>> the statement is trying to delete about 5 million records and it takes
>> about 4-5 minutes.
>> Is there a way to speed up the DELETE?
> Considering that you want to delete about a quarter of the records, 
> perhaps it would be more efficient to recreate that table altogether, no?
>
> Pseudocode:
>
> create table new as select * from current where condition = keep;
> create index on new;
> drop table current;
> alter table rename new to current;
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  OK, I'll try with 3.7.3.
DELETE is a little bit faster, and the -wal is reintegrated when I close
the connection.
Changing cache_size (I tried 1), DELETE takes the same time to complete.
It was my fault: to close the shell connection I used Ctrl+C, but
this leaves the -wal file. If I close with .quit, the -wal file is
reintegrated.
I thought that Ctrl+C was like ".quit", but it's not.
Anyway, if I close the DB connection with Ctrl+C and then reopen the
connection and close it with .quit, the -wal file is not reintegrated.
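As an aside, a leftover -wal file can normally be folded back into the database
by reopening it and checkpointing explicitly before a clean exit; a minimal
sketch, with a made-up database name:

-- $ sqlite3 records.db
PRAGMA wal_checkpoint;   -- copies the -wal contents back into the main database file
.quit                    -- a clean exit of the last connection then removes the -wal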

On 08/10/2010 9.56, Michele Pradella wrote:
>I'll try to increase the cache size, and I'll try the operations on my DB with
> 3.7.3; anyway, I already ported the fix for the WAL issue from a recent
> snapshot. I'll try and let you know.
>
> On 08/10/2010 9.52, Marcus Grimm wrote:
>> Michele Pradella wrote:
>>> As I explained in a previous email, I think that recreating the index is
>>> the slowest operation I can do on my DB.
>>> Anyway, in my first email I asked another question, about the -wal file.
>>> Trying to DELETE the (5 million) records with the SQLite shell interface
>>> I can see the -wal grow to 600MB. I cannot understand why the -wal
>>> is no longer reintegrated, and even when I close the connection (closing
>>> the shell) the -wal file still exists.
>>> Trying for example the statement "create table new as select * from
>>> current where condition = keep;", the -wal file grows to 1.5GB and
>>> then the same: after closing the shell the -wal remains.
>>>
>>> Moreover, the operation above ("create new") terminates with "Error:
>>> disk I/O error".
>>> The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
>>> it is internal.
>> You may try with the new 3.7.3 version; 3.7.2 doesn't operate
>> very well on win32 when doing huge transactions in WAL mode.
>>
>> Second, when running SQLite with a gigabyte-sized database it is useful
>> to heavily increase the cache size; I'm not sure if that helps for DELETE
>> statements, but it does in particular when creating indices.
>>
>> Marcus
>>
>>> On 07/10/2010 20.38, Petite Abeille wrote:
 On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:

> I have a DB of about 3GB: the DB has about 23 million records. [..]
> the statement is trying to delete about 5 million records and it takes
> about 4-5 minutes.
> Is there a way to speed up the DELETE?
 Considering that you want to delete about a quarter of the records, 
 perhaps it would be more efficient to recreate that table altogether, no?

 Pseudocode:

 create table new as select * from current where condition = keep;
 create index on new;
 drop table current;
 alter table rename new to current;



-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how do I load a csv file or what is the simplest text file to load into sqlite3?

2010-10-08 Thread Oliver Peters
joshua wojnas  writes:

> 
> how do I load a csv file or what is the simplest text file to load
> into sqlite3?


1. sqlite3 yoursqlitedatabasename
2. .import yourtextfile yourtablename (see .help for documentation)

Ready.


But what you need is:

- an existing sqlite db (step 1 already creates the db)
- the table in the sqlite db where you want to insert your data (you need
  to know how to plan a "good" db and the tool to do it: SQL)
- an adequate structure for yourtextfile (a reliable delimiter)
- no header in yourtextfile
- the delimiter set in sqlite3 before importing (e.g.
  .separator \t for tabs;
  the default separator in sqlite is |)
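A minimal end-to-end sketch of the steps above; the database, table, and file
names are made up:

-- $ sqlite3 mydata.db
CREATE TABLE people(name TEXT, age INTEGER);
.separator ","
.import people.csv people
SELECT count(*) FROM people;   -- sanity check: should equal the line count of the file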


> what would be the best format to use? I use TextPad and OpenOffice
> Calc, and also OpenOffice Base.


The best format is a plain text file.

Last hint: read the documentation.

Oliver




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  I'll try to increase the cache size, and I'll try the operations on my DB with
3.7.3; anyway, I already ported the fix for the WAL issue from a recent
snapshot. I'll try and let you know.

On 08/10/2010 9.52, Marcus Grimm wrote:
> Michele Pradella wrote:
>> As I explained in a previous email, I think that recreating the index is
>> the slowest operation I can do on my DB.
>> Anyway, in my first email I asked another question, about the -wal file.
>> Trying to DELETE the (5 million) records with the SQLite shell interface
>> I can see the -wal grow to 600MB. I cannot understand why the -wal
>> is no longer reintegrated, and even when I close the connection (closing
>> the shell) the -wal file still exists.
>> Trying for example the statement "create table new as select * from
>> current where condition = keep;", the -wal file grows to 1.5GB and
>> then the same: after closing the shell the -wal remains.
>>
>> Moreover, the operation above ("create new") terminates with "Error:
>> disk I/O error".
>> The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
>> it is internal.
> You may try with the new 3.7.3 version; 3.7.2 doesn't operate
> very well on win32 when doing huge transactions in WAL mode.
>
> Second, when running SQLite with a gigabyte-sized database it is useful
> to heavily increase the cache size; I'm not sure if that helps for DELETE
> statements, but it does in particular when creating indices.
>
> Marcus
>
>> On 07/10/2010 20.38, Petite Abeille wrote:
>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>
 I have a DB of about 3GB: the DB has about 23 million records. [..]
 the statement is trying to delete about 5 million records and it takes
 about 4-5 minutes.
 Is there a way to speed up the DELETE?
>>> Considering that you want to delete about a quarter of the records, perhaps 
>>> it would be more efficient to recreate that table altogether, no?
>>>
>>> Pseudocode:
>>>
>>> create table new as select * from current where condition = keep;
>>> create index on new;
>>> drop table current;
>>> alter table rename new to current;
>>>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Marcus Grimm

Michele Pradella wrote:
>   As I explained in a previous email, I think that recreating the index is
> the slowest operation I can do on my DB.
> Anyway, in my first email I asked another question, about the -wal file.
> Trying to DELETE the (5 million) records with the SQLite shell interface
> I can see the -wal grow to 600MB. I cannot understand why the -wal
> is no longer reintegrated, and even when I close the connection (closing
> the shell) the -wal file still exists.
> Trying for example the statement "create table new as select * from
> current where condition = keep;", the -wal file grows to 1.5GB and
> then the same: after closing the shell the -wal remains.
>
> Moreover, the operation above ("create new") terminates with "Error:
> disk I/O error".
> The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
> it is internal.

You may try with the new 3.7.3 version; 3.7.2 doesn't operate
very well on win32 when doing huge transactions in WAL mode.

Second, when running SQLite with a gigabyte-sized database it is useful
to heavily increase the cache size; I'm not sure if that helps for DELETE
statements, but it does in particular when creating indices.
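A sketch of the kind of session this advice suggests; the table name, index
name, and cache value are illustrative only:

PRAGMA cache_size = 100000;   -- measured in pages; the default is much smaller
CREATE INDEX idx_datetime ON table_name(DateTime);
PRAGMA integrity_check;       -- also benefits from the larger cache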

Marcus

> 
> On 07/10/2010 20.38, Petite Abeille wrote:
>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>
>>> I have a DB of about 3GB: the DB has about 23 million records. [..]
>>> the statement is trying to delete about 5 million records and it takes
>>> about 4-5 minutes.
>>> Is there a way to speed up the DELETE?
>> Considering that you want to delete about a quarter of the records, perhaps 
>> it would be more efficient to recreate that table altogether, no?
>>
>> Pseudocode:
>>
>> create table new as select * from current where condition = keep;
>> create index on new;
>> drop table current;
>> alter table rename new to current;
>>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  I was thinking this too, but I'm keeping it as a last resort: my hope is that I
can delete 5 million records in a few seconds. Science fiction? :)

On 08/10/2010 9.00, Aldes Rossi wrote:
> On 10/08/2010 08:30 AM, Michele Pradella wrote:
>> I don't know if it could be faster to do more DELETEs of fewer records, or
>> perhaps to make a VIEW and then delete all the records matching the
>> VIEW. I'm thinking about this to find the fastest solution, because the
>> problem is that while SQLite is deleting the records I obviously cannot
>> access the DB to insert new records, and all operations have to wait
>> for the delete to complete... and 4-5 minutes is too much time to wait.
> Maybe splitting this operation into many DELETEs of fewer records lengthens
> the total time, but it results in shorter service interruptions, and may in
> fact be more acceptable.
>
> Aldes Rossi
>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Aldes Rossi
  On 10/08/2010 08:30 AM, Michele Pradella wrote:
> I don't know if it could be faster to do more DELETEs of fewer records, or
> perhaps to make a VIEW and then delete all the records matching the
> VIEW. I'm thinking about this to find the fastest solution, because the
> problem is that while SQLite is deleting the records I obviously cannot
> access the DB to insert new records, and all operations have to wait
> for the delete to complete... and 4-5 minutes is too much time to wait.
Maybe splitting this operation into many DELETEs of fewer records lengthens the
total time, but it results in shorter service interruptions, and may in fact
be more acceptable.
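One way to sketch that in SQL, using the thread's table and column names; the
cutoff literal and batch size here are arbitrary examples:

-- run repeatedly until changes() reports 0; other connections
-- then only wait for one batch at a time
DELETE FROM table_name
 WHERE rowid IN (SELECT rowid FROM table_name
                  WHERE DateTime < 1286000000000000
                  LIMIT 10000);
SELECT changes();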

Aldes Rossi

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Tcl bindings problem, once again

2010-10-08 Thread Paweł Salawa
  Hi,

I mentioned the problem before, but it seems to have been ignored so far. I'd
really appreciate some response, at least a confirmation or denial that
I'm right about the problem.

Quoting from previous thread:

--- QUOTE ---
select a1.txt, a2.txt
  from a a1
  join a a2 using (ref)
 where a1.lang = 'french'
   and a2.lang = 'english';
--- END OF QUOTE ---

Note that the SELECT is not known to the application - it's a custom select
typed by the end user, so I don't decide the "AS" aliases for the columns.

--- QUOTE ---
We can either use:
db eval $query arr {
   parray arr
}

But then we would have arr(*) = a.txt a.txt but only one array entry: 
arr(a.txt).

The other way is to use:

set results [db eval $query]
foreach cellValue $results {
   puts $cellValue
}

But then we lose information about the column names in the results.

One possible solution would be to add some prefix or suffix to the array
indexes.

Another one would be to provide the column names in the results of
[db eval $query], using for example a flag like [db eval $query -withColumnNames]
or something like that. The results would be: [list $columnList $tableDataList]
--- END OF QUOTE ---

Currently I have implemented a workaround that mixes two calls: first
"db eval $query arr {...}" to collect the list of result columns, and second
"set results [db eval $query]" to get all the values.
It works, but it costs an additional query execution. It's a workaround,
not a solution.
Shouldn't this be fixed?

Regards,
Googie

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  As I explained in a previous email, I think that recreating the index is
the slowest operation I can do on my DB.
Anyway, in my first email I asked another question, about the -wal file.
Trying to DELETE the (5 million) records with the SQLite shell interface
I can see the -wal grow to 600MB. I cannot understand why the -wal
is no longer reintegrated, and even when I close the connection (closing
the shell) the -wal file still exists.
Trying for example the statement "create table new as select * from
current where condition = keep;", the -wal file grows to 1.5GB and
then the same: after closing the shell the -wal remains.

Moreover, the operation above ("create new") terminates with "Error:
disk I/O error".
The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
it is internal.

On 07/10/2010 20.38, Petite Abeille wrote:
> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>
>> I have a DB of about 3GB: the DB has about 23 million records. [..]
>> the statement is trying to delete about 5 million records and it takes
>> about 4-5 minutes.
>> Is there a way to speed up the DELETE?
> Considering that you want to delete about a quarter of the records, perhaps 
> it would be more efficient to recreate that table altogether, no?
>
> Pseudocode:
>
> create table new as select * from current where condition = keep;
> create index on new;
> drop table current;
> alter table rename new to current;
>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  I think that if the total is 23 million records and there are 5 million to
delete, creating a temp table of 23-5=18 million records is slower than
directly deleting the 5 million records.
Anyway, the table has only one index, on the DateTime column, which is
INTEGER.
Dropping all indexes, deleting, and then recreating the indexes is the slowest
operation I can do on the DB, so I cannot use it.
I can try to DELETE on the primary key instead of on DateTime, but I think
that the slowest operation is the deletion of the records, not the
selection. Furthermore, I cannot be sure that the primary key is always
incremental; it usually should be, but if, for example, the primary key
wraps, relying on it would not be the right thing to do.
I don't know if it could be faster to do more DELETEs of fewer records, or
perhaps to make a VIEW and then delete all the records matching the
VIEW. I'm thinking about this to find the fastest solution, because the
problem is that while SQLite is deleting the records I obviously cannot
access the DB to insert new records, and all operations have to wait
for the delete to complete... and 4-5 minutes is too much time to wait.


On 07/10/2010 19.14, P Kishor wrote:
> On Thu, Oct 7, 2010 at 11:05 AM, Michele Pradella
>   wrote:
>>   Hi all, I have a question about how to speed up a DELETE statement.
>> I have a DB of about 3GB: the DB has about 23 million records.
>> The DB is indexed by a DateTime column (a 64-bit integer), and
>> suppose you want to delete all records before a date.
>> Now I'm using a syntax like this (I tried all the statements with the
>> sqlite shell):
>> suppose __int64 DateValue = the date limit before which you want to delete
>>
>> DELETE FROM table_name WHERE DateTime < DateValue;
> What is the speed of
>
> SELECT * FROM table WHERE DateTime >= DateValue;
>
> If the above speed is acceptable, then try the following
>
> CREATE TABLE tmp AS SELECT * FROM table WHERE DateTime >= DateValue;
> DROP TABLE table;
> ALTER TABLE tmp RENAME TO table;
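The recipe quoted above, written out as concrete SQLite statements; the
transaction wrapper and the index recreation step are editorial assumptions:

BEGIN;
-- DateValue below stands for the caller's cutoff timestamp
CREATE TABLE tmp AS SELECT * FROM table_name WHERE DateTime >= DateValue;
DROP TABLE table_name;
ALTER TABLE tmp RENAME TO table_name;
CREATE INDEX idx_datetime ON table_name(DateTime);   -- recreate the DateTime index
COMMIT;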
>
>
>> the statement is trying to delete about 5 million records and it takes
>> about 4-5 minutes.
>> Is there a way to speed up the DELETE? I already tried to put the
>> DELETE statement between BEGIN; and COMMIT;, but with the same result.
>> After the delete completes I have a -wal file of about 600MB: this file
>> is not deleted even if I disconnect from the database.
>> Is that the right behavior? I thought that when the last DB connection
>> terminates, the -wal file is reintegrated into the DB, but it's not.
>>


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users