Re: [sqlite] Data structure

2007-04-11 Thread John Stanton
I used an approach similar to the Bloom Filter for data retrieval.  It 
could be very fast at retrieving substrings from large data sets but was 
fairly complex to implement.
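For readers unfamiliar with the technique: a Bloom filter answers "possibly present" or "definitely absent" using a bit array probed by several hash functions. A minimal sketch follows (the class name, sizes, and the use of salted SHA-256 as the hash family are illustrative choices, not anything described in this thread):

```python
import hashlib

class BloomFilter:
    """Probabilistic set: may report false positives, never false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # Every probed bit must be set; one clear bit proves absence.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

Membership tests cost k hash computations regardless of set size; the trade-off is occasional false positives, which is why it does not fit every retrieval workload.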


I would not go with that approach unless you had some very broad 
retrieval requirements and a very large data set.


Lloyd wrote:
I was just wondering what the odds were of doing a better job than the 
filing system pros, how much time/code that would take on your part and 
how much that time would cost versus speccing a bigger/faster machine.


Martin



I was not fully clear. I just want my program to run as efficiently and as
fast as possible, even on a low-end user system. So I am searching for the
best data structure.


I have heard about bloom filter, but in my case it is not applicable.
The final point is, I am just searching for a data structure which is
the result of lateral thinking by experts ;) (like bloom filter)

Thanks,
  Lloyd


__
Scanned and protected by Email scanner

-
To unsubscribe, send email to [EMAIL PROTECTED]
-







Re: [sqlite] Data structure

2007-04-11 Thread John Stanton

Lloyd wrote:

On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:


I think, looking from Lloyd's email address, (s)he might be limited to
what CDAC, Trivandrum might be providing its users.

Lloyd, you already know what size your data sets are. Esp. if it
doesn't change, putting the entire dataset in RAM is the best option.
If you don't need SQL capabilities, you probably can just use
something like BerkeleyDB or DBD::Deep (if using Perl), and that will
be plenty fast. Of course, if it can't be done then it can't be done,
and you will have to recommend more RAM for the machines (the CPU
seems fast enough, just the memory may be a bottleneck).



Sorry, I am not talking about the limitations of the system on our side,
but of the end user who uses our software. I want the tool to run at its
best on a low-end machine as well.


I don't want the capabilities of a database here. I just want to store
data, check for presence, and remove entries when they are no longer needed.

Surely I will check out BerkeleyDB. The data set must be in RAM, because
its total size is very small (a few megabytes). I just want to speed up
the search, which is done millions of times.

Thanks,

 LLoyd

You might discover that you can craft a very effective memory resident 
storage system using a compression system like Huffman Encoding and an 
index method appropriate to the key you are using for retrieval.  That 
could work very well in an embedded system, have a small footprint in 
data and code and be very fast.
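To make the compression half of that suggestion concrete, here is a hedged sketch of building a Huffman code table (the function name and the heap-based tie-breaking scheme are my own choices, not a specific embedded implementation):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code table: frequent symbols get shorter codes."""
    freq = Counter(text)
    if len(freq) == 1:                 # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (total frequency, unique tiebreaker, {symbol: code}).
    heap = [(count, i, {sym: ""}) for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees; prepend a branch bit to each code.
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (c1 + c2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Pairing such a table with a compact index over the retrieval key gives the small data-and-code footprint described above.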





Re: [sqlite] Data structure

2007-04-12 Thread John Stanton
We use a very simple data retrieval method for smallish datasets.  The 
data is just stored in memory or as a memory mapped file and a 
sequential search used.  It sounds crude but when you use a fast search 
algorithm like Boyer-Moore it outperforms index methods up to a 
surprisingly large number of records.  As you can imagine the code 
footprint is minuscule and if you add regular expression logic you can 
realize very intricate search patterns.


We use the method in conjunction with a database to achieve an enormous 
speed increase on "LIKE" type searches.  Grep a few files to get a feel 
for the performance.
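A sketch of the skip-ahead idea, using the Boyer-Moore-Horspool variant (simpler than full Boyer-Moore but with the same character-skipping behavior; the function name is mine):

```python
def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern, or -1.

    Boyer-Moore-Horspool: on a mismatch, the window jumps ahead using a
    bad-character table, so most text positions are never examined at all.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift distance keyed by the character under the window's last slot.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = m - 1
    while i < n:
        j = 0
        # Compare backwards from the end of the current window.
        while j < m and text[i - j] == pattern[m - 1 - j]:
            j += 1
        if j == m:
            return i - m + 1
        i += shift.get(text[i], m)
    return -1
```

The longer the pattern, the larger the average skip, which is why this style of scan stays competitive with indexes on surprisingly large datasets.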


Another method which works well for memory resident storage is to 
implement self balancing AVL trees.  The code is simple and the 
performance lightning fast.  With a little ingenuity you can use disk 
storage.  Mini Sql (MSql) is a good example of how this can be effective.
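A minimal self-balancing AVL tree supporting insert and membership tests might look like the following (a sketch, not MSql's actual implementation; all names are mine):

```python
class AVLNode:
    __slots__ = ("key", "left", "right", "height")
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def _h(n):
    return n.height if n else 0

def _rotate_left(n):
    r = n.right
    n.right, r.left = r.left, n
    n.height = 1 + max(_h(n.left), _h(n.right))
    r.height = 1 + max(_h(r.left), _h(r.right))
    return r

def _rotate_right(n):
    l = n.left
    n.left, l.right = l.right, n
    n.height = 1 + max(_h(n.left), _h(n.right))
    l.height = 1 + max(_h(l.left), _h(l.right))
    return l

def _rebalance(n):
    n.height = 1 + max(_h(n.left), _h(n.right))
    balance = _h(n.left) - _h(n.right)
    if balance > 1:                               # left-heavy
        if _h(n.left.left) < _h(n.left.right):    # left-right case
            n.left = _rotate_left(n.left)
        return _rotate_right(n)
    if balance < -1:                              # right-heavy
        if _h(n.right.right) < _h(n.right.left):  # right-left case
            n.right = _rotate_right(n.right)
        return _rotate_left(n)
    return n

def insert(root, key):
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return _rebalance(root)

def contains(root, key):
    while root:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Lookups are O(log n) with a guaranteed height bound, which is what makes the performance so predictable for memory-resident storage.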


As Einstein said - "Make it as simple as possible, but not too simple". 
   Applying Occam's Razor can turn bloated solutions into more 
effective lean ones.  Typical solutions come in two sizes just like Army 
boots - too big and too small.


Lloyd wrote:

Would anybody suggest a good tool for performance measurement (on
Linux) ?

On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:


You might discover that you can craft a very effective memory
resident 
storage system using a compression system like Huffman Encoding and
an 
index method appropriate to the key you are using for retrieval.
That 
could work very well in an embedded system, have a small footprint in 
data and code and be very fast.







Re: [sqlite] Performance analysis of SQLite statements

2007-04-12 Thread John Stanton

What do you want to measure?

Jonas Sandman wrote:

Anyone know of a good benchmarking tool (preferably free or cheap) which can
be used to benchmark C/C++ code on Windows?

Best regards,
Jonas

On 4/12/07, Samuel R. Neff <[EMAIL PROTECTED]> wrote:




Are there any tools to help analyze the performance of components with a
particular SQLite statement?  I'm aware of the EXPLAIN option which can
show
what VDBE code was used to execute a statement, but AFAIK there is no way
to
tell the time each step took.

Basically I want to know how long the different components of a single SQL
statement took relative to the whole statement.  Provide for more
fine-grained analysis than just comparing two SQL statements based on
total
execution time.
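As a concrete starting point, the EXPLAIN output can be read programmatically (a sketch using Python's sqlite3 module; as noted above, it lists which VDBE opcodes will run, but SQLite exposes no per-opcode timing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

# EXPLAIN returns one row per VDBE opcode the statement would execute.
opcodes = conn.execute("EXPLAIN SELECT a FROM t WHERE b = 'x'").fetchall()
for addr, opcode, *operands in opcodes:
    print(addr, opcode)
```

Comparing the opcode listings of two candidate statements at least shows *where* the work differs, even without per-step timings.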

Thanks,

Sam


---
We're Hiring! Seeking a passionate developer to join our team building
products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 11, 2007 8:02 PM
To: [EMAIL PROTECTED]
Subject: Re: [sqlite] Select columns & performance

...

Do a lot of benchmarking.






Re: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-13 Thread John Stanton
Are you sure that you are not exceeding the capacity of the flash memory 
to handle writes?


Joel Cochran wrote:

Hi folks,

I had sent this message out a couple of weeks ago, and I am still searching
for a solution.  I looked at the application design and made a modest
change: I now create a single SQLiteConnection and use it from start to
finish of the application.  Previously I was creating multiple connections
(and disconnecting, disposing them and then running GC), but all that has
been replaced with the single connection approach.

At first I thought this had solved the problem, because all in-house testing
runs beautifully.  However, as soon as the device is sent to the field, the
error starts again.  Unfortunately, it means that I have never been able to
catch this in debug.  I did, however, change the error reporting a little
and got some more information.  The SQLiteException I am now getting
includes this information:

Insertion failed because the database is full
database or disk is full

at System.Data.SQLite.SQLite3.Reset()
at System.Data.SQLite.SQLite3.Step()
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader.ctor()
at System.Data.SQLite.SQLiteCommand.ExecuteReader()
at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
at ... etc etc


I downloaded the C source and tried to read through it, but honestly I am
not a C programmer and didn't get very far.

Other than a possible bug, the only thing I can think of is that there is
something fundamentally wrong with my architecture.  What I can't get, is
why the message has anything to do with inserting.  While there are several
actions in my product that Insert into the database, the error never occurs
at those points.  If I understood what was being inserted, perhaps I could
figure out a solution.

If anyone can help, I'd greatly appreciate it.  The original message is
included below this one.

TIA,

Joel


-- Original message --
Hi all,

I have searched the web, the newsgroups, and the archives, but all I can
find is one reference to someone having this trouble with Python and a bunch
of references that list this as predefined error #13
according to http://www.sqlite.org/capi3.html.

What I can't find is any help in determining why a program might receive
this error.  The database is only 203KB and has 6 tables (maybe 1,000 rows
in all the tables combined) running off a 1GB CompactFlash card on a Windows
Mobile 5 device with 256MB of onboard RAM: 50MB of that is dedicated to
programs and the rest is storage.  The only app running on the device is the
one in question.  The error occurs at seemingly random times and forces the
user to end the application and start over.

At this point I don't know much else: the user is in the field and I will
have the device back late this afternoon for debugging.  I was hoping to
have a head start before I get the device, because as it is I have no idea
what the cause could be.
-- End Original message --







Re: AW: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-13 Thread John Stanton
If something passes all tests but fails later then it is very likely 
failing where testing was not performed, such as the hardware under 
other conditions.


Michael Ruck wrote:

Hi,

Is this the only device seeing this error or are *all* devices seeing this
error? Have you checked the CF card? Maybe it's just the card, which is
corrupt and you're hitting these bugs at points, where the file system is
hitting a bad sector.

Is this running in a transaction?

Mike 


-Ursprüngliche Nachricht-
Von: Joel Cochran [mailto:[EMAIL PROTECTED] 
Gesendet: Freitag, 13. April 2007 17:46

An: [EMAIL PROTECTED]
Betreff: [sqlite] Still getting "Insertion failed because database is full."
errors

Hi folks,

I had sent this message out a couple of weeks ago, and I am still searching
for a solution.  I looked at the application design and made a modest
change: I now create a single SQLiteConnection and use it from start to
finish of the application.  Previously I was creating multiple connections
(and disconnecting, disposing them and then running GC), but all that has
been replaced with the single connection approach.

At first I thought this had solved the problem, because all in house testing
runs beautifully.  However, as soon as the device is sent to the field, the
error starts again.  Unfortunately, it means that I have never been able to
catch this in debug.  I did, however, change the error reporting a little
and got some more information.  The SQLiteException I am now getting
includes this information:

Insertion failed because the database is full database or disk is full

at System.Data.SQLite.SQLite3.Reset()
at System.Data.SQLite.SQLite3.Step()
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader.ctor()
at System.Data.SQLite.SQLiteCommand.ExecuteReader()
at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
at ... etc etc


I downloaded the C source and tried to read through it, but honestly I am
not a C programmer and didn't get very far.

Other than a possible bug, the only thing I can think of is that there is
something fundamentally wrong with my architecture.  What I can't get, is
why the message has anything to do with inserting.  While there are several
actions in my product that Insert into the database, the error never occurs
at those points.  If I understood what was being inserted, perhaps I could
figure out a solution.

If anyone can help, I'd greatly appreciate it.  The original message is
included below this one.

TIA,

Joel


-- Original message --
Hi all,

I have searched the web, the newsgroups, and the archives, but all I can
find is one reference to someone having this trouble with Python and a bunch
of references that list this as predefined error #13 according to
http://www.sqlite.org/capi3.html.

What I can't find is any help in determining why a program might receive
this error.  The database is only 203KB and has 6 tables (maybe 1,000 rows
in all the tables combined) running off a 1GB CompactFlash card on a Windows
Mobile 5 device with 256MB of onboard RAM: 50MB of that is dedicated to
programs and the rest is storage.  The only app running on the device is the
one in question.  The error occurs at seemingly random times and forces the
user to end the application and start over.

At this point I don't know much else: the user is in the field and I will
have the device back late this afternoon for debugging.  I was hoping to
have a head start before I get the device, because as it is I have no idea
what the cause could be.
-- End Original message --





Re: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-13 Thread John Stanton
Regular flash memory has a limited number of write cycles before it 
fails.  Are you hitting this problem by using it for general processing?


Joel Cochran wrote:

First, to answer John's question: the CF Card is a 1GB card, and the only
thing on the card is the SQLite Database (currently 509KB), so I really
don't think it is a space problem.  Unless you mean something else by Flash
memory?

Michael,

Yes, I am doing Selects, and using DataAdapters to fill DataTables (which
are then used to populate instances of objects).

Interesting question about the CF Card.  I have 2 devices right now, one in
house for development and one in the field for testing, so I suppose it
could be the memory card's problem.  But if it was bad sector space,
wouldn't it happen on the same record every time?  As it is, the problem
occurs sporadically and unpredictably.  I used the field device here in the
office and opened over a hundred records with no error (hence my thinking I
had fixed the problem).  The field tester took the device to the field and
the first record threw the error.

And I'm still hung up on why it reports an insertion error when there is no
insertion occurring (unless it is some sort of internal mechanism used by
the database itself).

For Mike's last question, and I hope this doesn't sound too newbie-ish, but
do you run Select statements in a Transaction?  The short answer is no, they
are not.  And actually, now that I think about it, I don't think any of the
updates/inserts on the device itself are either.  The software that creates
the database runs on a PC, and I know all those are in transactions, but on
the device I do not think they are.  (This program was originally written
for SqlServerCE and was recently converted to SQLite).  Could that have
anything to do with it?  Could these database changes build up over time or
something like that?  I admit I am not very knowledgeable about
Transactions.  I will gladly go and put all the updates into transactions,
but would I do the same for Select statements?
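To illustrate the distinction being asked about: grouping writes in one explicit transaction avoids a journal sync per statement, while plain SELECTs already run inside an implicit read transaction of their own (a Python sqlite3 sketch; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")

# Group many inserts in one transaction: one commit instead of one per row.
with conn:  # commits on success, rolls back on an exception
    conn.executemany("INSERT INTO readings (value) VALUES (?)",
                     [(v,) for v in (1.5, 2.5, 3.5)])

# SELECTs need no explicit transaction; SQLite wraps each statement
# in an implicit read transaction automatically.
total = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

So wrapping updates and inserts in transactions is worthwhile for both speed and atomicity, but there is normally nothing to gain from wrapping isolated SELECTs.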

Thanks for the responses,

Joel

On 4/13/07, Michael Ruck <[EMAIL PROTECTED]> wrote:



Hi,

Is this the only device seeing this error or are *all* devices seeing this
error? Have you checked the CF card? Maybe it's just the card, which is
corrupt and you're hitting these bugs at points, where the file system is
hitting a bad sector.

Is this running in a transaction?

Mike

-Ursprüngliche Nachricht-
Von: Joel Cochran [mailto:[EMAIL PROTECTED]
Gesendet: Freitag, 13. April 2007 17:46
An: [EMAIL PROTECTED]
Betreff: [sqlite] Still getting "Insertion failed because database is
full."
errors

Hi folks,

I had sent this message out a couple of weeks ago, and I am still
searching
for a solution.  I looked at the application design and made a modest
change: I now create a single SQLiteConnection and use it from start to
finish of the application.  Previously I was creating multiple connections
(and disconnecting, disposing them and then running GC), but all that has
been replaced with the single connection approach.

At first I thought this had solved the problem, because all in house
testing
runs beautifully.  However, as soon as the device is sent to the field,
the
error starts again.  Unfortunately, it means that I have never been able
to
catch this in debug.  I did, however, change the error reporting a little
and got some more information.  The SQLiteException I am now getting
includes this information:

Insertion failed because the database is full database or disk is full

at System.Data.SQLite.SQLite3.Reset()
at System.Data.SQLite.SQLite3.Step()
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader.ctor()
at System.Data.SQLite.SQLiteCommand.ExecuteReader()
at System.Data.SQLite.SQLiteCommand.ExecuteDbDataReader()
at ... etc etc


I downloaded the C source and tried to read through it, but honestly I am
not a C programmer and didn't get very far.

Other than a possible bug, the only thing I can think of is that there is
something fundamentally wrong with my architecture.  What I can't get, is
why the message has anything to do with inserting.  While there are
several
actions in my product that Insert into the database, the error never
occurs
at those points.  If I understood what was being inserted, perhaps I could
figure out a solution.

If anyone can help, I'd greatly appreciate it.  The original message is
included below this one.

TIA,

Joel


-- Original message --
Hi all,

I have searched the web, the newsgroups, and the archives, but all I can
find is one reference to someone having this trouble with Python and a
bunch
of references that list this as predefined error #13 according to
http://www.sqlite.org/capi3.html.

What I can't find is any help in determining why a program might receive
this error.  The database is only 203KB and has 6 tables (maybe 1,000 rows
in all the tables combined) running off a 1GB CompactFlash 

Re: [sqlite] Supporting databases larger than 2GB on FAT filesystems?

2007-04-13 Thread John Stanton
It is limited by the maximum file size on your OS.  You can make a 
multiple file database by ATTACHing more than one database.
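A sketch of the ATTACH approach (shown here through Python's sqlite3 module; the database and table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")               # the "main" database
conn.execute("ATTACH DATABASE ':memory:' AS archive")

conn.execute("CREATE TABLE main.items (id INTEGER)")
conn.execute("CREATE TABLE archive.items (id INTEGER)")
conn.execute("INSERT INTO main.items VALUES (1)")
conn.execute("INSERT INTO archive.items VALUES (2)")

# One query can span tables in both files.
rows = conn.execute(
    "SELECT id FROM main.items UNION ALL SELECT id FROM archive.items"
).fetchall()
```

In a real deployment each ATTACH would name a separate file, so the total data can exceed any single file-size limit, though a single table still lives in one file.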


Ludvig Strigeus wrote:

Does Sqlite support databases larger than 2GB on FAT filesystems?

If not, how hard would it be to add so it uses additional files for the
pages that don't fit in the first file?

Thanks
Ludvig







Re: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-13 Thread John Stanton
Flash memory is intended for read-only access on a continuing basis, but it
has only a certain number of write cycles available for loading it with data.
For example you might use it for backups, for loading and distributing
software, or for playing music, but if you use it like a disk drive it will
hit its write limit after a certain number of writes and updates and fail.


Look to the detailed specs of the card you are using to get exact details.

By "general processing" I meant the way you use a disk drive and subject 
it to repeated writes and updates, not just reads.


Joel Cochran wrote:

John,

What do you mean by "general processing"?  The database is on the CF card,
the application accesses the database.  Other than what applications normally
do (select, update, insert, etc.), I'm not sure what else to tell you.

Or do you mean over the course of the lifetime of a CF card it can only be
used so much?  That might apply to this scenario, these cards have been
written over continuously for the last 6 months.

Joel

On 4/13/07, John Stanton <[EMAIL PROTECTED]> wrote:



Regular flash memory has a limited number of write cycles before it
fails.  Are you hitting this problem by using it for general processing?









Re: [sqlite] Supporting databases larger than 2GB on FAT filesystems?

2007-04-13 Thread John Stanton

Use a file system with 64 bit addressing.

Ludvig Strigeus wrote:

I would like to have a single table larger than 2GB, though.

/Ludvig

On 4/13/07, John Stanton <[EMAIL PROTECTED]> wrote:



It is limited by the maximum file size on your OS.  You can make a
multiple file database by ATTACHing more than one database.

Ludvig Strigeus wrote:
> Does Sqlite support databases larger than 2GB on FAT filesystems?
>
> If not, how hard would it be to add so it uses additional files for the
> pages that don't fit in the first file?
>
> Thanks
> Ludvig
>






Re: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-13 Thread John Stanton
You might find some joy in the baby disk drives such as those installed in
the original iPods.


Can you substitute RAM with a battery backup if the memory card is 
always in the device?


Joel Cochran wrote:

Thanks John and Dennis,
At least now I have something to look at.  I will look into the CF problem
next.

The database itself gets generated on a PC and then transferred to the CF
Card.  During testing and development, this could have been 20-30 times a
day, constantly erasing and recreating the existing DB.  We have also sent
large numbers of JPGs along with the database in the past (there are none
now, but have been before).  So these cards have been written over a lot,
perhaps that is the problem.

I think to test this, I will send the device back to the field with a brand
new card and see if the problem persists.  If the user can go several days
of normal use without the problem, then I'll be convinced that it is the
card.  Out of curiosity I just checked the CF cards we've been using: on the
development machine (which has NEVER shown the error) I have a SanDisk CF
Card.  On the Testing machine that is having the problem, there is a PNY
Technologies CF Card.  I wouldn't be surprised if the SanDisk card isn't
simply better than the PNY card, so there is something else to consider.

Once actual field use begins, the database will be replaced every week or
so, along with a fair number of images (like 100-300 a week).  The purpose
of the application would have every record in the database being updated and
some new ones created.  And it would be that way week in and week out,
essentially forever.  We may eventually port it over to very small Tablet
PCs, but right now it is all Windows Mobile 5.  This is one of the reasons I
went with SQLite, so that down the road I wouldn't have to reinvent the
database piece of the software for a different platform.

Given all this, I will definitely look into the link Dennis sent.  The
company is not going to be happy replacing CF cards all the time, so if that
can extend the wear then it will be welcome.

Thanks a lot,

Joel

On 4/13/07, Dennis Cote <[EMAIL PROTECTED]> wrote:



Joel Cochran wrote:
>
> Or do you mean over the course of the lifetime of a CF card it can
> only be
> used so much?  That might apply to this scenario, these cards have been
> written over continuously for the last 6 months.
>
Joel,

Yes, that is exactly the problem. You should look at using a flash file
system such as http://sourceware.org/jffs2/ that uses "wear leveling"
algorithms to spread the writes over all the flash devices blocks if you
are writing often.

HTH
Dennis Cote





Re: [sqlite] Non-amalgamized version

2007-04-15 Thread John Stanton
Just pick up the regular distribution.  The link is 
http://www.sqlite.org/sqlite-3.3.15.tar.gz


Everything you need to build Sqlite on a variety of platforms is there.
Well-commented open source makes it simple to extend or modify.


Ludvig Strigeus wrote:

Hi,

I want a non-amalgamized version. I.e. I want to have the file structure
intact, and not everything in the same file.

Thanks,
Ludvig

On 4/15/07, Jens Miltner <[EMAIL PROTECTED]> wrote:




Am 15.4.07 um 14:00 schrieb Ludvig Strigeus:

> Hi,
>
> Is it still possible to find a non-amalgamized zip file suitable
> for use to
> build sqlite on windows? (I.e. I need the "preprocessed" files
> where the
> unix tools have already been run). I would like the non-amalgamized
> files
> because I would like to modify it more easily.
>
> Can I just split the amalgamized .c file whenever I find the file
> separator
> comments (*** Begin file  ***) etc, and will that give the same
> result
> as the package that used to exist on the webpage?

I'm not sure I understand your question correctly, but what's wrong
with < http://sqlite.org/sqlite-source-3_3_15.zip>?
You can find this link on the sqlite download page ...

HTH,








Re: [sqlite] Still getting "Insertion failed because database is full." errors

2007-04-17 Thread John Stanton
Perhaps you need to design an experiment to detect the problem.  Part of 
it might be to log activity.  Relying on a debugger rather than logical 
analysis can waste a lot of time.


Joel Cochran wrote:

I've had him sit beside my in my office and attempt to recreate it, both
using his device and mine, but it never happens.  Actually, I did get it to
happen once on his machine, but I was not connected to my PC, so I connected
and tried to recreate it through DEBUG but could not.  The last time it
happened in the field, I had him stop using the device and bring it to me so
that I could see the Stack Trace (which I sent to the list).  With neither
his device nor mine can I recreate the problem in DEBUG.  It is very
frustrating.

Thanks,

Joel


On 4/17/07, Jonas Sandman <[EMAIL PROTECTED]> wrote:



What is the guy out in the field doing that you are not? Are you using his
device for the testing?
Since it takes minutes for him to encounter the error it can't be that
hard
to recreate. Follow him around for an hour or so and see how he uses the
program. It could easily be something he's doing that you aren't...

On 4/17/07, Joel Cochran <[EMAIL PROTECTED]> wrote:
>
> The saga continues...
>
> I was very excited by the idea that there was something wrong with the CF
> Card.  The theory fits all the facts: it explains why the original database
> threw unspecified errors, it explains why now SQLite is throwing errors, it
> explains why I can't reproduce the problem in house or on my machine.  It
> just seemed to explain everything, so yesterday I went out and bought a
> brand-spankin' new SanDisk CF card.  I loaded it up with the database,
> installed it on my tester's machine, and this morning it went back out to
> the field for testing.
>
> Within minutes, he encountered the same error.
>
> Now I just don't believe the problem is with the card, so I feel that I am
> right back at square one.  I'm really at my wits' end and don't know what to
> do next.  I am going to go ahead and install the database on the device
> memory instead of removable media, just to test it out, but I have no
> faith
> that it will change anything.  When that fails, I will send the tester out
> with another device entirely, but again I expect the same results.
>
> I'm convinced now that the problem is with the application architecture, but
> I have no idea what to look at anymore.  I've stared and fiddled with this
> code so much that I'm ready to throw in the towel.  But since I'd like to
> keep my job that isn't an option.  If I had hair, I'd pull it out.
>
> Any help at all would be appreciated.
>
> --
> Joel Cochran
>
>
>
> On 4/13/07, Michael Ruck <[EMAIL PROTECTED]> wrote:
> >
> > Unless things have changed recently, the following should still be
valid
> > for
> >
> > Windows Mobile/Windows CE devices: Usually these devices do not power
> off,
> > but
> > stay in a standby state where the memory is always powered. Check if
> > that's
> > the
> > case with your system and move as much as possible into RAM or a RAM
> disk,
> > if that
> > feature is provided by the windows mobile kernel built for your
device.
> >
> > If that's not possible, I'd suggest replacing CF cards with micro
drives
> -
> > these
> > are regular hard drives in a CF card format. I'm not up to date on
> storage
> > space,
> > but should be sufficient for your needs.
> >
> > To test the cards I'd put them in a card reader format it and fill it
> > completely
> > up with zeros. When a flash card erases a byte, it sets all bits to
ones
> > and
> > upon
> > write clears those, which need to be zero. So to test all bits you
> really
> > need to
> > zero out the entire card. This will also give the controller in the
card
> a
> > chance
> > to remap bad sectors with spares. Finally you determine the file size
of
> > the
> > card,
> > when you receive the first write error. This is (approximately) the
> number
> > of bytes
> > the card can store (at that point in time) and falling.
> >
> > It seems some cards even return "read errors", when they hit a
defective
> > sector
> > upon read. Maybe the actual error code just gets lost/mangled on the
way
> > up
> > and the
> > actual error is just a simple read error ;) I've seen reports about
this
> > with some
> > digital cameras, which would not even let people view the pictures
taken
> a
> > minute
> > ago.
> >
> > Mike
> >
> > -Ursprüngliche Nachricht-
> > Von: John Stanton [mailto:[EMAIL PROTECTED]
> > Gesendet: Freitag, 13. 

Re: [sqlite] linux and java

2007-04-18 Thread John Stanton

Try Ubuntu.  It is gaining raving fans.

Alex McFerron wrote:

I need to get a laptop up and running with Linux, Java, and SQLite.

Any suggestions on the fastest way, the best Linux distribution, or the
code to connect Java to SQLite?  I think I can find the code to connect
Java to SQLite on the Java Sun forums.

well, any suggestions before I just pick a linux distribution at
random and start trying to get this running would be greatly
appreciated.

cheers,
alex




Re: [sqlite] DB design questions

2007-04-20 Thread John Stanton
We do something like that by storing the data in TEXT format and using 
RCS to handle versioning by its reverse delta method.  It works well for 
storing notes and may be useful in your application.


A function can return the appropriate version.

Michael Ruck wrote:

Hello,

I'm currently modelling and designing a new database according to the following
specifications. The general idea is the ability to store arbitrary objects,
which have attributes of various kinds. The attributes themselves may be
multivalued. The objects being stored need to be versioned, so that there's
a way to go back to previous versions of an object. The objects represent
metadata of media files and the data itself comes from various automated
sources and manual editing by a user.

My current idea was the following:

- CREATE TABLE objects (id TEXT, version TEXT)
- CREATE TABLE attributes (object_id TEXT, version TEXT, name TEXT, order
INT, type INT, value TEXT)

Is there anyone who has experience with this kind of design, do you have
better ideas on modelling this kind of data?

Thanks,
Mike
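Mike's proposed schema can be exercised directly in SQLite (a sketch; note that "order" is a reserved word in SQL, so the column is renamed "ord" here, and the composite primary key is my assumption, not part of the original proposal):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE objects (id TEXT, version TEXT,
                          PRIMARY KEY (id, version));
    -- "order" is an SQL keyword, so the column is called "ord" here
    CREATE TABLE attributes (object_id TEXT, version TEXT, name TEXT,
                             ord INT, type INT, value TEXT);
""")

# Two versions of the same object; each version carries a full attribute set,
# so going back to a previous version is just a different WHERE clause.
conn.execute("INSERT INTO objects VALUES ('clip1', 'v1')")
conn.execute("INSERT INTO attributes VALUES ('clip1', 'v1', 'title', 0, 1, 'Draft')")
conn.execute("INSERT INTO objects VALUES ('clip1', 'v2')")
conn.execute("INSERT INTO attributes VALUES ('clip1', 'v2', 'title', 0, 1, 'Final')")

def attributes_of(obj_id, version):
    return conn.execute(
        "SELECT name, value FROM attributes "
        "WHERE object_id = ? AND version = ? ORDER BY ord",
        (obj_id, version)).fetchall()
```

Storing a full attribute set per version keeps reads simple; the reverse-delta approach mentioned above trades that simplicity for less storage per version.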





Re: [sqlite] Data structure

2007-04-20 Thread John Stanton

Lloyd,
If you want some code examples contact me and I shall send you some.
[EMAIL PROTECTED]

Lloyd wrote:

Thank you John Stanton. This has opened new doors for me, and I think it
would be helpful for others on the list too.

Thanks and Regards
Lloyd

On Thu, 2007-04-12 at 12:34 -0500, John Stanton wrote:

We use a very simple data retrieval method for smallish datasets.  The 
data is just stored in memory or as a memory mapped file and a 
sequential search used.  It sounds crude but when you use a fast search 
algorithm like Boyer-Moore it outperforms index methods up to a 
surprisingly large number of records.  As you can imagine the code 
footprint is minuscule and if you add regular expression logic you can 
realize very intricate search patterns.


We use the method in conjunction with a database to achieve an enormous 
speed increase on "LIKE" type searches.  Grep a few files to get a feel 
for the performance.


Another method which works well for memory resident storage is to 
implement self balancing AVL trees.  The code is simple and the 
performance lightning fast.  With a little ingenuity you can use disk 
storage.  Mini Sql (MSql) is a good example of how this can be effective.


As Einstein said - "Make everything as simple as possible, but not simpler". 
   Applying Occam's Razor can turn bloated solutions into leaner, more 
effective ones.  Typical solutions come in two sizes, just like Army 
boots - too big and too small.
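The memory-mapped sequential scan described above can be sketched in a few lines of Python (file name and sample data are invented; `bytes.find` plays the role of the fast substring search the post attributes to Boyer-Moore):

```python
import mmap
import os
import tempfile

def find_all(path, pattern):
    """Return the byte offset of every occurrence of pattern in the file."""
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        hits, pos = [], mm.find(pattern)
        while pos != -1:
            hits.append(pos)
            pos = mm.find(pattern, pos + 1)   # resume just past the last hit
        return hits

# Tiny demonstration corpus (invented).
path = os.path.join(tempfile.mkdtemp(), "corpus.bin")
with open(path, "wb") as f:
    f.write(b"abc needle xyz needle")
offsets = find_all(path, b"needle")
```

The operating system's page cache does the buffering, so the scan touches only the pages it reads, much like grepping a file.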







__
Scanned and protected by Email scanner




Re: [sqlite] SQL query, finding out which row a result is in

2007-04-22 Thread John Stanton

Why not use a "where memberid = '4567373'" clause?

Gilles Roy wrote:
Given an arbitrary statement, I need to find out which row a specific 
result is in, as efficiently as possible. The arbitrary statement can 
order the results any way it wants.


For example, imagine statement X which returns a column of memberid.  
They can be ordered by first name, last name, date, etc. If I want to 
find out where (which row) memberid 4567373 stands in the result of 
statement X, what is the most efficient way?


Currently, the best way I can think of is:

"SELECT memberid from (X)" and bring all of the results into my 
application and then iterate through all of the results in a for loop, 
looking for the memberid 4567373. It seems inefficient.


Is there a better way? It would be nice if there were some kind of 
statement that could return where a specific memberid was in the results 
of another statement.



Thanks,
Gilles






Re: [sqlite] SQL query, finding out which row a result is in

2007-04-23 Thread John Stanton
You don't have to read into a memory array.  How about just running 
through your selection with an sqlite3_step and counting the rows?
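That step-and-count idea, mirrored here with Python's stdlib sqlite3 module rather than the C API (schema, data, and query are invented for illustration). The cursor is consumed row by row, so nothing beyond the current row has to be held in memory, and the loop stops as soon as the id is found:

```python
import sqlite3

def row_position(conn, query, target_id):
    """Return the 1-based position of target_id in the query's first column."""
    for pos, (memberid,) in enumerate(conn.execute(query), start=1):
        if memberid == target_id:
            return pos        # found it; stop stepping early
    return None               # not in the result set

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members(memberid INTEGER, name TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [(7, "a"), (4567373, "b"), (9, "c")])
pos = row_position(conn, "SELECT memberid FROM members ORDER BY name", 4567373)
```

The same pattern in C is a loop over sqlite3_step() with a counter, which avoids materializing the whole result set in the application.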


Gilles Roy wrote:

On Sun, Apr 22, 2007 at 05:33:43PM -0500, P Kishor wrote:


On 4/22/07, Gilles Roy <[EMAIL PROTECTED]> wrote:


Given an arbitrary statement, I need to find out which row a specific
result is in, as efficiently as possible. The arbitrary statement can
order the results any way it wants.




what do you mean by "which row"? Do you want to know the position of
the required row within your result set, sort of like




i = 0
foreach row
if currentrow's memberid == 4567373
  print i
  get out of the loop
else
  i++



That is what I want to do. I want to know where the memberid is in the 
list (imagine the list was a waiting list or something). Is there not a 
way to just get the row number back? It seems inefficient to have to 
allocate all of the memory to hold all of the results and then iterate 
through them.


Thanks,
Gilles





[sqlite] TRANSACTION

2007-04-23 Thread John Stanton
I am not sure how to proceed with handling multiple SQL statements. 
Perhaps someone has some experiences they would be kind enough to share.


I want to store multi-statement SQL to implement an entire transaction 
in the form -

BEGIN
 statement
 statement
 ...
COMMIT

I can see that sqlite3_prepare has the capability of stepping through a 
multi statement string but it looks like each statement becomes a 
separate vdbe object so if I had five statements I would have five 
sqlite3_stmt ptrs and would need to step through them in sequence.


Does anyone know if I see it correctly?  Is there some way I have not 
seen to compile them all into one vdbe object?





Re: [sqlite] Re: TRANSACTION

2007-04-23 Thread John Stanton

Thanks Igor, much obliged.  That fits my application quite elegantly.

Igor Tandetnik wrote:

John Stanton <[EMAIL PROTECTED]> wrote:


I want to store multi-statement SQL to implement an entire transaction
in the form -
BEGIN
 statement
 statement
 ...
COMMIT

I can see that sqlite3_prepare has the capability of stepping through
a multi statement string but it looks like each statement becomes a
separate vdbe object so if I had five statements I would have five
sqlite3_stmt ptrs and would need to step through them in sequence.

Does anyone know if I see it correctly?



Yes you do. Just run an outer loop until sqlite3_prepare says there are 
no more queries left.



 Is there some way I have not
seen to compile them all into one vdbe object?



No.

Igor Tandetnik




Re: [sqlite] TRANSACTION

2007-04-23 Thread John Stanton
Thanks for the input and confirming my analysis.  I am implementing a 
remote procedure call capability and keep a library of SQL transactions 
to be executed by a remote client.


Dennis Cote wrote:

John Stanton wrote:

I am not sure how to proceed with handling multiple SQL statements. 
Perhaps someone has some experiences they would be kind enough to share.


I want to store multi-statement SQL to implement an entire transaction 
in the form -

BEGIN
 statement
 statement
 ...
COMMIT

I can see that sqlite3_prepare has the capability of stepping through 
a multi statement string but it looks like each statement becomes a 
separate vdbe object so if I had five statements I would have five 
sqlite3_stmt ptrs and would need to step through them in sequence.


Does anyone know if I see it correctly?  Is there some way I have not 
seen to compile them all into one vdbe object?




John,

You see things correctly. You can't put multiple statements in one VDBE 
object (unless you use a trigger).


But I don't think you need to store multiple prepared statements unless 
you want to do so for speed.


Simply store the string that contains all the statements, and scan 
through them using sqlite3_prepare in a loop. You can include the begin 
and commit statements as well. You are done after executing the last 
statement, for which sqlite3_prepare will return a null tail pointer. 
You will only ever have one statement prepared to execute at any point.


   char* sql_block = ...;               /* the stored multi-statement SQL */
   sqlite3_stmt* s;
   const char* sql = sql_block;
   do {
       sqlite3_prepare_v2(db, sql, -1, &s, &sql);
       /* call sqlite3_step(s) here as required */
       sqlite3_finalize(s);
   } while (sql != NULL && *sql != '\0');

A trigger block can contain multiple SQL statements, so you might be 
able to use a trigger to do what you want if you execute a trigger in a 
transaction.


HTH
Dennis Cote
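For comparison, Python's stdlib sqlite3 module wraps the same prepare-in-a-loop scan in a single call, executescript(), which runs a whole BEGIN ... COMMIT block (the statements here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One string, several statements, executed in sequence just like the
# sqlite3_prepare loop sketched above.
conn.executescript("""
    BEGIN;
    CREATE TABLE t(x INTEGER);
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    COMMIT;
""")
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```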





Re: [sqlite] Odd performance issue under Windows - correction

2007-04-27 Thread John Stanton

John Elrick wrote:

John Elrick wrote:


John Elrick wrote:


Griggs, Donald wrote:


John Elrick wrote:
 

"what the heck is happening that is creating a better than order of 
magnitude difference in execution time on five out of seven Windows 
machines?".




John,

If the database is opened and closed just once for your testing, I 
think
you can find out if buffer flushing is involved by *temporarily* 
adding PRAGMA synchronous=OFF;


At the beginning of your test.
  


Thanks Donald.  The results are now identical.  So, it has to do with 
buffer flushing differences between the machines.  Now I have to 
figure out what the differences are given that the XP laptop is 
configured as shipped and is showing the performance improvement.




Corrected text:

"Thanks Donald.  The results are now in line."

The real time with the pragma off is 1.78 seconds.  The real time on the 
"faster" machine is 8.4 seconds.  When I set the synchronous pragma to 
off on the "faster" machine, the time drops to 1.64.



John

Do your various machines use the same hard disk controller and similar 
technology disks?





Re: [sqlite] Odd performance issue under Windows - correction

2007-04-27 Thread John Stanton

John Elrick wrote:

John Stanton wrote:

The real time with the pragma off is 1.78 seconds.  The real time on 
the "faster" machine is 8.4 seconds.  When I set the synchronous 
pragma to off on the "faster" machine, the time drops to 1.64.



John

Do your various machines use the same hard disk controller and similar 
technology disks?




No.  Some of the machines are actually in the hands of other team 
members and the client.  My local machines range from my son's EMachines 
desktop to a brand new HP laptop running Vista.


So we have eight machines with widely varying hardware configurations, 
yet five of them are within a second or so of being 10 times slower than 
the average of the other three for this particular operation.



John

I would look at the disk controller/disk drive hardware and the software 
driver to see if they are reporting correctly to the OS.  Some of your 
numbers are too fast for regular disk technology and suggest that there 
are either hardware defects or intentional shortcuts to always perform 
some form of lazy write.


Even the fastest 15,000 RPM disks will take a minimum of 8 millisecs for 
a secure disk write, and it is that disk latency which is the limiting 
factor in secure buffer flushing.
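The rotational arithmetic behind that figure can be checked quickly (the seek-time number is an illustrative assumption, not from the thread):

```python
# One revolution at 15,000 RPM:
rev_ms = 60_000 / 15_000              # 4.0 ms per rotation
avg_rotational_latency = rev_ms / 2   # 2.0 ms average wait for the target sector
# A synchronous commit needs at least two ordered writes (journal, then
# database page), each paying rotational latency plus seek time. With an
# assumed ~2 ms seek, that lands in the high single-digit milliseconds.
commit_floor_ms = 2 * (avg_rotational_latency + 2.0)
```

Anything much faster than this on rotating media suggests the drive or controller is acknowledging writes from a volatile cache.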





Re: [sqlite] Odd performance issue under Windows - correction

2007-04-27 Thread John Stanton

John Elrick wrote:

John Stanton wrote:


John Elrick wrote:


John Stanton wrote:



I would look at the disk controller/disk drive hardware and the 
software driver to see if they are reporting correctly to the OS.  
Some of your numbers are too fast for regular disk technology and 
suggest that there are either hardware defects or intentional 
shortcuts to always perform some form of lazy write.


Even the fastest 15,000 RPM disks will take a minimum of 8 millisecs 
for a secure disk write, and it is that disk latency which is the 
limiting factor in secure buffer flushing.



One performance timed example from an older machine with our application 
was 128 INSERTS in 15.069 seconds.  That would be about 118 milliseconds per 
write.  The faster machine was showing about 2.0 seconds for the same 
operation, or about 16 milliseconds per write.


The Ruby script I posted showed 8 seconds to save 1000 rows, or just at 
the edge of what you show as a minimum possibility, however it showed 
115 seconds or around 115 milliseconds per write on the "slower" machine.


While I agree that 8 milliseconds is just too darn good to be true, I'm 
a little hesitant to call 110 milliseconds a reasonable number for a 
secure disk write.  But then again, I could be wrong.  I'm no expert in 
low-level hard drive programming, which is why I have asked for ideas 
here.


The 8 ms figure is for the fastest disks available, assuming infinite speed 
of the rest of the machine.  The 100 ms number would not be unexpected 
from, say, 5,400 RPM disks and an OS like Windows which has many 
distractions to divert it from its purpose.


I should be very suspicious of the integrity of the machine which 
performed in 8 ms.





Re: [sqlite] sqlite and borland c++ builder

2007-04-29 Thread John Stanton

Why not use gcc to compile your library, or use a precompiled DLL?

Jonathan Kahn wrote:

Hi Ulrik,
  Thank you for responding.  I'll try anything!  I am determined to solve
the frustration that all this has brought me.  


  If I built SQLite with a C compiler what would be the result?  What would
I be able to work with from within c++?  Won't compiling leave me with an
executable?  I am fairly new to dealing with different compilers and things,
so please forgive my ignorance. 


Thanks a lot,
- Jon


-Original Message-
From: Ulrik Petersen [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 29, 2007 2:29 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite and borland c++ builder

Hi Jon,

is it not an option to build SQLite with a C compiler, then call it from 
within C++?



Regards,

Ulrik Petersen


Jonathan Kahn wrote:


Even when I try to build a new dll I get errors with attach.c and it says
cannot convert 'void *' to 'Db *', no matter what route I take I always


hit


a bump.  I'm just trying anything at this point

- Jon

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 29, 2007 1:59 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite and borland c++ builder 


I wrote too quickly - sqlite3.h correctly uses __cplusplus for extern "C".

But the amalgamated sqlite3.c cannot be compiled by a C++ compiler
for the reasons described below.

--- Joe Wilson <[EMAIL PROTECTED]> wrote:
 


 When I try to use the header I get errors

[C++ Error] sqlite3.h(1778): E2232 Constant member
'sqlite3_index_info::nConstraint' in class without constructors
 


It appears it is trying to compile the sqlite header file as if it were
   


C++.
 


Lemme guess - you're using the amalgamated sqlite3.c from 3.3.17, right?

from the generated sqlite3.c:

/*
** Make sure we can call this stuff from C++.
*/
#if 0
extern "C" {
#endif

See the #if 0? That's the problem. It should be:

#if __cplusplus

SQLite 3.3.17 has a bug in sqlite3.c generation.
To work around this issue, do this:

extern "C" {
#include "sqlite3.h"
}


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



   









Re: AW: [sqlite] sqlite and borland c++ builder

2007-04-29 Thread John Stanton
If that is his only problem, all he has to do is add some basic definitions 
for his compiler specifying the Sqlite3 API components he is using.


Michael Ruck wrote:

If I understand him correctly, he's having issues including the original
sqlite3.h in his own sources too... He tried to build sqlite again to solve
that problem, but it remains there too.

I would recommend patching up sqlite3.h to conform to BC++ requirements -
changing those structs to something the compiler understands.

Mike 


-Ursprüngliche Nachricht-
Von: John Stanton [mailto:[EMAIL PROTECTED] 
Gesendet: Sonntag, 29. April 2007 18:31

An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] sqlite and borland c++ builder

Why not use gcc to compile your library, or use a precompiled DLL?


Re: [sqlite] Makefile

2007-05-02 Thread John Stanton

You can run strip on the file.

Ken wrote:
Is there a way to disable the -g flag for the library? 
 
 I've found that the version compiled without the -g flags is about 3 times smaller (right around the 500k mark) but the default compile is about 1.7 meg!
 
 Is there a way to tell the Makefile to build a 32-bit version vs. a 64-bit one? If not, this would be really nice.
 
 Can the Makefile that is provided build a libsqlite3.a and libsqlite3.so from the amalgamated sqlite3.c?
 
 Thanks

 Ken
 
 
 
 









Re: [sqlite] Makefile

2007-05-02 Thread John Stanton
Just removing the -g will not get rid of all the debugging information, 
so strip will still give you a smaller executable file.


Danilo wrote:

...or you can find and delete " -g" from the Makefile!

John Stanton ha scritto:


You can run strip on the file.

Ken wrote:

Is there a way to disable the -g flag for the library?  
I've found that the version compiled without the -g flags is about 3

times smaller (right around the 500k mark) but the default compile is
about 1.7 meg!

Is there a way to tell the Makefile to build a 32-bit version vs. a 64-bit
one? If not, this would be really nice.

Can the Makefile that is provided build a libsqlite3.a and libsqlite3.so
from the amalgamated sqlite3.c ???

Thanks
Ken







Re: [sqlite] stmt question

2007-05-02 Thread John Stanton

Variables remain bound until you rebind them or call sqlite3_clear_bindings(); sqlite3_reset() readies the statement for another execution without clearing the bindings.

Jonathan Kahn wrote:

Hi,

  I have a couple questions about using prepared statements with sqlite.

 


  If I prepare a statement, can I bind variables as my values and then set the
variables in a loop and execute?  Or, in my loop, would I bind my values and
step, so each bind gets executed until step returns SQLITE_DONE, and then
reset?  I guess I am just unclear on how to execute my prepared statement and
set my values in a loop.

 


Is there a better way to do what I want?  Am I completely off base?

 


Any info is much appreciated.  This is my first time working with the
sqlite3 api and sqlite so please forgive any ignorance.

 


Thanks a lot,

- Jon
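The bind, step, reset cycle the question describes, mirrored with Python's stdlib sqlite3 module rather than the C API (table and values are invented). Each row passed to executemany() corresponds to one bind/step/reset pass over the same prepared statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")

# Equivalent C-API flow: prepare once; then for each row, bind the
# value, step until SQLITE_DONE, and reset before the next binding.
rows = [(1,), (2,), (3,)]
conn.executemany("INSERT INTO t(x) VALUES (?)", rows)
total = conn.execute("SELECT SUM(x) FROM t").fetchone()[0]
```

Preparing once and rebinding in the loop avoids re-parsing the SQL for every row.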








Re: [sqlite] May I ask why the source distribution mechanism was changed starting with 3.3.14?

2007-05-04 Thread John Stanton
I did a comparison some time back between gcc and IBM's Xlc.  The IBM 
compiler was a bit slower to compile but the fully optimized executables 
were quite different in performance.  Xlc's executable ran 40% faster. 
A look at the generated code showed that the IBM optimizer was carefully 
matched to the machine and would eliminate big blocks of code by taking 
advantage of its large register file and particular instructions.


I did a less rigorous test with gcc and lcc on a PC.  Lcc compiled much 
faster than gcc and ran much the same on well written code but gcc with 
better optimization ran much better on indifferent code.


My conclusion was that on Intel machines gcc was the best overall 
compiler choice.  The combination of compile speed, executable quality 
and compatibility with applications is hard to beat.


[EMAIL PROTECTED] wrote:

"C.Peachment" <[EMAIL PROTECTED]> wrote:


With the suggestion that the problem was a compiler bug
in PellesC for Windows, I posted a message on their forum.
One response suggested a couple of configuration changes
and also said to wait a while because it took a long time to
compile.




Fascinating.

GCC (version 4.1.0) takes less than 25 seconds to compile and
link a command-line shell using the amalgamation with full 
optimization (-O3) - which as far as I know is as slow as 
you can make the compiler run.  With all optimization turned 
off, the whole thing builds in 3.430 seconds (wall-clock time).


One of the reasons proffered for not using the amalgamation
is that compiles go faster if you do not have to recompile
the whole thing.  I read that and thought "4 seconds is
too slow?"  But perhaps GCC is just a really fast compiler
and the person who said that is just using a much slower
compiler.

I'm curious to see a comparison of the size and speed of
the executables that result from GCC and its more sluggish
competitors.  Does anybody have any data?

--
D. Richard Hipp <[EMAIL PROTECTED]>









Re: [sqlite] I'm Starving for New User Information

2007-05-08 Thread John Stanton

km4hr wrote:

Is there a sqlite introduction for programmers wanting to use the sqlite C
API? The info on the web site is pretty sparse. There seems to be plenty of
info regarding the use of sqlite3 all over the web.  But not much on getting
set up to write programs that use sqlite. 


I have some specific questions below. I am a C programmer and I want to use
sqlite in my programs.

First question is about installation. I went to the sqlite download page and
got a ".so" file. This is obviously a shared library but I can't find any
installation document that confirms this. What do I do with the ".so" file?
I guess I need to put it somewhere, but where? I'm using Fedora Core4 at the
moment but what if I also want to use sqlite on RedHat 4? I'd eventually
like to use sqlite on HPUX 11.0 but I'll be happy for the moment to get
myself oriented on Linux.

Are there any link instructions? How do I link a C program on Fedora Core4
Linux for example?
"gcc  myprog.c  -o myprog  sqlite.so"?  Do I need to set a library path
environment variable to point to where the ".so" file is located? Or what?

Is a C program that uses sqlite statically linked? That is, is the final
executable a self contained program that can be moved from one computer to
similar computer? Or does it require supporting sqlite files/libraries to be
installed on each computer where the program is run? Can I just copy a
program that uses the C API to a similar computer and run it? I assume I
would at least have to copy some sqlite data file as well. No?

Once I've created a C program that uses sqlite can I administer its database
with sqlite3 or do I have to write utility programs to create a database,
tables, view table contents, etc?

I'd really like to understand how sqlite works. Not internally, but things
like where is the data stored? What does it mean to "install" sqlite? If I
run a C program in separate directories where does the data get stored? In a
common file somewhere? Or does each program have its own sqlite data file?  


On the sqlite web site there's a brief 5 minute getting started explanation.
It explains how to get going using sqlite3.  But where do I go after that
for more detailed understanding? 


How do you backup sqlite data? Just copy a data file? Or do you use sqlite3
to dump a file of sql statements?

The architecture is unclear to me. I'm hungry to learn more. Is there a
summary document somewhere? I'm finding plenty of tutorials on how to use
sqlite3, the command line interface. But is there anything that explains the
basic architecture? Installation? administration?

The sqlite C API documentation seems pretty clear. I can even find helpful
documents on the web. I just can't find anything describing how to install
and administer sqlite or basically how it works.

thanks





Just load the source, then run configure, make, and make install.




Re: [sqlite] Limiting the size of a database?

2007-05-08 Thread John Stanton

Try the traditional way and use disk partitions/filesystems.

Joe Wilson wrote:

--- Ron Stevens <[EMAIL PROTECTED]> wrote:


Is it possible to tell SQLite to limit the size that a database may
grow to? It would be useful for storage constrained applications.



This is a tricky problem.

What would you have the database do if an insert failed upon reaching 
the limit?


What about the space for the journal files?

You could change the I/O subsystem's seek and write calls to have them
fail upon exceeding a threshold, but I'm not certain if that's useful.
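As an aside, later SQLite versions can cap the file size directly with PRAGMA max_page_count; a sketch (numbers invented, and note this pragma may postdate the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA max_page_count = 10")   # cap the database at 10 pages
conn.execute("CREATE TABLE t(b BLOB)")
hit_limit = False
try:
    for _ in range(100):                     # try to blow well past the cap
        conn.execute("INSERT IN" "TO t VALUES (zeroblob(4000))")
except sqlite3.OperationalError:             # "database or disk is full"
    hit_limit = True
```

This surfaces the tricky case Joe raises: once the cap is reached, inserts simply fail with SQLITE_FULL and the application must decide what to evict.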




Re: [sqlite] A suggestion

2007-05-09 Thread John Stanton
That program does have the capability, but may not be implemented that 
way on Windows.  Why not make the change yourself?


A.J.Millan wrote:

As a suggestion, and even at the risk of abusing Mr Hipp's patience: would
it be possible to include in the command-line program (sqlite3.exe) the
ability to edit and repeat at least the last five or six commands, as in
Linux? That is to say, with up-arrow and down-arrow.  I believe it would be
very helpful.

A.J.Millan
ZATOR Systems





Re: [sqlite] A suggestion

2007-05-09 Thread John Stanton

Rich Shepard wrote:

On Wed, 9 May 2007, John Stanton wrote:

That program does have the capability, but may not be implemented that 
way on Windows.  Why not make the change yourself?


A.J.Millan wrote:

As a suggestion, and even at the risk of abusing Mr Hipp's patience:
would it be possible to include in the command-line program (sqlite3.exe) the
ability to edit and repeat at least the last five or six commands, as in
Linux? That is to say, with up-arrow and down-arrow.  I believe it would
be very helpful.



  That, I believe, is a function of the shell, not the application using 
it.

For example, the command history available in bash is not present -- at
least, was not present -- in sh. Command history, IIRC, was a feature of 
the

csh, and bash (the Bourne Again Shell) took the best features of sh, csh,
and the TENEX-style tcsh.

  Years ago, about a decade, when I had PC DOS 7.0 installed along with
linux, I discovered that the DOS shell allowed tab completions. I don't
recall if it had the history recall, too.

Rich


It would be a function of "readline".  I have not looked closely, but I
would imagine that implementing or emulating readline on Windows would
provide that capability.




Re: [sqlite] My HPUX Notes

2007-05-10 Thread John Stanton

Markus Hoenicka wrote:

Quoting km4hr <[EMAIL PROTECTED]>:


I just happened to notice that I may not be executing the sqlite
installation process (configure/make/make install) in a full "bash"
environment. My usual environment is "ksh". Typing in the command
"/bin/OpenSource/bin/bash" I get a bash prompt. But apparently that 
doesn't
put me in a full bash environment. I just noticed that unless I 
specify the

full path to the GNU make command (/opt/OpenSource/bin/make) then the ksh
version (/usr/bin/make) is executed. I wonder what I have to do to get 
into

a true bash environment? Must be some environment variable that has to be
changed. Oh well, one mystery begets another!



There is no such thing as a full "bash" environment. bash is just  
another shell, just like ksh. You may experience some differences  
because the shells may read different startup files, so e.g.  
environment variables may be set differently.


Some systems like FreeBSD call the GNU make binary "gmake" to  
distinguish it from the system's own make binary. You could put a  
symlink called "gmake" into /usr/bin (or /usr/local/bin, depending on  
your local policy) which points to /opt/OpenSource/bin/make. You can  
then run GNU make by using "gmake" instead of "make" in the build  
process from any shell you prefer.


regards,
Markus

Markus has good advice.  We make legacy Unix machines compatible with 
open source software by installing gcc and key GNU utilities like make, 
bison etc in /usr/local/bin and put that in the path.  Where the names 
clash, like with "make" we name the GNU one gmake.


You then have the ./configure, gmake and gmake install sequence working.




Re: [sqlite] Odd results return by SELECT query WHERE word = "word"

2007-05-11 Thread John Stanton

'word' (with single quotes) is an SQL string literal; "word" (with double quotes) is an identifier, so here it names the column and the test becomes word = word, which is true for every row.

Matteo Vescovi wrote:

Hi,
I am getting weird results when executing a query that
has this WHERE clause: WHERE word = "word".

The query works fine if I use WHERE word = 'word'.

The following illustrates the problem:

[EMAIL PROTECTED]:~$ sqlite -version
2.8.17
[EMAIL PROTECTED]:~$ cat populate.sql
CREATE TABLE _1_gram (word TEXT, count INTEGER,
UNIQUE(word) );
INSERT INTO _1_gram VALUES("foo", 13);
INSERT INTO _1_gram VALUES("bar", 16);
INSERT INTO _1_gram VALUES("word", 36);
INSERT INTO _1_gram VALUES("foobar", 336);
[EMAIL PROTECTED]:~$ sqlite test.db < populate.sql
[EMAIL PROTECTED]:~$ sqlite test.db
SQLite version 2.8.17
Enter ".help" for instructions
sqlite> SELECT * FROM _1_gram WHERE word = 'word';
word|36
sqlite> SELECT * FROM _1_gram WHERE word = "word";
foo|13
bar|16
word|36
foobar|336
sqlite> .quit
[EMAIL PROTECTED]:~$

Am I missing something here (I haven't used SQL in a
while...), or is this a bug?

Cheers,
- Matteo Vescovi
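The identifier-versus-literal behavior can be reproduced with Python's stdlib sqlite3 module (table contents invented). The double-quoted "word" resolves to the column, so the comparison is word = word and every row matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g(word TEXT)")
conn.executemany("INSERT INTO g VALUES (?)", [("foo",), ("bar",), ("word",)])

# Single quotes: a string literal; matches only the row containing 'word'.
literal = conn.execute("SELECT COUNT(*) FROM g WHERE word = 'word'").fetchone()[0]
# Double quotes: an identifier; word = word matches every row.
ident = conn.execute('SELECT COUNT(*) FROM g WHERE word = "word"').fetchone()[0]
```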




___ 
All New Yahoo! Mail – Tired of unwanted email come-ons? Let our SpamGuard protect you. http://uk.docs.yahoo.com/nowyoucan.html





Re: [sqlite] One more SQLite threading question

2007-05-17 Thread John Stanton

Ed Pasma wrote:

However, it would be too time consuming to serialize every call to
sqlite3_step(), so I wonder whether it can be called in another  
thread.



This almost immediately raises
"library routine called out of sequence". It occurs as soon as the
processing of A and B overlap, that means A is preparing statement #2
while B is still executing #1.



Have you tried using a mutex or some other way to prevent truly
simultaneous calls into SQLite? My guess is that it should work
well then, but I haven't tried it myself...



Yes, I did the same experiment with a lock that made thread A wait until 
B was finished. So actually only one thread can be active at a time. 
I don't see how the outcome of this experiment can be of any interest, 
as there is no time reduction any longer. But your guess is right that 
it works.


How would multiple threads be faster than a single one when you are 
accessing a single resource?
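The serialize-with-a-lock experiment discussed above can be sketched with Python's sqlite3 module (an illustration, not the original code; table and variable names are invented).  One shared connection, with a lock around every SQLite call so two threads never overlap inside the library:

```python
import sqlite3
import threading

# One shared connection; a lock serializes every call so two threads are
# never inside SQLite at the same time. As the thread notes, this is
# correct but no faster than a single thread, since the threads only
# help if they do other work outside the lock.
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE t (v INTEGER)")
lock = threading.Lock()

def worker(base):
    for i in range(100):
        with lock:  # serialize all access to the shared handle
            con.execute("INSERT INTO t VALUES (?)", (base + i,))

threads = [threading.Thread(target=worker, args=(n * 1000,)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with lock:
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 200
```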






Re: [sqlite] One more SQLite threading question

2007-05-19 Thread John Stanton

You can get what you want right now.  It is called PostgreSQL.

Ken wrote:
I would be interested in a version of SQLITE that handled threading in a 
much cleaner way. I have a need for a single-process version that is 
threaded.

 But where SQLITE locking is concerned, each thread is really like a 
separate database connection. The locking occurs as part of the Pager 
locking, which covers the whole file for the duration of the transaction. 
AFAIK, the shared cache API is pretty worthless, as the only way to 
implement it is through a single "server" thread. In that case, what's 
the point of a shared cache? What is it shared against, since all 
threads must send data to the shared server anyway and none may access 
it concurrently?

 One thing that other database engines do is allow reads and writes to 
occur without blocking. That is, a reader never blocks a writer and a 
writer never blocks a reader. SQLITE does not do this: only a single 
writer or multiple readers, but not both concurrently.

 I'm not trying to pick on sqlite, just pointing out that it really 
doesn't perform multi-threading or even concurrent access very well in a 
read/write environment. Read only, it's great. Single-threaded 
read/write: very good as well.

 Regards,

 Ken
 
 
 
 

Doug Nebeker <[EMAIL PROTECTED]> wrote:

Yes, I did the same experiment with a lock that made thread A wait until 
B was finished. So actually only one thread can be active at a time. 
I don't see how the outcome of this experiment can be of any interest, 
as there is no time reduction any longer. But your guess is right that 
it works.


How would multiple threads be faster than a single one when you are
accessing a single resource?

Presumably the thread that is accessing the database either spends some
time gathering data to write or processing data it read.  The single
resource isn't in use during that time.

This email was sent to you by Reuters, the global news and information company. 
To find out more about Reuters visit www.about.reuters.com


Any views expressed in this message are those of the individual sender, 
except where the sender specifically states them to be the views of Reuters Limited.


Reuters Limited is part of the Reuters Group of companies, of which Reuters 
Group PLC is the ultimate parent company.
Reuters Group PLC - Registered office address: The Reuters Building, South 
Colonnade, Canary Wharf, London E14 5EP, United Kingdom
Registered No: 3296375
Registered in England and Wales












Re: [sqlite] One more SQLite threading question

2007-05-19 Thread John Stanton

Doug Nebeker wrote:
Yes, I did the same experiment with a lock that made thread A wait until 
B was finished. So actually only one thread can be active at a time. 
I don't see how the outcome of this experiment can be of any interest, 
as there is no time reduction any longer. But your guess is right that 
it works.


How would multiple threads be faster than a single one when you are
accessing a single resource?

Presumably the thread that is accessing the database either spends some
time gathering data to write or processing data it read.  The single
resource isn't in use during that time.

I just noticed that threading is often misunderstood.  It is a way of 
sharing resources, not of making a single resource, like a CPU or a 
database handler using a single disk drive, operate faster.












Re: [sqlite] Sqlite Server

2007-05-27 Thread John Stanton
There is no Sqlite Server unless you use a third party product.  Sqlite 
is a library which links into your application.


noname wrote:

I am using SQL Server as a back end in my VB6 application and I want to
switch over to the SQLite server, but the terrainformatica.com site has not
provided rates for the server, so I am confused about whether I should use
it. I enquired on other sites as well, but I am not getting a proper answer
from them. My application processes a huge amount of hospital data. Please
suggest what I should do: may I wait for the SQLite server, or can you
suggest another server like it for which developer licences are available?
My client should not need to purchase a back end.



arvind parmar 
[EMAIL PROTECTED]








Re: [sqlite] Embedded SQL in C

2007-05-27 Thread John Stanton
The Sqlite BIND capability makes implementing an embedded SQL interface 
quite a simple operation.


We use embedded SQL with Sqlite but do it within a proprietary language. 
 It was almost trivial to implement.
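The bind capability referred to here can be seen through Python's sqlite3 module (a sketch only; the table and host-variable names are invented).  '?' placeholders are bound to host-language variables at execution time, which is essentially the call sequence an embedded-SQL precompiler would generate:

```python
import sqlite3

# The bind mechanism, viewed through Python's sqlite3 module: '?' markers
# are bound to host-language variables at execution time, which is
# essentially what an embedded-SQL precompiler generates calls for.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance REAL)")

name, balance = "leif", 125.50            # host variables
con.execute("INSERT INTO accounts VALUES (?, ?)", (name, balance))

row = con.execute("SELECT balance FROM accounts WHERE name = ?",
                  (name,)).fetchone()
print(row[0])  # 125.5
```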


Leif Jensen wrote:

 Hi,

  In a larger project we are using PostgreSQL database and Embedded SQL 
in C (using ECPG) in a server daemon. We would like to be able to have 
this to work with SQLite for a stand-alone application. The Embedded SQL 
in C standard is as used by Oracle (and former Informix) and of course 
PostgreSQL.


  Does anyone know of an Embedded SQL in C (pre-compiler/interface) that 
will work with SQLite ?


Leif











Re: [sqlite] Embedded SQL in C

2007-05-27 Thread John Stanton
We have an application server which incorporates Sqlite.  It implements 
a proprietary language to generate embedded server pages and these use 
regular embedded SQL compiled into the executables.  It also has a 
remote procedure call interface which accesses a repository of SQL, using 
embedded SQL syntax bound to variable names in Javascript running in 
browsers.


Implementing a preprocessor which intercepts SQL in a C program and 
embeds calls to the Sqlite API would provide what you want.  The Sqlite 
bind call seems to have been implemented in such a way as to facilitate 
embedded SQL.


Leif Jensen wrote:
  That sounds very interesting. Could you please elaborate a little more 
on that ?


Leif


John Stanton wrote:

The Sqlite BIND capability makes implementing an embedded SQl 
interface quite a simple operation.


We use embedded SQL with Sqlite but do it within a proprietary 
language.  It was almost trivial to implement.


Leif Jensen wrote:


 Hi,

  In a larger project we are using PostgreSQL database and Embedded 
SQL in C (using ECPG) in a server daemon. We would like to be able to 
have this to work with SQLite for a stand-alone application. The 
Embedded SQL in C standard is as used by Oracle (and former Informix) 
and of course PostgreSQL.


  Does anyone know of an Embedded SQL in C (pre-compiler/interface) 
that will work with SQLite ?


Leif











Re: [sqlite] Re: CAST

2007-05-28 Thread John Stanton

Robert Simpson wrote:

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, May 28, 2007 9:11 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST




SQLite does not have a dedicated DATE type.


I know that, but why doesn't it create an appropriate column definition?
create table tab(col date);
creates a table with "date" type.
create table tab2 as select * from tab;
also.
This type doesn't do much, but it can be queried with
sqlite3_column_decltype.
This way I know what to do with text stored in the database.
Currently the only way I see is to create the table and then insert .. as
select,
which seems weird in the presence of a function that "changes the
datatype".



sqlite3_column_decltype() returns the declared type of the column in the
table, not the name of the type declared in a particular row in a particular
column of a table.

I wonder how feasible it'd be to implement an extensible type system into
SQLite's core.  We have extensibility through user-defined functions and
collating sequences, why not user-defined types?

Robert

We actually do that with our Sqlite interfaces.  We use the declared 
type to specify the type and perform a conversion when necessary.  For 
example if the declared type of a column is DATE we know that we 
actually have a FLOAT so when we call a Javascript SQL function for 
example the Sqlite user function transforms the floating point number 
declared as a DATE to be a Javascript Date object.


A FLOAT with declared type DATE may be transformed into a date string in 
accordance with the rules of the chosen locale when being embedded in an 
HTML page.


Our applications use DECIMAL arithmetic for accuracy so when a column is 
declared as DECIMAL its actual type is TEXT but arithmetic rules are 
applied.  If Sqlite has decided to make it an INTEGER or a FLOAT the 
correct conversion is made with the declared precision and scale.
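The declared-type-driven conversion described above has a close analogue in Python's sqlite3 module, whose PARSE_DECLTYPES option selects a converter from the column's declared type.  A sketch (the table is invented; SQLite itself still stores plain text underneath):

```python
import datetime
import sqlite3

# The column's *declared* type ("DATE") selects a conversion applied when
# values are read back, even though SQLite stores the value as text --
# the same idea as driving conversions off sqlite3_column_decltype().
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE due (item TEXT, due_date DATE)")
con.execute("INSERT INTO due VALUES (?, ?)",
            ("invoice", datetime.date(2007, 5, 28)))

row = con.execute("SELECT due_date FROM due").fetchone()
print(type(row[0]).__name__)  # date -- converted via the declared type
```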











Re: [sqlite] Re: CAST

2007-05-28 Thread John Stanton

Read about manifest typing and it will become clear.

[EMAIL PROTECTED] wrote:
SQLite does not have a dedicated DATE type. 



I know that, but why doesn't it create an appropriate column definition?
create table tab(col date);
creates a table with "date" type.
create table tab2 as select * from tab;

also.
This type doesn't do much, but it can be queried with sqlite3_column_decltype.
This way I know what to do with text stored in the database.
Currently the only way I see is to create the table and then insert .. as select,
which seems weird in the presence of a function that "changes the datatype".



- Original Message -
From: "Igor Tandetnik" <[EMAIL PROTECTED]>
To: "SQLite" 
Date: Mon, 28 May 2007 10:36:50 -0400
Subject: [sqlite] Re: CAST



[EMAIL PROTECTED] wrote:


I'm wondering if CAST is supposed to work?


Yes.



sqlite> create table tab(col date);
sqlite> insert into tab values('1994-11-11');
sqlite> create table tab2 as select cast(col as DATE) from tab;
sqlite> .schema tab2
CREATE TABLE tab2("cast(col as DATE)");
sqlite> select * from tab2;
1994


SQLite does not have a dedicated DATE type. See 
http://sqlite.org/datatype3.html . When given an unknown type, SQLite 
assumes numeric. That's why CAST('1994-11-11' as DATE) produces 1994. So 
would CAST('1994-11-11' as ANY_RANDOM_STRING).


It is customary to store dates as strings in SQLite. Several functions 
are provided to manipulate dates in this representation.
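Both points can be reproduced with Python's sqlite3 module (a minimal sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# CAST to an unrecognized type name applies NUMERIC affinity, so only the
# leading digits of the string survive the conversion.
cast_val = con.execute("SELECT CAST('1994-11-11' AS DATE)").fetchone()[0]
print(cast_val)  # 1994

# The customary alternative: store ISO-8601 text and manipulate it with
# the built-in date functions.
next_day = con.execute("SELECT date('1994-11-11', '+1 day')").fetchone()[0]
print(next_day)  # 1994-11-12
```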


Igor Tandetnik 





















Re: [sqlite] Re: CAST

2007-05-29 Thread John Stanton

Robert Simpson wrote:

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Monday, May 28, 2007 4:21 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST

We actually do that with our Sqlite interfaces.  We use the declared
type to specify the type and perform a conversion when necessary.  For
example if the declared type of a column is DATE we know that we
actually have a FLOAT so when we call a Javascript SQL function for
example the Sqlite user function transforms the floating point number
declared as a DATE to be a Javascript Date object.

A FLOAT with declared type DATE may be transformed into a date string
in
accordance with the rules of the chosen locale when being embedded in
an
HTML page.

Our applications use DECIMAL arithmetic for accuracy so when a column
is
declared as DECIMAL its actual type is TEXT but arithmetic rules are
applied.  If Sqlite has decided to make it an INTEGER or a FLOAT the
correct conversion is made with the declared precision and scale.



I use the declared type as well (where possible) in the SQLite .NET
provider.  Still, it'd be better if we could have an extensible type system
built into the engine itself so one could perform literal to column
comparisons and other SQL statements that are beyond the scope of a wrapper
to provide type adjustments for.

Your date example is a perfect example of a place where an extensible type
system would be ideal.  A SQL statement against a DATE field such as ...

  SELECT * FROM FOO WHERE [TheDate] < '2007/01/01'

...is pretty much impossible to fix in a wrapper -- but if we had an
extensible type system we could provide our own comparison func that takes
[TheDate]'s value and the literal value and figures out how to interpret and
compare the two values.

Robert

Your comments endorse the approach we took, which was to avoid the 
wrapper concept entirely with its inherent limitations.  We use Sqlite as 
an embedded database in an application server rather than trying to 
integrate an API which wraps Sqlite.


In the case of the date we implement date arithmetic and comparison 
functions building upon the excellent date primitives in the Sqlite 
source.  However your case does not require any special functions as it 
is handled simply by the regular Sqlite date functions.


An example of the use of date arithmetic might be:

SELECT * FROM mytab WHERE days_overdue(due_date) > 90;

We implement the user functions either as native code or as Javascript. 
 The Javascript has the advantage that the text is stored in the 
database, so the functions can be state-driven.
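A hedged sketch of such a user function through Python's sqlite3 create_function: days_overdue here is a re-implementation for illustration (not the native or Javascript version described), and the fixed "today", table, and contents are invented:

```python
import datetime
import sqlite3

con = sqlite3.connect(":memory:")
today = datetime.date(2007, 5, 29)  # fixed for reproducibility

def days_overdue(due_date):
    # Dates are stored as ISO-8601 text, per SQLite convention.
    return (today - datetime.date.fromisoformat(due_date)).days

# Register the host-language function so SQL can call it by name.
con.create_function("days_overdue", 1, days_overdue)
con.execute("CREATE TABLE mytab (item TEXT, due_date TEXT)")
con.execute("INSERT INTO mytab VALUES ('old', '2007-01-01'), "
            "('new', '2007-05-01')")

rows = con.execute(
    "SELECT item FROM mytab WHERE days_overdue(due_date) > 90").fetchall()
print(rows)  # [('old',)]
```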











Re: [sqlite] Join

2007-05-29 Thread John Stanton

See the ATTACH statement.
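A minimal sketch of ATTACH through Python's sqlite3 module (file and table names are invented for illustration): once a second database file is attached, its tables can be joined against tables in the main database directly.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
a_path, b_path = os.path.join(tmp, "a.db"), os.path.join(tmp, "b.db")

# Main database file with one table.
con = sqlite3.connect(a_path)
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
con.execute("INSERT INTO orders VALUES (1, 10)")

# Second, independent database file.
other = sqlite3.connect(b_path)
other.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
other.execute("INSERT INTO customers VALUES (10, 'Shilpa')")
other.commit()
other.close()

# Attach the second file and join across it using a schema prefix.
con.execute("ATTACH DATABASE ? AS other", (b_path,))
row = con.execute("""SELECT o.id, c.name
                     FROM orders o JOIN other.customers c
                       ON o.customer_id = c.id""").fetchone()
print(row)  # (1, 'Shilpa')
```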

Shilpa Sheoran wrote:

Does sqlite allow joining tables in different database files using
triggers or any other mechanism? Does it affect the performance?

Thanks
Shilpa










Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

2007-05-29 Thread John Stanton
Since you are only using part of Sqlite have you considered using a much 
smaller footprint storage system which only implements the functions you 
are using?


Kalyani Tummala wrote:

Hi joe,

Thanks for your response. 


In order to reduce the footprint size, I have bypassed the parser completely
and am using byte codes directly, as my schema and queries are almost
fixed at compile time. Hence I am not using sqlite3_prepare(). 
 
The following is the schema and inserts I am using. 
CREATE TABLE OBJECT(
    PUOI                INTEGER PRIMARY KEY,
    Storage_Id          INTEGER,
    Object_Format       INTEGER,
    Protection_Status   INTEGER,
    Object_Size         INTEGER,
    Parent_Object       INTEGER,
    Non_Consumable      INTEGER,
    Object_file_name    TEXT,
    Name                TEXT,
    File_Path           TEXT
);

CREATE TABLE AUDIO(
    PUOI                INTEGER PRIMARY KEY,
    Use_Count           INTEGER,
    Audio_Bit_Rate      INTEGER,
    Sample_Rate         INTEGER,
    Audio_Codec_Type    INTEGER,
    Number_of_Channels  INTEGER,
    Track               INTEGER,
    Artist              TEXT,
    Title               TEXT,
    Genre               TEXT,
    Album_Name          TEXT,
    File_Path           TEXT
);

INSERT INTO OBJECT VALUES (
 7, 65537, 12297, 0,
 475805, 6, 0, 
 'ANJANEYASTOTRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);


INSERT INTO AUDIO VALUES (
 7, 6, 144100, 0,
 0, 0, 6, 
 NULL, NULL, NULL, NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);

INSERT INTO OBJECT VALUES (
 8, 65537, 12297, 0,
 387406, 6, 0, 
 'BHADRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


 INSERT INTO AUDIO VALUES (
 8, 6, 144100, 0,
 0, 0, 6, 
 NULL, NULL, NULL, NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


Warm regards
Kalyani

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 9:42 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?



I am working at porting sqlite (ver 3.3.8) on an embedded device with
extremely low main memory.

I tried running select queries on the tables (with about 2k records each,
having about 5 strings) and they do well within 20kB of runtime heap
usage.

But when I try new insertions, the heap usage grows tremendously (about
70 kB at peak).



Perhaps preparing the statements (sqlite3_prepare) might decrease RAM 
use somewhat.


Can you post an example of your schema and these insert statements?



   






**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
[EMAIL PROTECTED]
**









Re: [sqlite] Re: CAST

2007-05-29 Thread John Stanton

Robert Simpson wrote:

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 6:18 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST

Your comments endorse the approach we took which was to avoid the
wrapper concept entirely with its inherent limitations  We use Sqlite
as
an embedded database in an application server rather than trying to
integrate an API which wraps Sqlite.

In the case of the date we implement date arithmetic and comparison
functions building upon the excellent date primitives in the Sqlite
source.  However your case does not require any special functions as it
is handled simply by the regular Sqlite date functions.

An example of the use of date arithmetic might be:

SELECT * FROM mytab WHERE days_overdue(due_date) > 90;

We implement the user functions either as native code or as Javascript.
 The Javascript has the advantage that the text is stored in the
database so the functions can be state-driven.



Ok, I grant you that was a bad example -- I was in a hurry when I posted it.
It also requires that datetimes are stored in the database as a string.  If
someone used a numeric type instead, you'd be forced into using a
user-defined function for a comparison against a literal string.  If all
users were nice and tidy and used parameterized queries, the world would be
a better place -- but try as you might to drill it into folks, they still
throw their literals into a query and defeat your nice inline type
adjusters.

I've had to use a sturdy shoehorn and a lot of grease to squeeze SQLite into
the ADO.NET vNext (Entity Framework) recently, so a lot of my type
frustrations stem from that effort.

MS's Entity Framework uses CAST() liberally in its SQL construction, and
expects the returned values to match up to the type it was CAST() to --
which is currently impossible to do.  I can't cast to a datetime, guid,
int32, float, etc in SQLite and there's no way for me to tell what datatype
was mentioned in the CAST statement for a given returned column.  So when
I'm queried for the type of a column returned from one of these SELECT's,
there's no way for me to give back an accurate type.

I've hacked it up and done it, but its a bit ugly.  Fortunately users don't
need to see the underlying SQL generated from the Entity Framework :)

Robert

You have just given an excellent explanation of why the wrapper approach 
is flawed.  Think about it.











Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

2007-05-29 Thread John Stanton
In your case we would not use Sqlite and instead use a much simpler 
storage method.  Since your storage appears to be RAM resident that 
approach is indicated a fortiori.


We have had success with using storage based on AVL trees.  It is very 
fast and remains so despite repeated insertions and deletions.  The code 
footprint is tiny (10K) and there is no heap usage so memory leakage can 
never be a problem.  You do not have SQL in that environment but it 
would appear that you are not using it anyway.  Since your data is 
memory resident ACID compliance and logging are not an issue.


Even with quite detailed data manipulation you would be hard pressed to 
have a footprint greater than 30K.  You could cut that down by defining 
code like VDBE with a high information density and using a simple engine 
to interpret that metacode.  We have successfully used that approach at 
times.
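Not a substitute for the AVL approach above, but one knob worth trying first for capping SQLite's own heap use is the page cache.  A sketch with Python's sqlite3 module (the figures and table are illustrative only, not a recommendation for Kalyani's device):

```python
import sqlite3

# Shrink the page cache so SQLite caps its own page-cache memory, and
# batch the inserts into one transaction to reduce journal churn.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA page_size = 1024")
con.execute("PRAGMA cache_size = 20")   # at most ~20 pages (~20 KiB) cached
con.execute("CREATE TABLE object (puoi INTEGER PRIMARY KEY, name TEXT)")

with con:                               # single transaction for all rows
    con.executemany("INSERT INTO object VALUES (?, ?)",
                    ((i, f"track{i}.mp3") for i in range(2000)))

n = con.execute("SELECT COUNT(*) FROM object").fetchone()[0]
print(n)  # 2000
```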


Kalyani Tummala wrote:

Hi John,
I could not understand your query properly. Let me tell you my
application scenario. 


I am planning to use sqlite as a database for storing and retrieving
media data of about 5-10k records in a device whose main memory is
extremely small. A sequence of insert statements increases the heap
usage to nearly 70K (almost the saturation point), which is crashing my
application. I want to restrict this to 30K. 

I tried closing the database and reopening after some inserts, but to no
avail. 

I have observed that when I open the database with about 1K to 2K
records in it, inserts and updates take more heap and also gradually
increase, compared with a database with fewer than 1K records in it. 


My objective is to reduce the peak heap usage during inserts, updates
and also deletes with little or no performance degradation.

Please suggest me if I can do anything to do so.

Thank you in advance
Kalyani



  


-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 6:51 PM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?

Since you are only using part of Sqlite have you considered using a much

smaller footprint storage system which only implements the functions you

are using?

Kalyani Tummala wrote:


Hi joe,

Thanks for your response. 


In order to reduce the footprint size, I have bypassed the parser
completely and am using byte codes directly, as my schema and queries are
almost fixed at compile time. Hence I am not using sqlite3_prepare(). 

The following is the schema and inserts I am using. 
CREATE TABLE OBJECT(
    PUOI                INTEGER PRIMARY KEY,
    Storage_Id          INTEGER,
    Object_Format       INTEGER,
    Protection_Status   INTEGER,
    Object_Size         INTEGER,
    Parent_Object       INTEGER,
    Non_Consumable      INTEGER,
    Object_file_name    TEXT,
    Name                TEXT,
    File_Path           TEXT
);

CREATE TABLE AUDIO(
    PUOI                INTEGER PRIMARY KEY,
    Use_Count           INTEGER,
    Audio_Bit_Rate      INTEGER,
    Sample_Rate         INTEGER,
    Audio_Codec_Type    INTEGER,
    Number_of_Channels  INTEGER,
    Track               INTEGER,
    Artist              TEXT,
    Title               TEXT,
    Genre               TEXT,
    Album_Name          TEXT,
    File_Path           TEXT
);

INSERT INTO OBJECT VALUES (
7, 65537, 12297, 0,
475805, 6, 0, 
'ANJANEYASTOTRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);


INSERT INTO AUDIO VALUES (
7, 6, 144100, 0,
0, 0, 6, 
NULL, NULL, NULL, NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);

INSERT INTO OBJECT VALUES (
8, 65537, 12297, 0,
387406, 6, 0, 
'BHADRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


INSERT INTO AUDIO VALUES (
8, 6, 144100, 0,
0, 0, 6, 
NULL, NULL, NULL, NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


Warm regards
Kalyani

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 9:42 AM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?




I am working at porting sqlite (ver 3.3.8) on an embedded device with
extremely low main memory.

I tried running select queries on the tables (with about 2k records each,
having about 5 strings) and they do well within 20kB of runtime heap
usage.

But when I try new insertions, the heap usage grows tremendously (about
70 kB at peak).



Perhaps preparing the statements (sqlite3_prepare) might decrease RAM 
use somewhat.


Can you post an example of your schema and these insert statements?



  







Re: [sqlite] Re: CAST

2007-05-29 Thread John Stanton

Robert Simpson wrote:

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 8:40 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST

You have just given an excellent explanation of why the wrapper
approach
is flawed.  Think about it.



Every approach not deliberately designed around a specific module's exposed
API is fundamentally flawed in that it is limited by the constraints of the
wrapper.  The answer is not "don't use a wrapper" though.

"Think about it"
In my case, I can only speak for .NET and Microsoft environments, so here
goes:

1.  The complexities of interop'ing with SQLite's API are enormous -- an
insurmountable feat for a novice programmer to get right.  As a matter of
fact, most would simply get it wrong, blame it on SQLite, and switch to a
database engine that works better with their chosen design environment.

2.  A mountain of custom code has to be written to interact with SQLite, all
to use an engine whose familiar SQL92 syntax is so tantalizingly close to
the other databases they've used that one can't help but wonder "why do I
have to go to all this trouble just to use the same SQL syntax as my other
database programs?"  A wrapper solves this problem.

3.  A user can learn one API and apply it to multiple databases, instead of
learning one database API and applying it to a single type of database.

4.  The user can write engine-agnostic code and concentrate on the SQL
variations rather than wondering if they get the API code right and called
things in the right sequence.

5.  The wrapper expands the influence and usage of SQLite significantly.

Let's face it ... everyone likes potato chips, but if you had to make them
yourself every time you got a craving, you'd probably think twice.  On the
other hand, if someone already made them and all you had to do was pick them
off the shelf, you'd be much more inclined to eat them.

So yes, homemade potato chips do taste better than the ones you buy at the
store, provided you get good potatoes, prepare them, peel them with a proper
tool, and watch them in the fryer carefully to avoid burning them.  Of
course you stink up the house and make an enormous mess, but that's the
price you pay for good chips.

So yes, the wrapper approach is flawed, but most folks don't want to stink
up their house trying to roll their own access layer.

Robert

You are looking for a fit to one particular restrictive, proprietary 
environment.  Our approach has been to work with the spirit of Sqlite 
and to its strengths, and to that end we designed our environment 
accordingly.  Sqlite's typing has become a major asset, not a difficulty.











Re: [sqlite] Re: CAST

2007-05-31 Thread John Stanton
You have explained the problem, which is .NET not Sqlite.  You have 
apparently done a fine job marrying the two but it might be more logical 
to suggest that .NET be made more flexible.


As for flexibility, programs in C or Assembler are only inflexible at 
the level of the underlying machine or operating system.  They are a 
tool to use to build an environment.


Initially the typing rules in Sqlite appeared to be a nuisance, but on 
deeper thought their utility became apparent and it was possible to use 
them to great advantage.  To bloat or impair that advantage just to 
match a particular concept like .NET would be a tragedy.


You might consider developing an SQL engine ideally adapted to .NET.

Robert Simpson wrote:

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 3:56 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST

You are looking for a fit to one particular restrictive, proprietary
environment.  Our approach has been to work with the spirit of Sqlite
and to its strengths and to that end we designed out environment
accordingly.  Sqlite's typing has become a major asset, not a
difficulty.



All environments, proprietary or not, are restrictive in one way or another
- including C.  SQLite is flexible and adaptable, and capable of being
wedged into quite a few places -- which is what makes it a great little
engine.

For you that means captializing on SQLite's strengths and using its
typelessness as an asset.  For me, it means bringing SQLite to a mass of
.NET folks (Mono and MS) who would otherwise pass it by.  If that means
kludging a type system together to hide SQLite's typelessness, so be it.
I'd rather blunt the edge than throw the whole knife out.

Robert











Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

2007-05-31 Thread John Stanton
Your design trade-off is memory against development time.  It obviously 
depends upon your product.  If you are making millions of them spend the 
time and save memory, otherwise add memory.


If I used the method described earlier I would memory map it to a disk 
file if the underlying OS supports that.  That would leave you with a 
power fail strategy which could be journalling.  If you have unreliable 
hardware which corrupts data you might look to better equipment rather 
than use error correcting algorithms.


Kalyani Tummala wrote:

Hi John,

My main memory is very limited but I have a large disk to keep the
database on. I need to serialize the data when the device is in
switched-off mode or in a different application mode where the database
is not required. I need to take care of power failure, data corruption, etc.

I will consider your advice, but how extensible and flexible is it for
future modifications? 


Regards
Kalyani

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 9:25 PM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?

In your case we would not use Sqlite and instead use a much simpler 
storage method.  Since your storage appears to be RAM resident that 
approach is indicated a fortiori.


We have had success with using storage based on AVL trees.  It is very 
fast and remains so despite repeated insertions and deletions.  The code 
footprint is tiny (10K) and there is no heap usage so memory leakage can 
never be a problem.  You do not have SQL in that environment but it 
would appear that you are not using it anyway.  Since your data is 
memory resident ACID compliance and logging are not an issue.
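The AVL structure John describes can be sketched as follows. This is an illustrative Python version (insert and search only), not the tiny-footprint C implementation he is describing; all names are mine:

```python
class AVLNode:
    """One key/value pair in a height-balanced binary search tree."""
    __slots__ = ("key", "val", "left", "right", "height")

    def __init__(self, key, val):
        self.key, self.val = key, val
        self.left = self.right = None
        self.height = 1

def _h(n):                      # height of a possibly-empty subtree
    return n.height if n else 0

def _fix(n):                    # recompute the cached height
    n.height = 1 + max(_h(n.left), _h(n.right))

def _rot_left(n):
    r = n.right
    n.right = r.left
    r.left = n
    _fix(n); _fix(r)
    return r

def _rot_right(n):
    l = n.left
    n.left = l.right
    l.right = n
    _fix(n); _fix(l)
    return l

def insert(n, key, val):
    """Insert key/val under node n, rebalancing; returns the new subtree root."""
    if n is None:
        return AVLNode(key, val)
    if key < n.key:
        n.left = insert(n.left, key, val)
    elif key > n.key:
        n.right = insert(n.right, key, val)
    else:
        n.val = val             # duplicate key: overwrite in place
        return n
    _fix(n)
    bal = _h(n.left) - _h(n.right)
    if bal > 1:                 # left-heavy
        if key > n.left.key:    # left-right case needs a pre-rotation
            n.left = _rot_left(n.left)
        return _rot_right(n)
    if bal < -1:                # right-heavy
        if key < n.right.key:   # right-left case
            n.right = _rot_right(n.right)
        return _rot_left(n)
    return n

def search(n, key):
    while n:
        if key == n.key:
            return n.val
        n = n.left if key < n.key else n.right
    return None

root = None
for i in range(1000):           # sorted input: worst case for a plain BST
    root = insert(root, i, str(i))
```

The rotations keep the tree's height logarithmic even under sorted insertion, which is why lookup stays fast "despite repeated insertions and deletions".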


Even with quite detailed data manipulation you would be hard pressed to 
have a footprint greater than 30K.  You could cut that down by defining 
code like VDBE with a high information density and using a simple engine 
to interpret that metacode.  We have successfully used that approach at 
times.


Kalyani Tummala wrote:


Hi John,
I could not understand your query properly. Let me tell you my
application scenario. 


I am planning to use sqlite as a database for storing and retrieving
media data of about 5-10k records in a device whose main memory is
extremely small. A sequence of insert statements increases the heap
usage to nearly 70K (almost the saturation point), which crashes my
application. I want to restrict this to 30K. 

I tried closing database and reopen after some inserts but of no use. 


I have observed that, when I open the database with about 1K to 2K
records in it, inserts and updates take more heap, and it grows faster,
than with a database of fewer than 1K records. 


My objective is to reduce the peak heap usage during inserts, updates
and also deletes with little or no performance degradation.

Please suggest me if I can do anything to do so.

Thank you in advance
Kalyani



 


-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 6:51 PM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?

Since you are only using part of Sqlite have you considered using a much
smaller footprint storage system which only implements the functions you
are using?

Kalyani Tummala wrote:



Hi joe,

Thanks for your response. 


In order to reduce the footprint size, I have bypassed the parser
completely and use byte codes directly, as my schema and queries are
almost fixed at compile time. Hence I am not using sqlite3_prepare(). 

The following is the schema and inserts I am using. 
CREATE TABLE OBJECT(
    PUOI                INTEGER PRIMARY KEY,
    Storage_Id          INTEGER,
    Object_Format       INTEGER,
    Protection_Status   INTEGER,
    Object_Size         INTEGER,
    Parent_Object       INTEGER,
    Non_Consumable      INTEGER,
    Object_file_name    TEXT,
    Name                TEXT,
    File_Path           TEXT
);

CREATE TABLE AUDIO(
    PUOI                INTEGER PRIMARY KEY,
    Use_Count           INTEGER,
    Audio_Bit_Rate      INTEGER,
    Sample_Rate         INTEGER,
    Audio_Codec_Type    INTEGER,
    Number_of_Channels  INTEGER,
    Track               INTEGER,
    Artist              TEXT,
    Title               TEXT,
    Genre               TEXT,
    Album_Name          TEXT,
    File_Path           TEXT
);

INSERT INTO OBJECT VALUES (
7, 65537, 12297, 0,
475805, 6, 0, 
'ANJANEYASTOTRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);


INSERT INTO AUDIO VALUES (
7, 6, 144100, 0,
0, 0, 6, 
NULL, NULL, NULL, NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);
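As a sanity check, the schema and inserts above can be exercised against an in-memory database with Python's stdlib sqlite3 module (a sketch with a trimmed row list; note that batching the INSERTs inside one transaction also keeps journal growth, and therefore peak allocation, lower than autocommitting each statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE OBJECT(
    PUOI              INTEGER PRIMARY KEY,
    Storage_Id        INTEGER,
    Object_Format     INTEGER,
    Protection_Status INTEGER,
    Object_Size       INTEGER,
    Parent_Object     INTEGER,
    Non_Consumable    INTEGER,
    Object_file_name  TEXT,
    Name              TEXT,
    File_Path         TEXT)""")

rows = [
    (7, 65537, 12297, 0, 475805, 6, 0, "ANJANEYASTOTRAM.mp3", None,
     r"C:\MTPSim\Store0\Music\Artist\Album\ANJANEYASTOTRAM.mp3"),
    (8, 65537, 12297, 0, 387406, 6, 0, "BHADRAM.mp3", None,
     r"C:\MTPSim\Store0\Music\Artist\Album\BHADRAM.mp3"),
]

# One transaction for the whole batch instead of one per INSERT.
with conn:
    conn.executemany("INSERT INTO OBJECT VALUES (?,?,?,?,?,?,?,?,?,?)", rows)

count = conn.execute("SELECT COUNT(*) FROM OBJECT").fetchone()[0]
```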

INSERT INTO OBJECT VALUES (
8, 65537, 12297, 0,
387406, 6, 0, 
'BHADRAM.mp3', NULL,

'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


INSERT INTO AUDIO VALUES (
8, 6, 144100, 0,
0, 0, 6, 
NULL

Re: [sqlite] Re: CAST

2007-05-31 Thread John Stanton

John Elrick wrote:

Michael Schlenker wrote:


A. Pagaltzis schrieb:


* Samuel R. Neff <[EMAIL PROTECTED]> [2007-05-30 14:55]:


SQLite's typelessness is an asset if you work only with SQLite
but in any application that uses multiple database engines of
which SQLite is only one supported engine, the non-standard
typelessness is something that has to be worked around.



Can you give an example of such a case? I work with several
different DBMSs, myself, and I have yet to run into trouble with
SQLite’s approach. Can you give a real-life example?



Start by already having a wrapper that allows type introspection (e.g.
DESCRIBE on Oracle and reading the result set to find out what types are
returned); such a feature is basically non-portable to SQLite.



Although it would require some work, a thought that comes to mind would 
be to build a wrapper for CREATE TABLE which would permit you to define 
any types you wish and then store the information in a separate table 
which in some way mirrors sqlite_master.


Introspection would occur via this mechanism and would even move all 
introspection for any given system behind a common interface.


Just a thought.


John Elrick

CREATE TABLE already stores the type as its declared type.  The user has 
that available to enforce type restrictions or to direct type conversions.
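A sketch of what this means in practice: the declared type survives verbatim in the schema and can be read back with PRAGMA table_info (Python stdlib sqlite3; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE manatees(born DATE, name VARCHAR(40), weight DECIMAL(6,1))")

# PRAGMA table_info rows are (cid, name, declared type, notnull, default, pk);
# column 2 is the declared type exactly as written in CREATE TABLE.
decl = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(manatees)")}
```

An introspection wrapper can consult this map to enforce restrictions or direct conversions, without a separate shadow table.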











Re: [sqlite] Re: CAST

2007-05-31 Thread John Stanton

John Elrick wrote:

John Stanton wrote:


John Elrick wrote:




SNIP

Introspection would occur via this mechanism and would even move all 
introspection for any given system behind a common interface.


Just a thought.


John Elrick

CREATE TABLE already stores the type as its declared type.  The user 
has that available to enforce type restrictions or to direct type 
conversions.



My apologies, I meant "any type" as in types which are not defined by 
Sqlite, such as DATE, TIMESTAMP, ADDRESS, LEFT_HANDED_SPANNER, or 
DEAD_PARROT. 


John

Sqlite lets you put in anything as the declared type.  "DEAD PARROT", 
"MONGOOSE", "GODZILLA" or "DECIMAL(6,1)" are all acceptable declared 
types.  Sqlite makes the underlying type TEXT if it is not obviously 
numeric.
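A quick check of this behaviour (one caveat, per SQLite's documented type-affinity rules: a declared type that matches none of the recognized keywords actually gets NUMERIC affinity rather than TEXT, so a numeric-looking string is stored as a number and only non-numeric text stays text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x DEAD_PARROT)")        # any declared type is accepted
conn.execute("INSERT INTO t VALUES ('123')")         # looks numeric
conn.execute("INSERT INTO t VALUES ('norwegian blue')")  # does not

# typeof() reveals what SQLite actually stored under the fanciful type.
kinds = [r[0] for r in conn.execute("SELECT typeof(x) FROM t ORDER BY rowid")]
```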









Re: [sqlite] Re: CAST

2007-05-31 Thread John Stanton

Robert Simpson wrote:

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 31, 2007 4:08 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Re: CAST

You have explained the problem, which is .NET not Sqlite.  You have
apparently done a fine job marrying the two but it might be more
logical
to suggest that .NET be made more flexible.

As for flexibility, programs in C or Assembler are only inflexible at
the level of the underlying machine or operating system.  They are a
tool to use to build an environment.

Initially the typing rules in Sqlite appeared to be a nuisance but on
deeper thought their utility became apparent and it was possible to use
them to great advantage.  To bloat or impair that advantage just to
match a particular concept like .NET would be a tragedy.

You might consider developing an SQL engine ideally adapted to .NET.



I don't really know how we got here :) I think SQLite is a fantastic engine,
fast, free and flexible -- I wouldn't have spent the last 2 years
maintaining an ADO.NET provider for it if I didn't love it.  I'm certainly
not going to throw the baby out with the bath water and try and roll my own
SQL engine -- it misses the point entirely.  I'm here to support SQLite.

I'm not complaining about SQLite's lack of typing in general.  Could I use a
little help in making SQLite fit more seamlessly into these typed
environments?  Sure, the more help I get the better!

I'd love it if I could issue a CAST([MyCol] AS HAIRYLLAMA) and have some way
to yank HAIRYLLAMA out of the statement's metadata so I could tell what
specific type that column was CAST to.

Heck, I'd love it even more if there was an additional parameter in the
user-def function callback to provide SQLite a string name of a type
returned from a function so we could have more descriptive typing in there
as well.

The problem isn't .NET.  The problem is that every other database on the
planet enforces type restrictions, and hence every generic database access
layer (ODBC, OLEDB, ADO, ADO.NET) is designed around that concept.  "Don't
use a wrapper" doesn't work for every circumstance, and I can't change the
ADO.NET spec -- though I do get around it where I can to support SQLite's
typelessness.

My opinion is: Since SQLite is typeless, then it should probably have a few
more functions dedicated to type description.  Preserving the destination
typename on a CAST as well as in a userdef function would go a long way
toward accomplishing that.  Or even better, some kind of extensible type
system that enabled us to have type-specific comparison callbacks.

However -- we're not there, and may never get there, so I'll continue to
hack.  That's what open source is for, after all :)

Robert

There was a lot of fuss on the financial news tonight concerning Google 
Gears and its impact.  Sergey Brin was interviewed saying guilelessly 
that Google doesn't think of other companies, only users but the 
resident experts were saying that this is a blow at Microsoft.


Its is significant that the Sqlite based Google Gears seems to take 
advantage of typeless storage to produce a more general storage model. 
That was the point of my comments, Sqlite lets us advance our storage 
capabilities into a more flexible world.


The way we have chosen to use Sqlite has many parallels with the Google 
approach, presumably a logical development of the storage concepts.


We use PostgreSQL to map into environments requiring its capabilities 
and Sqlite where it fits well, such as in an application specific 
language we developed and with Javascript.  We don't use TCL but I 
understand it matches beautifully with Sqlite, by design.


If we were to rigidly lock into existing technologies we would still be 
using punch cards or perhaps clay tablets.





Re: [sqlite] Concurrency

2007-06-01 Thread John Stanton
We find that synchronizing access to database writes using a mutex works 
well.  You could think of implementing read and write locks using the 
thread primitives and achieve a better result.


If you do poll to resolve busy checks a spinlock is certainly a bad 
idea.  When we use that approach we will yield the thread after each busy.
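The mutex approach can be sketched like this (a Python illustration with stdlib threading and sqlite3; thread counts and the table are mine). Every writer takes the lock before touching the database, so no thread ever sees SQLITE_BUSY and none of them spins:

```python
import sqlite3
import threading

lock = threading.Lock()                          # serializes all writers
# check_same_thread=False because several threads share this connection;
# our lock guarantees only one of them uses it at a time.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE log(thread INTEGER, msg TEXT)")

def writer(tid, n):
    for i in range(n):
        with lock:                               # one writer at a time: no busy retries
            with conn:                           # commit each insert
                conn.execute("INSERT INTO log VALUES (?, ?)",
                             (tid, "event %d" % i))

threads = [threading.Thread(target=writer, args=(t, 50)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

Blocking on the mutex is an efficient wait state, unlike polling a busy handler.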


[EMAIL PROTECTED] wrote:

Hi,

I have a scenario where I have one reader/writer and many writer threads.
All writers are pretty basic (a single INSERT INTO; some sort of logging of
what a thread has done).

I believe I will receive many BUSY return codes and I don't like these
spinlock-like retries. The problem I am having with this design is that I would
like to complete the thread ASAP, so that I don't have many threads idling and
consuming resources of my embedded system.

I was thinking to either:

a. Use a mutex/semaphore before writing to the database, or

b. Have a (thread safe) list of INSERT INTO strings that every writer thread
populates and the main reader/writer thread later executes.

Is this a good approach? Does anyone have a better design? I don't want to use
other database, because I think Sqlite is great for an embedded system that I
am using.
___
The fastest free dial-up access :: Safe and reliable on the internet
Visit http://www.dialup386.com/








Re: [sqlite] Concurrency

2007-06-01 Thread John Stanton

Tom Briggs wrote:
 


I don't want to use
other database, because I think Sqlite is great for an 
embedded system that I

am using.



   I think that your own questions about concurrency prove this
incorrect.  If you need high concurrency and you don't like retries,
SQLite is not the database for you.

   -T



If you require ACID type data integrity and have a single disk there is 
no such thing as a "high concurrency database".  They all share a single 
disk resource in some way.  With Sqlite it is up to the designer to 
build in concurrency and that can be done by single streaming.  Better 
performance is achieved by using a single database connection so that 
cache hits are maximized.


If your design is such that you never get a busy then you have an 
effective allocation of your disk resource.  Using mutexes between 
threads and semaphores between processes gives you that capability.


To my mind the only time you should use the busy logic is when you are 
working across a network with shared files.






Re: [sqlite] Concurrency

2007-06-01 Thread John Stanton

Samuel R. Neff wrote:

If option (b), using a single thread for writing and a multi-threaded write
queue works in your situation, then that would probably provide best
concurrency and performance.  The only downside to this is the delayed
writes mean you don't as easily get feedback to the original writer if a
write fails (bad data for example).  You could build delayed
feedback/callback into the system, just depends on your application
architecture.
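The single-writer queue design (option b in the original post) might look like this sketch — a thread-safe queue drained by the one thread that owns the connection; the sentinel-based shutdown is just one possible design:

```python
import queue
import sqlite3
import threading

# check_same_thread=False: the connection is created here but used by the
# single writer thread below; all SQL still runs on exactly one thread at a time.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE log(msg TEXT)")

work = queue.Queue()
DONE = object()                              # sentinel telling the writer to stop

def db_writer():
    # The only thread that executes SQL: no lock contention, no SQLITE_BUSY.
    while True:
        item = work.get()
        if item is DONE:
            break
        sql, params = item
        conn.execute(sql, params)
    conn.commit()

def producer(tid):
    # Writer threads finish as soon as their work is queued.
    for i in range(25):
        work.put(("INSERT INTO log VALUES (?)",
                  ("thread %d event %d" % (tid, i),)))

writer = threading.Thread(target=db_writer)
writer.start()
producers = [threading.Thread(target=producer, args=(t,)) for t in range(4)]
for p in producers:
    p.start()
for p in producers:
    p.join()
work.put(DONE)
writer.join()

total = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

As Samuel notes, the trade-off is that a failed INSERT is detected after the producing thread has already moved on, so error feedback must be built in separately.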

Also, I would strongly suggest doing some actual testing and confirming
there is a bottleneck without any custom code to increase concurrency before
programming workarounds for an expected problem.  What we've found is that
SQLite is so much faster than other databases (MSSQL in particular) that
concurrency is greatly improved simply because each write is faster and thus
the time the db is locked is less even though the whole db is locked.

HTH,

Sam

---
We're Hiring! Seeking a passionate developer to join our team building
products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]
 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 01, 2007 5:18 AM

To: sqlite-users@sqlite.org
Subject: [sqlite] Concurrency

Hi,

I am having a scenario where I have one reader/writer and many writer
threads.
All writers are pretty basic (single INSERT INTO; some sort of a logging
info
what a thread has done).

I believe I will receive many BUSY return codes and I don't like these
spinlock-like retries. The problem I am having with this design is that I
would
like to complete the thread ASAP, so that I don't have many threads idling
and
consuming resources of my embedded system.

I was thinking to either:

a. Use a mutex/semaphore before writing to the database, or

b. Have a (thread safe) list of INSERT INTO strings that every writer thread
populates and the main reader/writer thread later executes.

Is this a good approach? Does anyone have a better design? I don't want to
use
other database, because I think Sqlite is great for an embedded system that
I
am using.



You make a very good point.  Someone called it the "cat running over the 
road" situation.  A cat runs across a busy road without looking by going 
as fast as possible and rarely does one get run over.  On the other hand 
roads are lined with dead armadillos.


If your synchronization logic is sound and you avoid polling or waits, 
then the faster your database runs, the more concurrent transactions per 
second you will achieve.





Re: [sqlite] Concurrency

2007-06-01 Thread John Stanton
With other applications you can use semaphores for synchronization and 
achieve minimal latency and a low overhead.


If you have networked files you are dependent upon the file locking and 
on the effectiveness of the cross network file locking.  This does not 
necessarily work as expected and in particular notorious cases does not 
work at all.


If you want maximal performance across a network you need to implement 
a server to manage the database on one machine.


Israel Figueroa wrote:

What if the database is locked by another application, or by another box?

The driver should poll until it gets an "idle database" and then throw an
event... and then, we're polling again.

I'm coding something similar to the original post... I'm not that good with
threads yet, but I did foresee that even with my tiny inserts there may be
some moment when I could get an SQLITE_BUSY. Is it bad to just keep retrying
the insert until I get SQLITE_OK? Assuming it is not a multiuser environment,
just one app accessing the database.

2007/6/1, Ian Frosst <[EMAIL PROTECTED]>:



On the topic of a more efficient busy handler, one approach I considered
was
to implement an event which was signalled when a database unlock 
occurred.

That way, the busy handler could just wait on the event (which is an
efficient wait state), and be guaranteed of a wake up when the lock is
released (the event would be signalled at this time.)  However, I wasn't
at
the time familiar enough with SQLite's innards to implement such a beast.
Can anyone see any pitfalls to such an approach?

Ian

On 6/1/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
> Tom Briggs wrote:
> >
> >
> >>I don't want to use
> >>other database, because I think Sqlite is great for an
> >>embedded system that I
> >>am using.
> >
> >
> >I think that your own questions about concurrency prove this
> > incorrect.  If you need high concurrency and you don't like retries,
> > SQLite is not the database for you.
> >
> >-T
> >
>
> If you require ACID type data integrity and have a single disk there is
> no such thing as a "high concurrency database".  They all share a 
single

> disk resource in some way.  With Sqlite it is up to the designer to
> build in concurrency and that cna be done by single streaming.  Better
> performance is achieved by using a single database connection so that
> cache hits are maximized.
>
> If your design is such that you never get a busy then you have an
> effective allocation of your disk resource.  Using mutexes between
> threads and semaphores between processes gives you that capability.
>
> To my mind the only time you should use the busy logic is when you are
> working across a network with shared files.
>
>
>
>











Re: [sqlite] extracting and comparing dates

2007-06-04 Thread John Stanton

Chris Fonnesbeck wrote:

I'm at a complete loss about how to work with dates in SQLite. The
documentation doesn't seem to be helping me. I have a table with some date
fields, in the proper yyyy-mm-dd format:

sqlite> select First_Capture from PIT_manatees limit 5;
1997-6-17
1998-5-6
1997-6-24
1998-5-6
2000-3-15

however, it does not respond properly to comparisons to dates:

sqlite> select First_Capture from PIT_manatees where First_Capture<2000-1-1
limit 5;
sqlite>

this doesn't work either:

sqlite> select First_Capture from PIT_manatees where
First_Capture<date('2000-1-1') limit 5;
sqlite>

note that these columns are in the 'date' format.

How does one extract years, months, dates from tables in SQLite? The
'extract' function also seems not to work.

Any help most appreciated.


Store the dates in the internal Sqlite julian mode.
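A sketch of John's suggestion — store the julianday() value, so comparisons are numeric and the text formatting quirks above disappear (column names follow the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE capture(name TEXT, First_Capture REAL)")  # julian day number

for name, d in [("A", "1997-06-17"), ("B", "2000-03-15")]:
    conn.execute("INSERT INTO capture VALUES (?, julianday(?))", (name, d))

# Numeric comparison now works, and date() converts back for display.
early = conn.execute(
    "SELECT name, date(First_Capture) FROM capture"
    " WHERE First_Capture < julianday('2000-01-01')").fetchall()
```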




Re: [sqlite] baffled by dates

2007-06-04 Thread John Stanton
Sqlite does have a date format, it is physically a 64 bit floating point 
number.  There are functions to transform in and out of that format to 
present dates as required by the user.  The Sqlite date format uses a 
magic epoch which matches all of the major international date systems.


P Kishor wrote:

There is no "DATE" format in SQLite. Dates are stored as strings. The
only formats SQLite knows and understands are TEXT, REAL, INTEGER,
BLOB, and NULL (see the link on datatypes). On the other hand, there
are built-in functions that can act on your date strings and convert
them back and forth, manipulate them, etc. Once again, see the link on
working with dates on the SQLite wiki.

On 6/4/07, Chris Fonnesbeck <[EMAIL PROTECTED]> wrote:


I'm at a complete loss about how to work with dates in SQLite. The
documentation doesn't seem to be helping me. I have a table with some date
fields, in the proper yyyy-mm-dd format:

sqlite> select First_Capture from PIT_manatees limit 5;
1997-6-17
1998-5-6
1997-6-24
1998-5-6
2000-3-15

however, it does not respond properly to comparisons to dates:

sqlite> select First_Capture from PIT_manatees where 
First_Capture<2000-1-1

limit 5;
sqlite>

this doesn't work either:

sqlite> select First_Capture from PIT_manatees where
First_Capture<date('2000-1-1') limit 5;
sqlite>

note that these columns are in the 'date' format.

Yet more strangeness:


sqlite> select date(First_Capture) from PIT_manatees limit 10;







1999-10-13


sqlite>

Not sure what that is all about -- all the results are blank except one,
even though every record is in exactly the same format.

I'm totally perplexed by date handling in SQLite, so any help is most
appreciated.

Thanks,
--
Christopher J. Fonnesbeck

Population Ecologist, Marine Mammal Section
Fish & Wildlife Research Institute (FWC)
St. Petersburg, FL

Adjunct Assistant Professor
Warnell School of Forestry and Natural Resources
University of Georgia
Athens, GA

T: 727.235.5570
E: Chris dot Fonnesbeck at MyFWC dot com










Re: [sqlite] extracting and comparing dates

2007-06-04 Thread John Stanton
We use declared types of DATE, TIMESTAMP and DATETIME and store dates as 
floating point using the Sqlite date conversion functions.  The 
applications get dates formatted as ISO8601 or according to the declared 
locale.  Functions do date arithmetic.
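For example, date arithmetic and ISO8601 formatting over julian-day storage can use the built-in conversion functions directly (a sketch; a julian day number is a valid time-value for strftime):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Difference of two julian day numbers is the elapsed days.
days = conn.execute(
    "SELECT julianday('1998-05-06') - julianday('1997-06-17')").fetchone()[0]

# strftime accepts the julian day number and formats it back as ISO8601.
iso = conn.execute(
    "SELECT strftime('%Y-%m-%d', julianday('1998-05-06'))").fetchone()[0]
```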


Samuel R. Neff wrote:

SQLite doesn't have any internal notion of a date, only numbers or strings.
You can choose to store dates as numbers and SQLite has some conversion
functions to help you work with dates.  Personally I store dates as strings.

These are not properly formatted dates:

1997-6-17
1998-5-6
1997-6-24
1998-5-6
2000-3-15

should be

1997-06-17
1998-05-06
1997-06-24
1998-05-06
2000-03-15 


this

sqlite> select First_Capture from PIT_manatees where First_Capture<2000-1-1

should be

sqlite> select First_Capture from PIT_manatees where
First_Capture<'2000-01-01'
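The failure modes can be demonstrated directly (a sketch; the key points are that unquoted 2000-1-1 is integer arithmetic, that in SQLite's comparison rules any TEXT value sorts after any number, and that string comparison of dates is only correct with zero-padded fields):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted 2000-1-1 is just subtraction: 2000 - 1 - 1 = 1998.
expr = conn.execute("SELECT 2000-1-1").fetchone()[0]

# TEXT sorts after all numbers, so the original query matched nothing.
text_vs_num = conn.execute("SELECT '1997-6-17' < 1998").fetchone()[0]

# Unpadded months compare in the wrong order; padded ones compare correctly.
unpadded = conn.execute("SELECT '1997-10-01' < '1997-9-01'").fetchone()[0]
padded   = conn.execute("SELECT '1997-10-01' < '1997-09-01'").fetchone()[0]
```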

and this

sqlite> select First_Capture from PIT_manatees where
First_Capture<date('2000-1-1') limit 5;

sqlite> select First_Capture from PIT_manatees limit 5;
1997-6-17
1998-5-6
1997-6-24
1998-5-6
2000-3-15

however, it does not respond properly to comparisons to dates:

sqlite> select First_Capture from PIT_manatees where First_Capture<2000-1-1
limit 5;
sqlite>

this doesn't work either:

sqlite> select First_Capture from PIT_manatees where
First_Capture<date('2000-1-1') limit 5;
sqlite>

note that these columns are in the 'date' format.

How does one extract years, months, dates from tables in SQLite? The
'extract' function also seems not to work.

Any help most appreciated.









Re: [sqlite] baffled by dates

2007-06-04 Thread John Stanton

Chris Fonnesbeck wrote:

On 6/4/07, P Kishor <[EMAIL PROTECTED]> wrote:



There is no "DATE" format in SQLite. Dates are stored as strings. The
only formats SQLite knows and understands are TEXT, REAL, INTEGER,
BLOB, and NULL (see the link on datatypes). On the other hand, there
are built-in functions that can act on your date strings and convert
them back and forth, manipulate them, etc. Once again, see the link on
working with dates on the SQLite wiki.





Wow, thanks. So, why does this table creation work then, and not give an
error?

CREATE TABLE PIT_manatees(Manatee_Log INTEGER, Name text, First_Capture
date, Recovered date, Coast text)

Because Sqlite stores any type you choose as the declared type.  Read 
more about type affinity and manifest typing.





Re: [sqlite] sqlite function list?

2007-06-04 Thread John Stanton

Scott Baker wrote:

Is there a list somewhere (I can't find it on the wiki) of all the
functions (specifically math) functions that sqlite understands?

I'm thinking things like: int, round, floor, ceil, sqrt etc.

You have the source.  They are all presented there and you can add more 
if you choose.





Re: [sqlite] baffled by dates

2007-06-04 Thread John Stanton
Look at the date functions; the file date.c is self-explanatory and 
lists the reference for the date type.  The underlying type for a date 
is a float, so that may be how you missed the date details.


P Kishor wrote:

On 6/4/07, John Stanton <[EMAIL PROTECTED]> wrote:


Sqlite does have a date format, it is physically a 64 bit floating point
number.



If that is the case, I would love to see the documentation on that. As
far as I can see, there are only TEXT, REAL, INTEGER, BLOB, NULL types
supported.


There are functions to transform in and out of that format to
present dates as required by the user.  The Sqlite date format uses a
magic epoch which matches all of the major international date systems.

P Kishor wrote:
> There is no "DATE" format in SQLite. Dates are stored as strings. The
> only formats SQLite knows and understands are TEXT, REAL, INTEGER,
> BLOB, and NULL (see the link on datatypes). On the other hand, there
> are built-in functions that can act on your date strings and convert
> them back and forth, manipulate them, etc. Once again, see the link on
> working with dates on the SQLite wiki.
>
> On 6/4/07, Chris Fonnesbeck <[EMAIL PROTECTED]> wrote:
>
>> I'm at a complete loss about how to work with dates in SQLite. The
>> documentation doesn't seem to be helping me. I have a table with some date
>> fields, in the proper yyyy-mm-dd format:
>>
>> sqlite> select First_Capture from PIT_manatees limit 5;
>> 1997-6-17
>> 1998-5-6
>> 1997-6-24
>> 1998-5-6
>> 2000-3-15
>>
>> however, it does not respond properly to comparisons to dates:
>>
>> sqlite> select First_Capture from PIT_manatees where
>> First_Capture<2000-1-1
>> limit 5;
>> sqlite>
>>
>> this doesnt work either:
>>
>> sqlite> select First_Capture from PIT_manatees where
>> First_Capture<date('2000-1-1') limit 5;
>> sqlite>
>>
>> note that these columns are in the 'date' format.
>>
>> Yet more strangeness:
>>
>>
>> sqlite> select date(First_Capture) from PIT_manatees limit 10;
>>
>>
>>
>>
>>
>>
>>
>> 1999-10-13
>>
>>
>> sqlite>
>>
>> Not sure what that is all about -- all the results are blank except one,
>> even though every record is in exactly the same format.
>>
>> I'm totally perplexed by date handling in SQLite, so any help is most
>> appreciated.
>>
>> Thanks,
>> --
>> Christopher J. Fonnesbeck
>>
>> Population Ecologist, Marine Mammal Section
>> Fish & Wildlife Research Institute (FWC)
>> St. Petersburg, FL
>>
>> Adjunct Assistant Professor
>> Warnell School of Forestry and Natural Resources
>> University of Georgia
>> Athens, GA
>>
>> T: 727.235.5570
>> E: Chris dot Fonnesbeck at MyFWC dot com
>>
>
>















Re: [sqlite] baffled by dates

2007-06-04 Thread John Stanton
OK, Sqlite doesn't have a date type, period.  Just as it doesn't have 
other types, but don't let that stop you from using dates very 
successfully with Sqlite by defining a date type and using the 
functionality built into the product.


The Sqlite storage model plus its style of declared types is ingenious 
and permits the user to embed it neatly into a broad range of 
environments from untyped to flexibly typed to strongly typed.  For 
example we recently integrated Sqlite with Javascript and were gratified 
at how well these two tools mapped together.  We also use it integrated 
into business applications where it supports a fixed point decimal type 
with defined precision and scale.


Joe Wilson wrote:

--- John Stanton <[EMAIL PROTECTED]> wrote:

Sqlite does have a date format, it is physically a 64 bit floating point 
number.  There are functions to transform in and out of that format to 
present dates as required by the user.  The Sqlite date format uses a 
magic epoch which matches all of the major international date systems.



SQLite does not have a date type. Period.

What you're describing is not a true DATE type, but your own 
programming convention when dealing with dates. Your programming 
convention relies on functions, date strings and epoch-based floating 
point numbers.


If you were correct and SQLite currently supported a DATE type,
then why was this Proposed Incompatible Changes entry added to 
the SQLite wiki by DRH?


 "Support A DATE Type" 
 http://www.sqlite.org/cvstrac/wiki?p=ProposedIncompatibleChanges


Because rehashing the old arguments is pointless, here's the old thread:

 http://www.mail-archive.com/sqlite-users@sqlite.org/msg20589.html



P Kishor wrote:


There is no "DATE" format in SQLite. Dates are stored as strings. The
only formats SQLite knows and understands are TEXT, REAL, INTEGER,
BLOB, and NULL (see the link on datatypes). On the other hand, there
are built-in functions that can act on your date strings and convert
them back and forth, manipulate them, etc. Once again, see the link on
working with dates on the SQLite wiki.





 










Re: [sqlite] Sorted index

2007-06-05 Thread John Stanton

B-Tree indices are in sorted sequence.  Just raise an index on the column.
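A sketch of this: once the index exists, an ORDER BY on the indexed column is satisfied by walking the B-tree in key order rather than sorting, which EXPLAIN QUERY PLAN confirms (index and table names are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries(name TEXT)")
conn.executemany("INSERT INTO entries VALUES (?)",
                 [("n%04d" % i,) for i in (5, 3, 9, 1)])
conn.execute("CREATE INDEX entries_name ON entries(name)")

# The last column of each plan row is the human-readable detail string;
# it should mention the index rather than a temp B-tree for sorting.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM entries ORDER BY name").fetchall()
detail = " ".join(str(row[-1]) for row in plan)

names = [r[0] for r in
         conn.execute("SELECT name FROM entries ORDER BY name")]
```

New rows are inserted into the index at the correct position automatically, so there is no re-sort cost on insert beyond the O(log n) B-tree insertion itself.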

[EMAIL PROTECTED] wrote:



I would like to maintain a sorted list of ~3000 entries.
I know that I can create a table and the SELECT from it with the ORDER BY clause
in order to sort it.
However I do not want the overhead of doing this after adding a new entry.
It would be good if I could create an index that was sorted, and then when I
add a new entry to the table it would automatically be inserted in the index at
the correct position.
Is this possible?
If not, can anyone suggest any other solution, bearing in mind that RAM usage
needs to be kept to a minimum?

Thanks
Clive










Re: [sqlite] disk I/O error

2007-06-06 Thread John Stanton

Arun Bhalla wrote:

Hello,

This message may be off-topic in that I don't think the following issue
 corresponds to a bug with SQLite, but it's something I've discovered
while using SQLite.  Perhaps someone on the list has had a similar
experience or could make a suggestion.

A Linux 2.6/x86_64 system reports a "disk I/O error" (SQLITE_IOERR)
while generating a specific report from a SQLite database (SQLite
3.3.6).  The database and temporary files are accessed through an NFS
mount.  After running the program again with SQLite tracing enabled
(plus a bit more I added), I see that SQLITE_IOERR is returned by
unixRead() because read() unexpectedly returned 0!

Here's some relevant strace output:

open("/nfs/tmp/sqlite_dBjTG5bZdsqFVPb", O_RDWR|O_CREAT|O_EXCL, 0644) = 8

[...]

lseek(8, 193536, SEEK_SET)= 193536
write(8, "\n\0\0\0\30\0\222\0\0\266\0\332\0\376\1\"\1F\1l\1\222\1"...,
1024) = 1024

[...]

lseek(8, 226304, SEEK_SET)= 226304
write(8, "\n\0\0\0\30\0\240\0\0\240\0\304\0\350\1\f\0010\1T\1x\1"...,
1024) = 1024
lseek(8, 193536, SEEK_SET)= 193536
read(8, "", 1024) = 0
fstat(8, {st_mode=S_IFREG|0644, st_size=227328, ...}) = 0

The read() call shouldn't fail -- the same page was written to at the
beginning of the transaction!  At least by the time fstat() is called,
the file is 227328 bytes long, so a read at an offset of 193536 should
not fail.

I'm suspecting that the NFS server in question is buggy or
misconfigured.  Unfortunately I don't have access to either the NFS
server or the host running the program, so mainly all I can access is
some strace and SQLite tracing output.

Thanks for any suggestions,
Arun

Run it without NFS.  If it doesn't fail then go looking for the NFS 
problem.











Re: [sqlite] Re: Re: Re: Does sqlite3_step searches for a row in the table / or some results buffer?

2007-06-07 Thread John Stanton

Igor Tandetnik wrote:

B V, Phanisekhar <[EMAIL PROTECTED]> wrote:


Why it's unpredictable?


Why can't the unpredictable be made predictable?



Please feel free to submit a patch, if you believe it's that easy.


Assume I update the column of a row that meets the criteria of some
select stmt and I am yet to encounter that row in sqlite3_step. So the
sqlite3_step on that row will return the old column value or new
column
value for that row?



It depends on whether the query reads live data from the table or uses a 
temporary resultset.



E.g. a query with ORDER BY on a table that
doesn't have a suitable index has no choice but to retrieve all


records

In case of complex queries, e.g. ORDER BY (as mentioned by you), I
assume I will not be able to see the updated column value. Don't you
think there is an inconsistency?



That's why I said the outcome is unpredictable. Anyway, what are you 
driving at? What exactly is your suggestion?



Also, is there a better function to retrieve all (rows) of the results
in one call? Sqlite3_get_table does that, but it uses internally
sqlite3_step, which takes a lot of time.



I, too, would like a database engine that always retrieves all rows in a 
resultset in the same small amount of time, regardless of how large the 
resultset is or how many rows must be looked at to calculate it. I also 
want a pony. However, I realize that I'm highly unlikely to ever get 
either, and go on with my life.


Which I guess is a long way of saying that no, sadly there is no magical 
way to retrieve all rows in a resultset instantaneously.


Igor Tandetnik

Igor, as soon as you achieve that goal, let us know.  We need someone to 
bring peace to the Middle East and you would be the best qualified.





Re: [sqlite] Re: Re: Does sqlite3_step searches for a row in the table / or some results buffer?

2007-06-07 Thread John Stanton
Predictability is ensured by using transactions.  By using BEGIN and 
COMMIT to make transactions atomic you enforce a predictable state.
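As an illustration of the point (using Python's sqlite3 wrapper rather than the C API; a sketch with invented table names): statements grouped between BEGIN and COMMIT become visible to other connections atomically, never partially.

```python
import sqlite3
import tempfile
import os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # manage transactions by hand
reader = sqlite3.connect(path)
writer.execute("CREATE TABLE t(x)")

writer.execute("BEGIN")
writer.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
# A second connection sees none of the uncommitted rows...
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
writer.execute("COMMIT")
# ...and all of them once the transaction commits.
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

As Igor notes later in the thread, this guarantees nothing about a connection reading data it is itself modifying inside the same transaction.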


B V, Phanisekhar wrote:

Thanks Igor,



Why it's unpredictable? 


Why can't the unpredictable be made predictable?

Assume I update the column of a row that meets the criteria of some
select stmt and I am yet to encounter that row in sqlite3_step. So the
sqlite3_step on that row will return the old column value or new column
value for that row?


E.g. a query with ORDER BY on a table that 
doesn't have a suitable index has no choice but to retrieve all


records

In case of complex queries, e.g. ORDER BY (as mentioned by you), I
assume I will not be able to see the updated column value. Don't you
think there is an inconsistency?


Also, is there a better function to retrieve all (rows) of the results
in one call? Sqlite3_get_table does that, but it uses internally
sqlite3_step, which takes a lot of time. So is there a way that I can get
all the result rows in one step rather than stepping for each row.

Regards,
Phani


 




-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 07, 2007 5:28 PM

To: SQLite
Subject: [sqlite] Re: Re: Does sqlite3_step searches for a row in the
table / or some results buffer?

B V, Phanisekhar <[EMAIL PROTECTED]> wrote:


Why it's unpredictable?



You are enumerating rows in some order. A new row may be inserted before

or after your current position in this order. If it is inserted before, 
you will not see it in this enumeration. If it is inserted after, you 
will eventually reach it.


With simple enough queries, you may guess (or explicitly specify with 
ORDER BY) the order in which rows are enumerated, and can predict 
whether a newly inserted row will be seen. With complex queries, it 
may be difficult to make such a prediction.




In sqlite 3.3.8, since it allows INSERT while SELECT statement is
running, I assume that it will return an 11th row. Can you explain how
the step operation works internally on a table?



It walks a B-Tree, moving from current node to the next.



(Does it gets all the
results at one time



Not most of the time, but some queries require such a temporary 
intermediate resultset. E.g. a query with ORDER BY on a table that 
doesn't have a suitable index has no choice but to retrieve all records,


sort them in a temporary resultset, then enumerate that resultset. In 
this case, by the way, a row inserted into the table in the middle of 
enumeration will not be seen.




or it searches for the rows matching the criteria one after the other



SQLite tries hard to do it this way, but for some queries it is not 
possible.


Igor Tandetnik 
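The materialized case can be seen from Python's sqlite3 wrapper (a sketch; table and column names are invented): with an ORDER BY on an un-indexed column, the sorter consumes the whole table on the first step, so a row inserted mid-enumeration never appears in that resultset.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")            # note: no index on x
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = con.execute("SELECT x FROM t ORDER BY x")   # forces a temporary sort
seen = [cur.fetchone()[0]]                        # start enumerating
con.execute("INSERT INTO t VALUES (99)")          # insert mid-enumeration
seen += [row[0] for row in cur]                   # finish enumerating

# The sorted resultset was fixed before the insert, so 99 is absent.
```

Without the ORDER BY, SQLite walks the B-tree live and visibility of the new row would depend on where it lands relative to the cursor, which is exactly the unpredictability being discussed.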











[sqlite] FTS-2

2007-06-07 Thread John Stanton
I have just started to use FTS2 and it is working well but I would like 
to ask any other users if they have had good or bad experiences and why 
they would use FTS2 rather than FTS1.  The software is new and I have 
not seen any feedback at this stage and we are yet to apply large data 
sets and high volumes of transactions.


We have developed some user functions to generate key strings and these 
might be of interest to other users.  Basically they concatenate the 
columns of interest into a meta tag string then strip out noise words 
such as prepositions and punctuation, upshift the words and then update 
a column used for the FTS2 index.  A complementary function similarly 
processes a string to provide a compatible MATCH string.  Soundex 
versions of the words can be added.
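A sketch of such a key-string function (Python; the noise-word list and the function name are invented for illustration): concatenate the columns of interest, strip noise words and punctuation, and upshift what remains.

```python
import re

NOISE = {"a", "an", "and", "in", "of", "on", "the", "to"}  # assumed noise-word list

def meta_tag(*columns):
    """Concatenate columns, strip noise words and punctuation, upshift."""
    words = re.findall(r"[A-Za-z0-9]+", " ".join(columns))
    return " ".join(w.upper() for w in words if w.lower() not in NOISE)
```

Because the same normalization runs on both the indexed column and the user's query, the MATCH strings stay compatible by construction.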





Re: [sqlite] FTS-2

2007-06-07 Thread John Stanton

Scott Hess wrote:

On 6/7/07, John Stanton <[EMAIL PROTECTED]> wrote:


I have just started to use FTS2 and it is working well but I would like
to ask any other users if they have had good or bad experiences and why
they would use FTS2 rather than FTS1.  The software is new and I have
not seen any feedback at this stage and we are yet to apply large data
sets and high volumes of transactions.



The entire fts subproject was only begun about a year ago, meaning
that fts2 has existed for much longer than fts1 had when it was
released - fts2 has certainly had more thorough testing at this point.
I can't think of any good reason to use fts1 at this point.

Admittedly, I haven't updated the wiki with this info.  I'll put that
on my list to get done.

-scott
Scott, thank you for the prompt response and for some elegant work in 
realizing FTS2.

JS





Re: [sqlite] Re: Re: Re: Does sqlite3_step searches for a row in the table / or some results buffer?

2007-06-07 Thread John Stanton

Igor Tandetnik wrote:

John Stanton <[EMAIL PROTECTED]> wrote:


Predictability is ensured by using transactions.  By using BEGIN and
COMMIT to make transactions atomic you enforce a predictable state.



Not if you modify the same data you are iterating over, on the same DB 
connection and thus within the same transaction.


Igor Tandetnik

A good point to be aware of when designing for transactional integrity. 
Read locks are important.





Re: [sqlite] Syntax help with UPDATE in SQLite Database Browser

2007-06-07 Thread John Stanton
You don't seem to be positioning on a row in the Parameter table with a 
WHERE clause.


Ellis Robin (Bundaberg) wrote:

Could I please get some help on the syntax required to perform my UPDATE
based on a selection from multiple tables? I've been through the
archives but can't seem to make much sense of the examples given.
 
I have a table containing 'new' paramater values, I need to update the

relevant records in the 'existing' parameter table, however my selection
is based on a handful of tables. Here's the selection I run to view my
old and new parameters side by side, this works fine:
 
select Scenario.ScenarioID, Scenario.NetworkID, Link.LinkID,

Catchment.CatchmentID,Catchment.Name, FunctionalUnit.FunctionalUnitID,
FunctionalUnitDefinition.Name, StandardFU.RainfallRunoffModelID,
Parameter.Parameter, Parameter.ParameterValue, NewParams.parametervalue
from Scenario, Link, Catchment, FunctionalUnit,
FunctionalUnitDefinition, StandardFU, Parameter, NewParams where
Scenario.ScenarioID = 1004 and Scenario.NetworkID = Link.NetworkID and
Link.LinkID = Catchment.LinkID and FunctionalUnit.CatchmentID =
Catchment.CatchmentID and FunctionalUnit.FunctionalUnitDefinitionID =
FunctionalUnitDefinition.FunctionalUnitDefinitionID and
FunctionalUnit.FunctionalUnitID = StandardFU.StandardFUID and
StandardFU.RainfallRunoffModelID = Parameter.ModelID and Catchment.Name
= NewParams.Subcatchments and FunctionalUnitDefinition.Name =
NewParams.FU_name and Parameter.Parameter = NewParams.parameter
 
However when I use this selection to update the relevent field in the

parameters table I don't get a 'correct' update, instead I get the first
value encountered in the NewParams table over writing ALL of the old
parameter values. Here's my syntax:
 
Update Parameter set ParameterValue = (select NewParams.parametervalue

from Scenario, Link, Catchment, FunctionalUnit,
FunctionalUnitDefinition, StandardFU, Parameter, NewParams where
Scenario.ScenarioID = 1004 and Scenario.NetworkID = Link.NetworkID and
Link.LinkID = Catchment.LinkID and FunctionalUnit.CatchmentID =
Catchment.CatchmentID and FunctionalUnit.FunctionalUnitDefinitionID =
FunctionalUnitDefinition.FunctionalUnitDefinitionID and
FunctionalUnit.FunctionalUnitID = StandardFU.StandardFUID and
StandardFU.RainfallRunoffModelID = Parameter.ModelID and Catchment.Name
= NewParams.Subcatchments and FunctionalUnitDefinition.Name =
NewParams.FU_name and Parameter.Parameter = NewParams.parameter)
 
Can anyone shed some light on this for me? I've tried adding more WHERE

statements after the nested selection, but with no better results.
 
Thanks
 
Rob
 
Robin Ellis

Natural Resources & Water
PO Box 1167
Bundaberg QLD 4670
Ph: +617 4131 5771
Fax: +617 4131 5823
 



The information in this email together with any attachments is
intended only for the person or entity to which it is addressed
and may contain confidential and/or privileged material.
Any form of review, disclosure, modification, distribution
and/or publication of this email message is prohibited, unless
as a necessary part of Departmental business.
If you have received this message in error, you are asked to
inform the sender as quickly as possible and delete this message
and any copies of this message from your computer and/or your
computer system network.
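The quoted UPDATE computes a single subquery value with no reference to the row being updated, so every Parameter row receives the first match. The fix John points at is to correlate the subquery with the outer row and restrict the outer UPDATE with WHERE EXISTS. A reduced sketch (a toy two-table schema, not Rob's full join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Parameter(Parameter TEXT, ParameterValue REAL);
    CREATE TABLE NewParams(parameter TEXT, parametervalue REAL);
    INSERT INTO Parameter VALUES ('k1', 1.0), ('k2', 2.0), ('k3', 3.0);
    INSERT INTO NewParams VALUES ('k1', 10.0), ('k2', 20.0);
""")

con.execute("""
    UPDATE Parameter
    SET ParameterValue = (SELECT n.parametervalue
                          FROM NewParams n
                          WHERE n.parameter = Parameter.Parameter)
    WHERE EXISTS (SELECT 1 FROM NewParams n
                  WHERE n.parameter = Parameter.Parameter)
""")

rows = dict(con.execute("SELECT Parameter, ParameterValue FROM Parameter"))
```

The correlated subquery gives each row its own value, and the WHERE EXISTS keeps rows with no match (k3 here) from being set to NULL. In the real schema the correlation would carry the full join conditions.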









Re: [sqlite] Truncate Issue

2007-06-07 Thread John Stanton
If you don't truncate a file then you may have untruncated files.  Why 
can't you truncate a file?  It is implemented one way or another on 
pretty much every OS.


Sqlite uses truncate in its b-tree logic and probably elsewhere, so you 
would very likely encounter problems with no truncate.


Jimmy Mathew Ambalathuruthel wrote:

Hi all,

  I am working in a porting project of SQLite from windows.

  I Could not port the Truncate () function (For Eg: WinTruncate () in
windows code in the file os_win.c ) as the same logic cannot be applied
to my platform.

  Can I make it a dummy function by just returning SQLITE_OK?

  Please help.

 


Regards

Jimmy Mathew

 









Re: [sqlite] Re: Why is there no sqlite3_exec16() method?

2007-06-11 Thread John Stanton

You can keep the prepared SQL and re-use it by calling sqlite3_reset().
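In C that loop is prepare once, then bind/step/reset per row. Python's sqlite3 wrapper hides the reset, but executemany shows the same shape: one prepared statement reused for many parameter sets (a sketch with invented names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table(my_key INTEGER)")

# Each executemany prepares its statement once and re-runs it per tuple:
# the moral equivalent of sqlite3_bind + sqlite3_step + sqlite3_reset in a loop.
con.executemany("INSERT INTO my_table VALUES (?)", [(1,), (2,), (3,)])
con.executemany("DELETE FROM my_table WHERE my_key = ?", [(1,), (3,)])

remaining = [r[0] for r in con.execute("SELECT my_key FROM my_table")]
```

Reusing the statement avoids re-parsing the SQL for every row, which is where sqlite3_exec in a loop loses time.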

Rob Richardson wrote:

Igor,

Thank you very much for your reply.  My naïve impression was that sqlite3_prepare/step/finalize are used for SELECT statements, where there would be a result set one would want to step through, and that one would use sqlite3_exec() for statements where no result set is expected, such as UPDATE, DELETE or INSERT.  


So, let's say we want to delete a record using "DELETE FROM my_table WHERE my_key = 
my_unwanted_value".  I would just pass that string into sqlite3_prepare16(), then 
call sqlite3_step() to actually do the deletion (and return SQLITE_DONE), and then clean 
up by calling sqlite3_finalize()?

Rob Richardson








Re: [sqlite] Database replication question

2007-06-12 Thread John Stanton

[EMAIL PROTECTED] wrote:

- Original Message 
From: Joe Wilson <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Monday, June 11, 2007 8:36:32 PM
Subject: Re: [sqlite] Database replication question


Large bulk inserts with more than one index (implicit or explicit) 
is not SQLite's strong suit.


If you search the mailing list archives you'll find a few suggestions:

- "BEGIN EXCLUSIVE" (or is it "BEGIN IMMEDIATE"?) on the 
 database file and then copy the file over - fastest way



What do you mean by "copy the file over"? A straight copy of the binary content 
of the file? If so, I can't really do that because the version of sqlite are potentially 
different on the two machines.



or

- increasing cache sizes 
- pre-sorting the data in index order prior to bulk insert

- creating the other indexes after all the data is inserted

If you do not require a live backup you could use the copy trick
and augment that with a daily archive via 


sqlite3 file.db .dump | gzip etc...

in case the database file becomes corrupted.



If the performance problem is with the secondary index, is there a way to "pause" indexing before a 
large bulk insert and then "resume" it later without rebuilding the entire index (to avoid doing: 
drop index + inserts + create index)? Maybe it's a stupid question, but I am guessing that there is some sort 
of version number for the rows in the db, so playing "catchup" on an index could work?

Nicolas
If you have incompatible Sqlite versions you can still perform a 
snapshot replication by doing a file copy then running a background job 
to dump the old version database and rebuild it to the latest version.
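The dump/rebuild path is version-independent because it replays SQL text instead of copying the binary file. On the shell that is `sqlite3 old.db .dump | sqlite3 new.db`; in Python the equivalent is iterdump (a sketch, with in-memory databases standing in for the two files):

```python
import sqlite3

src = sqlite3.connect(":memory:")       # stands in for the old-version database
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
src.commit()

dst = sqlite3.connect(":memory:")       # stands in for the rebuilt database
dst.executescript("\n".join(src.iterdump()))   # replay the SQL dump

copied = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Because only SQL text crosses the boundary, the source and destination libraries can use different file-format versions.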





Re: [sqlite] MMap On Solaris

2007-06-12 Thread John Stanton

Mitchell Vincent wrote:

Working with some data conversion here (that will eventually go into
an SQLite database). I'm hoping you IO wizards can offer some help on
a question that I've been trying to get answered.

I'm using Solaris 10 for this.

If I mmap a large file and use madvise with MADV_SEQUENTIAL and
MADV_WILLNEED, then start processing the file, when will the system
discard pages that have been referenced? I guess what I'm wondering is
if there is any retention of "back" pages?

Say for example I start reading the file, and after consuming 24,576
bytes, will the first or second pages still be in memory (assuming
8192 byte pages)?

Thanks!

In general it means that the file is mapped into virtual memory.  How 
much of it remains in actual memory depends upon the memory demands on 
the OS at the time.  If the sequential and random advice is used by the 
OS it is most likely to implement a look ahead for sequential access. 
Not all OS's pay attention to those advisory settings.


What you are doing is to access the file as if it were an executing 
program image.  Similar rules apply.


The answer is that you cannot assume that pages you have read are in 
actual memory and you cannot assume that they are not.  When you access 
a page not currently in memory the OS will read it in and find space for 
it somehow, maybe by discarding some other page.


This is an excellent way to read files because you avoid one level of 
buffer shadowing and get caching adjusted to currently available memory.





Re: [sqlite] MMap On Solaris

2007-06-13 Thread John Stanton
MMAP just lets you avoid one or two layers of buffering and APIs.  If 
you were to use fopen/fread you go to function calls then open/read plus 
buffering and function calls, then to the VM to actually access the 
data.  Going direct to the VM and getting a pointer to the VM pages is 
more efficient.  I got about 30% better speed out of one of my compilers 
just by removing the reads and local buffering and using mmap.  A b-tree 
index almost doubled in speed by removing local reads and buffering and 
using mmap.
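A sketch of the pattern in Python's mmap module, which wraps the same POSIX calls (madvise support varies by platform and Python version, hence the guard; file name is illustrative):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"abcdefgh" * 4096)           # 32 KiB sample file

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    if hasattr(mm, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
        mm.madvise(mmap.MADV_SEQUENTIAL)  # advisory only; the OS may ignore it
    head = bytes(mm[:8])                  # page cache access, no read()/local buffer copy
    mm.close()
```

Slicing the map touches the page cache directly, which is the "one less level of buffer shadowing" being described.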


Mitchell Vincent wrote:

Hi John! Thanks for the reply!

I think that makes a good point that the vm page fault is probably
faster than the overhead of copying the data to a local buffer.  So, page
fault or not, I think that's the way I'm going to do it.

Again, thanks very much for your input!

On 6/12/07, John Stanton <[EMAIL PROTECTED]> wrote:


Mitchell Vincent wrote:
> Working with some data conversion here (that will eventually go into
> an SQLite database). I'm hoping you IO wizards can offer some help on
> a question that I've been trying to get answered.
>
> I'm using Solaris 10 for this.
>
> If I mmap a large file and use madvise with MADV_SEQUENTIAL and
> MADV_WILLNEED, then start processing the file, when will the system
> discard pages that have been referenced? I guess what I'm wondering is
> if there is any retention of "back" pages?
>
> Say for example I start reading the file, and after consuming 24,576
> bytes, will the first or second pages still be in memory (assuming
> 8192 byte pages)?
>
> Thanks!
>
In general it means that the file is mapped into virtual memory.  How
much of it remains in actual memory depends upon the memory demands on
the OS at the time.  If the sequential and random advice is used by the
OS it is most likely to implement a look ahead for sequential access.
Not all OS's pay attention to those advisory settings.

What you are doing is to access the file as if it were an executing
program image.  Similar rules apply.

The answer is that you cannot assume that pages you have read are in
actual memory and you cannot assume that they are not.  When you access
a page not currently in memory the OS will read it in and find space for
it somehow, maybe by discarding some other page.

This is an excellent way to read files because you avoid one level of
buffer shadowing and get caching adjusted to currently available memory.














Re: [sqlite] How can I convert from Julian time to a tm structure?

2007-06-14 Thread John Stanton
We just lifted the routines out of Sqlite to do that.  They are in 
date.c.  By making an Sqlite-style date type and a small library of date 
manipulation routines we move date conversion to the application.  It is 
handy when handling ISO8601 and HTTP date formats plus integrating with 
file ages.
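For reference, the arithmetic itself is small: SQLite's Julian day numbers count days (with fraction) from noon UTC on the Julian-calendar epoch, and the Unix epoch falls at Julian day 2440587.5 (check with `SELECT julianday('1970-01-01')`). A second-accurate sketch in Python, as an alternative to lifting date.c:

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH_JD = 2440587.5   # julianday('1970-01-01') in SQLite

def julian_to_datetime(jd):
    """Convert an SQLite Julian day number to an aware UTC datetime."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(days=jd - UNIX_EPOCH_JD)

def datetime_to_julian(dt):
    """Inverse conversion, for building query parameters."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return UNIX_EPOCH_JD + (dt - epoch) / timedelta(days=1)
```

Anchoring on the Unix epoch avoids the faulty 365.25-days-per-year approximations Rob mentions, since the calendar logic is delegated to the datetime library.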


Rob Richardson wrote:

Greetings!

I am trying to speed up an application that queries a table with three
columns and around a million records, and one of the fields is a
timestamp.  One thing I want to do is to move the conversion of the
timestamp from a Julian time to a human-readable time from the query to
my application.  (By the way, this is a C++ app written in MS Visual
Studio 6.)  I could build a query and have SQLite execute it, something
like "SELECT datetime(123456.789) AS timestring", but that has the
overhead of preparing the query, executing it and finalizing it, plus
the overhead of converting from a string representation into the tm
structure once I get the result of the query.

I didn't see any little utility function in the SQLite library that just
exposes whatever routine SQLite uses to do the conversion.  Does one
exist?

There must be plenty of algorithms out there to do this conversion.  A
quick search revealed a few, but they were obviously faulty (assuming
every year has 365.25 days, for instance) or not precise enough
(returning only the day).  I need an algorithm that is accurate to the
nearest second.  What should I use?  Or is the SQLite query the best I'm
going to do?

Thank you very much.

Rob Richardson








Re: [sqlite] MMap On Solaris

2007-06-14 Thread John Stanton
The behaviour depends on whether you map shared or not.  If you map 
shared, multiple users can read and write to the file simultaneously.  If 
you have a situation where you access the same bytes you need to use some 
form of synchronization, just as you do with read and write.


You can map for exclusive access and also for private.  In the private 
case other users do not see your changes to the file.


If the file has been extended by another user past the area you have 
mapped you will not access it unless you mmap to the new length.  If the 
file is growing fast that could make using read and write more appropriate.


Ken wrote:
John, 
 
 You seem pretty knowledgeable regarding MMAP. I was wondering if you could help me with this MMAP scenario:
 
  I'm curious as to how the OS and multiple processes interact regarding file i/o and mmap.
 
Process A --- Writes to a file sequentially using either pwrite or kaio.
 
 I would like to write a process B. That  performs a read against what was written by A.

 I'm able to coordinate where to stop the read in other words I don't want to read more 
than what has been written by A.  Currently I'm just using os calls to "read" 
but I thought that maybe MMAP might give better performance especially if the OS would 
just provide the written buffers performed by Process A to Process  B's address space 
that is MMAPed.
 
 Thanks for any guidance.

 Ken
 

John Stanton <[EMAIL PROTECTED]> wrote: MMAP just lets you avoid one or two layers of buffering and APIs.  If 
you were to use fopen/fread you go to function calls then open/read plus 
buffering and function calls, then to the VM to actually access the 
data.  Going direct to the VM and getting a pointer to the VM pages is 
more efficient.  I got about 30% better speed out of one of my compilers 
just by removing the reads and local buffering and using mmap.  A b-tree 
index almost doubled in speed by removing local reads and buffering and 
using mmap.


Mitchell Vincent wrote:


Hi John! Thanks for the reply!

I think that makes a good point that the vm page fault is probably
faster than the overhead of copying the data to a local buffer.  So, page
fault or not, I think that's the way I'm going to do it.

Again, thanks very much for your input!

On 6/12/07, John Stanton  wrote:



Mitchell Vincent wrote:


Working with some data conversion here (that will eventually go into
an SQLite database). I'm hoping you IO wizards can offer some help on
a question that I've been trying to get answered.

I'm using Solaris 10 for this.

If I mmap a large file and use madvise with MADV_SEQUENTIAL and
MADV_WILLNEED, then start processing the file, when will the system
discard pages that have been referenced? I guess what I'm wondering is
if there is any retention of "back" pages?

Say for example I start reading the file, and after consuming 24,576
bytes, will the first or second pages still be in memory (assuming
8192 byte pages)?

Thanks!



In general it means that the file is mapped into virtual memory.  How
much of it remains in actual memory depends upon the memory demands on
the OS at the time.  If the sequential and random advice is used by the
OS it is most likely to implement a look ahead for sequential access.
Not all OS's pay attention to those advisory settings.

What you are doing is to access the file as if it were an executing
program image.  Similar rules apply.

The answer is that you cannot assume that pages you have read are in
actual memory and you cannot assume that they are not.  When you access
a page not currently in memory the OS will read it in and find space for
it somehow, maybe by discarding some other page.

This is an excellent way to read files because you avoid one level of
buffer shadowing and get caching adjusted to currently available memory.




















Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
A traditional goal in database design is to place data in "Third Normal 
Form", which means in essence that each data element is only stored in 
one place.  Accesses join the rows to deliver data.


A normalized database does not hold redundant data and changing the 
value of one element changes its value everywhere.


You are proposing a database which is more like "First Normal Form" and 
is deprecated.


T wrote:

Hi John,

Thanks for your reply.

You would have a better database if you normalize it and not do  what 
you propose.



Meaning what, exactly?

Obviously, I've only given rough indications as to my data source,  such 
as:


the huge Products table (which is actually a UNION ALL of various  
supplier catalogs)



without detailing where all the data in there comes from, since that  
would distract too much from my question. But suffice to say that  
"normalizing" beyond the current structure is not straight forward.


In any case, at least in my ignorance of what you propose, it's  beside 
the point.


In essence, I'm asking: is it possible to update multiple columns in  a 
row, where all those values come from a single related row, without  
SQLite having to find (eg via SELECT) that related row multiple times?


Or, put another way, I want to get SQLite to:

1. Locate the related row.

2. Grab the desired columns from that row, putting each in the  related 
row.


I hope that clarifies.

Thanks,
Tom





Re: [sqlite] Journal File Optimization

2007-06-18 Thread John Stanton

Andre du Plessis wrote:

How can one optimize the creation of the journal file. The problem is
this, for our system which is an event based one each message needs to
be inserted and committed to the database (guaranteed); this results in
a commit per insert, this was obviously unacceptably slow and according
to the lists by design you need to do more bulk transactions, and with
some effort I modified it with a combination of bulk and temporary
tables to do quite a few at a time, but the problem remains that
committing is terribly slow. 

 


I'm thinking the journal file, the fact that it is created written and
deleted each time which is slowing it down, Is there not a way to create
and set up a permanent journal file pre-allocated to a certain size, so
sqlite does not have to go through the OS each time to create and delete
this file?

 


Along the same lines of this question, is there a way to fix the initial
size of the DB, ie set it to pre-allocate 700mb for instance so that no
growing of the db file is needed until the space is exceeded, may also
speed things up.

 

 


Thank you very much in advance.


I suspect that you will find that the sync process is the logjam.
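On the specific question of the journal file being created and deleted per transaction: SQLite releases after this thread (3.6.x onward) added PRAGMA journal_mode, whose PERSIST mode keeps the journal file on disk between transactions, zeroing its header at commit instead of unlinking it. A sketch via Python's sqlite3 wrapper (file name is illustrative; synchronous is shown because the fsync cost is usually the larger lever):

```python
import sqlite3
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "events.db")
con = sqlite3.connect(path, isolation_level=None)

# PERSIST keeps the rollback journal between transactions instead of
# recreating and deleting it around every commit.
mode = con.execute("PRAGMA journal_mode=PERSIST").fetchone()[0]
con.execute("PRAGMA synchronous=NORMAL")   # trades some durability for fewer fsyncs
```

Whether these pragmas are acceptable depends on the durability guarantee the event system needs.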





Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
A general rule of database design is to separate reference and 
transactional data.  Then you can have a normalized database in a 
dynamic environment.


T wrote:

Hi Puneet and John,

You each respectively said:

Why are you repeating the Code, Buy, Sell, and Description columns  in 
the Sale_Products table when they already exists in the Products  table?



A traditional goal in database design is to place data in "Third  
Normal Form" which means in essence that each data element is only  
stored in one place



Good question/point. Sorry I thought from my introverted world that  
this would be obvious, so didn't elaborate. So let me clarify:


The Products table is changing all the time. What might be listed  today 
when a Sale is made, might no longer be listed in Products in a  couple 
of weeks (but needs to retain the details in the Sale). And  even if it 
is still listed in Products, it's very likely that the  price and 
probably description will have changed.


So the Sale_Products table needs to capture the values of the Buy  
Price, Sell Price and Description (and others) when the sale is made,  
and cannot simply be dynamically related to their occurrence in the  
Products table (since it will change, but the Sale_Products extract  for 
the current sale must remain unchanged).


assume that the "buy" column is the price I paid for the item, and  
"sell" column is the price I get for it) --



Yes, that's right.


CREATE TABLE products (
 product_id INTEGER PRIMARY KEY,
 buy REAL,
 sell REAL,
 desc TEXT
);

CREATE TABLE customers (
 customer_id INTEGER PRIMARY KEY,
 .. other customer info columns ..
);

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customes table
);



One sale may involve several products, so it's more like this:

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customes table
);

CREATE TABLE sale_products (
 sale_id INTEGER,  -- FK to sales table
 product_id INTEGER,   -- FK to products table
 buy REAL,
 sell REAL,
 desc TEXT
);

Now, when a new product is added to a Sale, I do this:

INSERT INTO sale_products(sale_id, product_id) VALUES( ? , ? )

And what I need is some kind of trigger that will automatically fill  in 
the buy, sell and desc columns for me.


Something like:

CREATE TRIGGER update_sale_products
AFTER UPDATE OF product_id
on sale_products
BEGIN
  UPDATE sale_products
SET
  buy  = (SELECT buy  FROM products WHERE products.product_id =  
NEW.product_id)
, sell = (SELECT sell FROM products WHERE products.product_id =  
NEW.product_id)
, desc = (SELECT desc FROM products WHERE products.product_id =  
NEW.product_id)

  WHERE
ROWID=NEW.ROWID
  ;
END

which is basically just a rewrite of my original post, but using your  
capitalization and entity names.


But I want something without the multiple lookups on the products  table 
of the same thing, ie the:

WHERE products.product_id = NEW.product_id

Any ideas?

Thanks,
Tom
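One way to get the single lookup Tom asks for, on SQLite 3.15 or later, is row-value assignment in the trigger's UPDATE; a sketch against the quoted schema (using AFTER INSERT rather than AFTER UPDATE, since the row arrives via INSERT; `desc` is quoted because it is a keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products(product_id INTEGER PRIMARY KEY,
                          buy REAL, sell REAL, "desc" TEXT);
    CREATE TABLE sale_products(sale_id INTEGER, product_id INTEGER,
                               buy REAL, sell REAL, "desc" TEXT);

    CREATE TRIGGER fill_sale_products AFTER INSERT ON sale_products
    BEGIN
      UPDATE sale_products
      -- one correlated subquery fills all three columns at once
      SET (buy, sell, "desc") = (SELECT buy, sell, "desc" FROM products
                                 WHERE products.product_id = NEW.product_id)
      WHERE rowid = NEW.rowid;
    END;

    INSERT INTO products VALUES (7, 1.50, 2.75, 'widget');
    INSERT INTO sale_products(sale_id, product_id) VALUES (1, 7);
""")

row = con.execute('SELECT buy, sell, "desc" FROM sale_products').fetchone()
```

The copied columns then stay frozen at their sale-time values even as the products table changes, which is the denormalization Tom is defending.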





Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
The problem is fairly straightforward.  Sqlite is a single resource 
being shared by multiple threads, so you just use fundamental 
synchronization logic as you would when sharing any resource between 
competing threads.


Sabyasachi Ruj wrote:

Hi,

I am using sqlite in a multithreaded environment. I have taken
reasonable care, like not sharing sqlite* handles. I am creating a new sqlite* 
for

every thread.

Where can we get more info on working with SQLite in a multithreaded
environment?

The application is working as a service in windows.

sqlite3_step() is failing with the following error message:

* SQL error or missing database
SQL logic error or missing database*

I am getting this message after running the application for quite a long
time (few days).
And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the 
same

error message.

My application updates the database every 2 mins and the corruption happened
randomly

I don't have any clue how to debug this!

Thanks.

On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


"Sabyasachi Ruj" <[EMAIL PROTECTED]> wrote:
> Hi,
> Is there any way to programmatically fix a corrupted sqlite database?
> I am using sqlite version 3.3.8 with C APIs
>

Sometimes VACUUM or REINDEX will help, but usually not.
You can also try to recover using:

   sqlite3 OLD.DB .dump | sqlite3 NEW.DB

But that doesn't always work either.  The best approach
is to avoid corruption in the first place.
--
D. Richard Hipp <[EMAIL PROTECTED]>
















Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
Threadsafe only means that threads do not access global data elements or 
that they synchronize (serialize) access to global data.  It does 
nothing to synchronize threads.  That is up to the application 
programmer.  Sqlite uses POSIX file locks for synchronization but if you 
are in a totally threaded environment you can use thread sync functions 
like mutexes or the finer grained read and write lock thread primitives.


If you are accessing Sqlite across a network file locks are the way to 
go, but do depend upon network implementations and settings.  If you 
have multiple processes on one OS you can sync using semaphores.


Using textbook style synchronization ensures that you have minimal 
problems and optimal performance.


Sabyasachi Ruj wrote:

But I think we do not have to take care of synchronizing sqlite access.
sqlite internally does if it is compiled with THREADSAFE=1.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



The problem is fairly straight forward.  Sqlite is a single resource
being shared by multiple thyreads so you just use fundamental
synchronization logic as you would when sharing any resource between
competing threads.

Sabyasachi Ruj wrote:
> Hi,
>
> I am using sqlite in a multithreaded environment. I have take
> reasonable like not sharing sqlite* handles. I am creating a new 
sqlite*


> for
> every thread.
>
> Where can we get more info on working with SQLite in a multithreaded
> environment?
>
> The application is working as a service in windows.
>
> sqlite3_step() is failing with the following error message:
>
> * SQL error or missing database
> SQL logic error or missing database*
>
> I am getting this message after running the application for quite a 
long

> time (few days).
> And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the
> same
> error message.
>
> My application updates the database very 2 mins and the corruption
happend
> randomly
>
> I dont have any clue how to debug this!
>
> Thanks.
>
> On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> > Hi,
>> > Is there any way to programmatically fix a corrupted sqlite 
database?

>> > I am using sqlite version 3.3.8 with C APIs
>> >
>>
>> Sometimes VACUUM or REINDEX will help, but usually not.
>> You can also try to recover using:
>>
>>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>>
>> But that doesn't always work either.  The best approach
>> is to avoid corruption in the first place.
>> --
>> D. Richard Hipp <[EMAIL PROTECTED]>
>>
>>
>>
>>
















Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
I mean something else.  You have a reference data set which is accessed 
to get the current value of reference elements and store transactions to 
record events. The transaction trails provide event history.


A price is in the reference data; its value transferred to a transaction 
is no longer a price, it is a sale which represents the value of the 
price at the time the event occurred.

How about reading your price data etc and just inserting a transaction 
into your sales table?  You do not need to update anything.  You also 
have integrity for multiple users.


T wrote:

Hi John,

A general rule of database design is to seperate reference and  
transactional data.  Then you can have a normalized database in a  
dynamic environment.



Yes, I think that's what I am designing.

The reference data is the products table (and potentially customer  
table etc)


The transactional data is the sale_products table which lists what  
products went with each sale.


Or do you mean something else?

Thanks,
Tom











Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
If you knew the answer then why did you ask the question?  You can apply 
regular CS or look for miracles.


You might be better off using a server based RDBMS like Oracle or 
PostgreSQL where the application programmer is insulated from 
synchronization issues.  Sqlite has the "lite" in its name for a very 
good reason.  It is designed for embedded applications, not as an 
enterprise DBMS.


Sabyasachi Ruj wrote:

But the following link
http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
says nothing about having to synchronize at the application level when creating
multiple connections, unless the same database connection is being shared!
Ref: The four points in 'Short Answer' section.

BTW: for a DBMS it does not make sense if the application programmer has to
synchronize to create multiple connections.
And syncing will also cause a considerable performance drop.


On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



Threadsafe only means that threads do not access global data elements or
that they synchronize (serialize) access to global data.  It does
nothing to synchronize threads.  That is up to the application
programmer.  Sqlite uses POSIX file locks for synchronixation but if you
are in a totally threaded environment you can use thread sync functions
like mutexes or the finer grained read and write lock thread primitives.

If you are accessing Sqlite across a network file locks are the way to
go, but do depend upon network implementations and settings.  If you
have multiple processes on one OS you can sync using semaphores.

Using textbook style synchronization ensures that you have minimal
problems and optimal performance.

Sabyasachi Ruj wrote:
> But I think we do not have to take care of synchronizing sqlite access.
> sqlite internally does if it is compiled with THREADSAFE=1.
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> The problem is fairly straight forward.  Sqlite is a single resource
>> being shared by multiple thyreads so you just use fundamental
>> synchronization logic as you would when sharing any resource between
>> competing threads.
>>
>> Sabyasachi Ruj wrote:
>> > Hi,
>> >
>> > I am using sqlite in a multithreaded environment. I have take
>> > reasonable like not sharing sqlite* handles. I am creating a new
>> sqlite*
>>
>> > for
>> > every thread.
>> >
>> > Where can we get more info on working with SQLite in a multithreaded
>> > environment?
>> >
>> > The application is working as a service in windows.
>> >
>> > sqlite3_step() is failing with the following error message:
>> >
>> > * SQL error or missing database
>> > SQL logic error or missing database*
>> >
>> > I am getting this message after running the application for quite a
>> long
>> > time (few days).
>> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get
the
>> > same
>> > error message.
>> >
>> > My application updates the database very 2 mins and the corruption
>> happend
>> > randomly
>> >
>> > I dont have any clue how to debug this!
>> >
>> > Thanks.
>> >
>> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >
>> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> > Hi,
>> >> > Is there any way to programmatically fix a corrupted sqlite
>> database?
>> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >
>> >>
>> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> You can also try to recover using:
>> >>
>> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >>
>> >> But that doesn't always work either.  The best approach
>> >> is to avoid corruption in the first place.
>> >> --
>> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >>
>> >>
>> >>
>> >>
>>
















Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
If you can automatically enter data then you are violating the 
normalization rules.  Maybe you should get a book on database design and 
become familiar with some of the fundamentals.


T wrote:

Hi John,

You have a reference data set which is accessed to get the current  
value of reference elements and store transactions to record  events. 
The transaction trails provide event history.



Yes, agreed.

A price is in the reference data, its value transferred to a  
transaction is no longer a price, it is a sale which represents the  
value of the price at the time the event occurred.



Yes.

How about reading your price data etc and just inserting a  
transaction into your sales table?



Yes, that's what I'm doing. I just want to make it more efficient.

Technically it's the sale_products table (since each sale has many  
products etc), but yes, I want to insert the transactions data  
(product_id, buy, sell, desc) into the sale_products table. But I  want 
a mechanism whereby if I enter the product_id, then the buy,  sell, desc 
columns are auto entered (copied) from their corresponding  row in the 
products table.


Tom












Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
It is fundamental computer science, CS101 you might say.  Pick up a 
textbook on basic computing.


Sabyasachi Ruj wrote:

But can you tell me where is this documented please?

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



If you knew the answer then why did you ask the question?  You can apply
regular CS or look for miracles.

You might be better off using a server based RDBMS like Oracle or
PostgreSQL where the application programmer is insulated from
synchronization issues.  Sqlite has the "lite" in its name for a very
good reason.  It is designed for embedded applications, not as an
enterprise DBMS.

Sabyasachi Ruj wrote:
> But the following link
> http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
> says nothing that I have to synchronize at the application level to
create
> multiple connections, until the same database connection is being
shared!
> Ref: The four points in 'Short Answer' section.
>
> BTW: for a DBMS it does not make sense if the application programmer 
has

to
> synchronize to create multiple connection.
> And synhing will have considerable performance drop also.
>
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> Threadsafe only means that threads do not access global data elements
or
>> that they synchronize (serialize) access to global data.  It does
>> nothing to synchronize threads.  That is up to the application
>> programmer.  Sqlite uses POSIX file locks for synchronixation but if
you
>> are in a totally threaded environment you can use thread sync 
functions

>> like mutexes or the finer grained read and write lock thread
primitives.
>>
>> If you are accessing Sqlite across a network file locks are the way to
>> go, but do depend upon network implementations and settings.  If you
>> have multiple processes on one OS you can sync using semaphores.
>>
>> Using textbook style synchronization ensures that you have minimal
>> problems and optimal performance.
>>
>> Sabyasachi Ruj wrote:
>> > But I think we do not have to take care of synchronizing sqlite
access.
>> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> The problem is fairly straight forward.  Sqlite is a single 
resource

>> >> being shared by multiple thyreads so you just use fundamental
>> >> synchronization logic as you would when sharing any resource 
between

>> >> competing threads.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > Hi,
>> >> >
>> >> > I am using sqlite in a multithreaded environment. I have take
>> >> > reasonable like not sharing sqlite* handles. I am creating a new
>> >> sqlite*
>> >>
>> >> > for
>> >> > every thread.
>> >> >
>> >> > Where can we get more info on working with SQLite in a
multithreaded
>> >> > environment?
>> >> >
>> >> > The application is working as a service in windows.
>> >> >
>> >> > sqlite3_step() is failing with the following error message:
>> >> >
>> >> > * SQL error or missing database
>> >> > SQL logic error or missing database*
>> >> >
>> >> > I am getting this message after running the application for quite
a
>> >> long
>> >> > time (few days).
>> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I
get
>> the
>> >> > same
>> >> > error message.
>> >> >
>> >> > My application updates the database very 2 mins and the 
corruption

>> >> happend
>> >> > randomly
>> >> >
>> >> > I dont have any clue how to debug this!
>> >> >
>> >> > Thanks.
>> >> >
>> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> >> > Hi,
>> >> >> > Is there any way to programmatically fix a corrupted sqlite
>> >> database?
>> >> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >> >
>> >> >>
>> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> >> You can also try to recover using:
>> >> >>
>> >>

Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
You said that you were sharing Sqlite between threads by opening a 
connection in each thread.  Sqlite is a single resource and must be 
sync'd somehow if it has multiple users.


Complaining about it is like complaining that the sky is blue.  It is 
blue because of Rayleigh scattering, a fundamental phenomenon which 
cannot be changed by wishes.  You just have to get used to it.


We successfully share Sqlite between threads.  Sometimes we use a 
connection per thread and sometimes a single connection.  The single 
connection is optimal because it enhances the cache usage.  We sync with 
mutexes.  The logic is very simple.
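[The single-connection-plus-mutex pattern described here can be sketched in a few lines of Python; the class and method names are illustrative, not from any actual implementation:]

```python
import sqlite3
import threading

class SafeDB:
    """One shared connection guarded by a mutex, as described above."""
    def __init__(self, path):
        # check_same_thread=False lets several threads use this handle;
        # the lock serializes access so only one thread touches it at a time.
        self.con = sqlite3.connect(path, check_same_thread=False)
        self.lock = threading.Lock()

    def execute(self, sql, params=()):
        with self.lock:
            rows = self.con.execute(sql, params).fetchall()
            self.con.commit()
            return rows

db = SafeDB(":memory:")
db.execute("CREATE TABLE t (n INTEGER)")
workers = [threading.Thread(target=db.execute,
                            args=("INSERT INTO t VALUES (?)", (i,)))
           for i in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(db.execute("SELECT COUNT(*) FROM t")[0][0])  # → 8
```

A single shared connection also maximizes page-cache reuse, which is the benefit mentioned above.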


Sabyasachi Ruj wrote:

I still fail to understand what I should synchronize on. I am *not* sharing
sqlite* across multiple threads.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



It is fundamental computer science, CS101 you might say.  Pick up a
textbook on basic computing.

Sabyasachi Ruj wrote:
> But can you tell me where is this documented please?
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> If you knew the answer then why did you ask the question?  You can
apply
>> regular CS or look for miracles.
>>
>> You might be better off using a server based RDBMS like Oracle or
>> PostgreSQL where the application programmer is insulated from
>> synchronization issues.  Sqlite has the "lite" in its name for a very
>> good reason.  It is designed for embedded applications, not as an
>> enterprise DBMS.
>>
>> Sabyasachi Ruj wrote:
>> > But the following link
>> > http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
>> > says nothing that I have to synchronize at the application level to
>> create
>> > multiple connections, until the same database connection is being
>> shared!
>> > Ref: The four points in 'Short Answer' section.
>> >
>> > BTW: for a DBMS it does not make sense if the application programmer
>> has
>> to
>> > synchronize to create multiple connection.
>> > And synhing will have considerable performance drop also.
>> >
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> Threadsafe only means that threads do not access global data
elements
>> or
>> >> that they synchronize (serialize) access to global data.  It does
>> >> nothing to synchronize threads.  That is up to the application
>> >> programmer.  Sqlite uses POSIX file locks for synchronixation 
but if

>> you
>> >> are in a totally threaded environment you can use thread sync
>> functions
>> >> like mutexes or the finer grained read and write lock thread
>> primitives.
>> >>
>> >> If you are accessing Sqlite across a network file locks are the way
to
>> >> go, but do depend upon network implementations and settings.  If 
you

>> >> have multiple processes on one OS you can sync using semaphores.
>> >>
>> >> Using textbook style synchronization ensures that you have minimal
>> >> problems and optimal performance.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > But I think we do not have to take care of synchronizing sqlite
>> access.
>> >> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >> >
>> >> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >>
>> >> >> The problem is fairly straight forward.  Sqlite is a single
>> resource
>> >> >> being shared by multiple thyreads so you just use fundamental
>> >> >> synchronization logic as you would when sharing any resource
>> between
>> >> >> competing threads.
>> >> >>
>> >> >> Sabyasachi Ruj wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > I am using sqlite in a multithreaded environment. I have take
>> >> >> > reasonable like not sharing sqlite* handles. I am creating a
new
>> >> >> sqlite*
>> >> >>
>> >> >> > for
>> >> >> > every thread.
>> >> >> >
>> >> >> > Where can we get more info on working with SQLite in a
>> multithreaded
>> >> >> > environment?
>> >> >> >
>> >> >> > The application is working as a service in windows.
>> >> >> >
>> >> >> > sqlite3_step() is failing with the following error mess

Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread John Stanton
One of the most endearing features of Sqlite is that it is a single 
file.  You can copy it with impunity.  If it is in use while you are 
copying you can launch an exclusive transaction to block other users and 
copy it and  be assured of its state.
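[A minimal sketch of that copy-under-exclusive-lock approach, assuming a rollback-journal database; the function name is illustrative, and newer SQLite versions also provide a dedicated online backup API:]

```python
import os
import shutil
import sqlite3
import tempfile

def snapshot(db_path, dest_path):
    # Take an exclusive transaction so no writer can touch the file
    # mid-copy, copy the single database file, then release the lock.
    # isolation_level=None lets us issue BEGIN explicitly.
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        con.execute("BEGIN EXCLUSIVE")   # blocks all other writers
        shutil.copyfile(db_path, dest_path)
    finally:
        con.execute("ROLLBACK")          # nothing changed; just release the lock
        con.close()

# Demo with throwaway files
tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "live.db"), os.path.join(tmp, "copy.db")
live = sqlite3.connect(src)
live.execute("CREATE TABLE log (msg TEXT)")
live.execute("INSERT INTO log VALUES ('hello')")
live.commit()

snapshot(src, dst)
copy = sqlite3.connect(dst)
print(copy.execute("SELECT msg FROM log").fetchone()[0])  # → hello
```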


Rich Rattanni wrote:

The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes into the database.
A simple copy worries me because it seems like messing with SQLITE on
the file level is dangerous since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs to be copied, or
any other situation like that.  I figured using the SQLITE API to do
the copy would afford me some protection against corruption.

On 6/18/07, Fred Williams <[EMAIL PROTECTED]> wrote:


It would most likely be much quicker (and simpler) just to utilize the
OS's file copying feature to copy the table.  What would be gained with
the attaching databases approach over just a straight file copy?

Fred

> -Original Message-
> From: Rich Rattanni [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 18, 2007 10:20 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Proper way to transfer a live sqlite database
>
>
> I was writing to ask some opinions on how to perform a download of a
> live sqlite database.  Basically I have a device, which stores all
> manner of data in a sqlite database.  Periodically I want to download
> the data to a central server for viewing.  I discussed it with my
> colleagues, and they felt that I should just copy the file to the
> server.  However I was thinking of having a blank database with
> identical schema to the database I am copying.  Then when the download
> occurs, I would ATTACH the live database to the blank database, and
> query the data from one to the other.  Then I would close the cloned
> version and offload that to the server.
>
> The standard questions now follow...
> Is this an acceptable way?
> Is there a better/best way?
>
> Thanks for any input,
> Rich Rattanni
>

















Re: [sqlite] FTS2 Experiences?

2007-06-18 Thread John Stanton
We have just started to use it.  So far it is performing well, but we 
have not subjected it to high volume and large data sets yet.


I have written a simple function which helps in our application.  The 
function concatenates columns to produce a block of text, then strips out 
punctuation and prepositions etc and upshifts the case.  The resulting 
string of words is used to create the FTS2 index.  Search strings go 
through the same function.  The FTS2 index is maintained by triggers and 
is transparent to the applications thanks to the user function.
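[The helper described might look roughly like this; the function name and stop-word list are illustrative, not the actual implementation:]

```python
import re

# Illustrative stop-word list; a real one would be larger.
STOPWORDS = {"A", "AN", "THE", "OF", "IN", "ON", "TO", "FOR", "AND"}

def normalize(*columns):
    # Concatenate the columns, upshift the case, strip punctuation,
    # and drop prepositions/articles -- the same function is applied
    # to search strings so indexed and query text match.
    text = " ".join(c or "" for c in columns).upper()
    words = re.findall(r"[A-Z0-9]+", text)
    return " ".join(w for w in words if w not in STOPWORDS)

print(normalize("The quick, brown fox", "jumps over."))
# → QUICK BROWN FOX JUMPS OVER
```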


Russell Leighton wrote:


Could folks that have used fts2 in production apps/systems relate their 
experiences to the group?


I would very much be interested in how folks are using it, how well it 
performs with large data and general impressions.


Thanks in advance.

Russ











Re: [sqlite] Custom Aggregation Not Working

2007-06-19 Thread John Stanton
We have implemented a decimal arithmetic module to handle money in 
Sqlite.  It uses the regular SQL definitions and maintains precison and 
scale.  The data are stored as TEXT and in "display format", right 
aligned with leading spaces so that they display without editing or 
radix transformation when inserting into HTML pages or reports.  The 
arithmetic maintains the fixed point precision and applies correct 
rounding using the classic algorithm which minimizes skew.


Sqlite tries to torpedo these types by wanting to transform decimal 
numbers into integers or floating point.  Some care is needed at the 
interface point to stop that happening.


The alternative way to store money in Sqlite is to use integers with an 
implied decimal point and perform appropriate rounding and point 
position maintenance when performing multiplication and division.  This 
is just like the traditional COMP3 BCD integer usage.


Using floating point for money in exacting applications is a trap for 
young players.  Once bitten, twice shy.
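[A sketch of the integer-with-implied-decimal-point approach just described, showing half-to-even ("banker's") rounding and an exact SUM, unlike the float sums in the post quoted below:]

```python
import sqlite3
from decimal import Decimal, ROUND_HALF_EVEN

def to_cents(amount):
    # Implied decimal point: store money as an integer count of cents,
    # rounded half-to-even (the classic skew-minimizing rule).
    return int(Decimal(amount).quantize(Decimal("0.01"),
                                        rounding=ROUND_HALF_EVEN) * 100)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (cents INTEGER)")
con.executemany("INSERT INTO sales VALUES (?)",
                [(to_cents("0.10"),)] * 10)    # ten payments of 0.10
total, = con.execute("SELECT SUM(cents) FROM sales").fetchone()
print(total, Decimal(total) / 100)  # → 100 1
```

Integer SUM is exact, so ten payments of 0.10 add up to exactly 1.00, which the floating-point version below fails to do.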


wcmadness wrote:
Hey, Folks: 


I'm writing a financial application and MUST have exact math decimals (no
floats). So, I'm using Python's decimal module. 


My database is Sqlite (and my language is Python with Pysqlite); Sqlite
doesn't offer a non-floating point decimal type. But, it does have adapters
and converters to store data as a native Sqlite type (string / text) in the
database and bring it out of the database and into memory as something else
(Python Decimal, in my case). That works great, but it does NOT seem to
apply to aggregation operations. I need it to. 


So, I tried using another Sqlite feature, custom aggregation functions, but
to no avail. Does anyone know how to fix this? What am I doing wrong? 


I am submitting all of my test code, below. Following that, I am submitting
my results. 

Thanks for your time in helping with this! 

Here's the code: 

import sqlite3 
import decimal 


# This way will store the value as float, potentially losing
precision. 

print '-' * 25 
print 'Testing native data types, no adapters / converters.' 
con = sqlite3.connect('test1.db') 
cur = con.cursor() 
cur.execute("create table test (pkey integer primary key,somenumber
Decimal);") 
cur.execute("insert into test values (null,.1);") 
cur.execute("insert into test values (null,.2);") 
cur.execute("select * from test;") 
rows = cur.fetchall() 
for row in rows: 
    print row[0], type(row[0]) 
    print row[1], type(row[1]) 
cur.close() 
con.close() 
print '-' * 25 

# This way will store the value as decimal, keeping exact precision. 

def AdaptDecimal(pdecValue): 
    return str(pdecValue) 

def ConvertDecimal(pstrValue): 
    return decimal.Decimal(pstrValue) 

decimal.getcontext().precision = 50 
sqlite3.register_adapter(decimal.Decimal, AdaptDecimal) 
sqlite3.register_converter("Decimal", ConvertDecimal) 


print 'Testing data type with adapters / converters. Decimal numbers should
be Python Decimal types.' 
con = sqlite3.connect('test2.db',detect_types = sqlite3.PARSE_DECLTYPES) 
cur = con.cursor() 
cur.execute("create table test (pkey integer primary key,somenumber
Decimal);") 
cur.execute("insert into test values (null,.1);") 
cur.execute("insert into test values (null,.2);") 
cur.execute("select * from test;") 
rows = cur.fetchall() 
for row in rows: 
    print row[0], type(row[0]) 
    print row[1], type(row[1]) 
cur.close() 
con.close() 
print '-' * 25 

# OK. That works. Now for the real test. Let's try an equality test. 
# Classic float equality failure .1 + .1... 10 times should NOT
equal 1. 
# As predicted, this will FAIL the equality test 


print 'Testing Sum aggregation on native data types. Should be float and
should fail equality test.' 
con = sqlite3.connect('test3.db') 
cur = con.cursor() 
cur.execute("create table test (pkey integer primary key,somenumber
Decimal);") 
for x in range(10): 
    cur.execute("insert into test values (null,.1);") 
cur.execute("select sum(somenumber) as total from test;") 
rows = cur.fetchall() 
print rows[0][0], type(rows[0][0]) 
if rows[0][0] == 1: 
    print 'equal' 
else: 
    print 'NOT equal' 
cur.close() 
con.close() 
print '-' * 25 


# Now, we try the exact same equality test, using adapters and
converters, substituting 
# the Python exact precision decimal type for float. 


# Probably don't need to re-register. We did that above. We probably just
need to parse declared types when 
# we open the connection. 
# H... This fails whether I re-register or not. 
# sqlite3.register_adapter(decimal.Decimal, AdaptDecimal) 
# sqlite3.register_converter("Decimal", ConvertDecimal) 
print "Testing Sum aggregation with adapters / converters registered. Result

SHOULD BE Python Decimal type, but is NOT. Should PASS equality test, but
doesn't." 
con = sqlite3.connect('test4.db',detect_types = sqlite3.PARSE_DECLTYPES) 
cur = con.cursor() 
cur.execute("create table test (pkey integer primary key,somenumber
Decimal);") 
for 

Re: [sqlite] Recommend server for Windows?

2007-06-19 Thread John Stanton


Gilles Ganault wrote:

At 11:20 19/06/2007 -0400, Clay Dowling
 wrote:


I'm going to recommend PostgreSQL.



Thanks for the idea, but if possible, we'd rather something really 
basic, typically a single EXE. Besides, using eg. PostgreSQL would 
require rewriting our application.


I went through the list of servers on the wiki, and the following 
solutions look interesting:


http://users.libero.it/irwin/ : uSQLiteServer - An SQLite network wrapper

http://www.oneledger.co.uk/sql4sockets.html

dhRPCServer + dhSQLite-COM-Wrapper http://www.thecommon.net/8.html

http://www.protracksource.com/node/42 : "SQLite Server is a free 
database server that is the central data store for Pro Track Source 
Connected. It comes with your purchase of Pro Track Source Connected, 
but we also offer it here for a free download in case you have lost it." 
(CHECK if server can be used with any SQLite client)


=> BTW, I though of a simple solution: a web-like server process that 
uses HTTP for the protocol, and commands are sent with the POST method. 
HTTP is tried and true, and since we use TCP, the server can be 
configured to only accept a single connection at once, taking care of 
concurrency.


Does someone know if something like this has been done?

Thank you.

I wrote such a server.  It uses HTTP and embeds Sqlite.  It is multi 
threaded and works very effectively.  It handles file serving for 
regular WWW purposes, and RPC mechanism for AJAX style database access 
from a browser.  It also includes an embedded server pages processor.


Embedding Sqlite in such a server is an effective way of getting a 
highly efficient network database.  In operation the server barely 
registers CPU usage, an indication of the effectiveness of the approach. 
 I carefully use sendfile/TransmitFile for network traffic to get good 
throughput.


Such a server can be made simpler than mine by making it single threaded.
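[A single-threaded sketch of the HTTP idea discussed above: the client POSTs a www-url-encoded SQL statement and gets the rows back as JSON. All names here are hypothetical, not taken from the server mentioned in this thread:]

```python
import json
import sqlite3
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Embedded database shared with the handler thread for this small demo.
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE t (a, b)")
con.execute("INSERT INTO t VALUES (1, 'x')")

class SQLHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the www-url-encoded body, run the SQL, reply with JSON rows.
        length = int(self.headers["Content-Length"])
        form = urllib.parse.parse_qs(self.rfile.read(length).decode())
        rows = con.execute(form["sql"][0]).fetchall()
        body = json.dumps(rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SQLHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

data = urllib.parse.urlencode({"sql": "SELECT a, b FROM t"}).encode()
reply = urllib.request.urlopen(
    "http://127.0.0.1:%d/" % server.server_port, data=data).read()
print(json.loads(reply))  # → [[1, 'x']]
server.shutdown()
```

Serving requests one at a time, as here, sidesteps concurrency entirely, which is exactly the simplification suggested above.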











Re: [sqlite] Recommend server for Windows?

2007-06-20 Thread John Stanton

Gilles Ganault wrote:

At 20:47 19/06/2007 -0500, John Stanton wrote:


Such a server can be made simpler then mine by making it single threaded.



Is it publicly available from http://www.viacognis.com?

Thanks
G.


No, but I can give you some code which might help your project.

The components which service SQL requests as www-url-encoded messages and 
return XML or JSON encapsulated DB rows might suit your purpose.











Re: [sqlite] Data structure for versioned data

2007-06-20 Thread John Stanton
We perform some versioning by holding column material in XML and using 
RCS to maintain reverse deltas and versions.


Samuel R. Neff wrote:

Not specific to SQLite, but we're working on an app that needs to keep
versioned data (i.e., the current values plus all previous values).  The
versioning is integral to the app so it's more than just an audit trail or
history.

Can anyone share experiences with the database structure for this type of
requirement or point me to helpful resources?

Thanks,

Sam



---
We're Hiring! Seeking a passionate developer to join our team building
products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]
 










Re: [sqlite] Alternative index methods (hash key)

2007-06-20 Thread John Stanton

Andrew Finkenstadt wrote:

On 6/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:



"Scott Hess" <[EMAIL PROTECTED]> wrote:
> On 6/20/07, Andrew Finkenstadt <[EMAIL PROTECTED]> wrote:
> > How difficult do you think it would be to support an alternative
method of
> > indexing within SQLite specifically to support O(1) retrieval of the
rowid
> > for a table, and then potentially O(1) retrieval of the row data 
for a

> > table, when in-order retrieval is undesired?

It's a major rewrite.  SQLite assumes btree access in a number of
important places.




Thanks for that info.  I will save myself the heartache then


>
> > My database design is highly specialized to never, ever retrieve data
except
> > by whole-key equality, and would benefit greatly from this
optimization.
> > I'm willing to code it up myself, although I'm not quite set up to do
SQL
> > syntax parser changes, preferring to use the amalgamation.
>
> Would you be able to live with converting your primary key to a 64-bit
> hash value?  In development of fts2, I found that direct rowid -> row
> access is pretty fast.  It's still technically O(logN), but the log's
> base is absurdly large, so it comes reasonably close to O(1).

I concur with Scott's assessment.  The typical fanout in the table
btrees used by SQLite is around 100 (assuming a 1K page) so you can
get to one of 10 million pages in only 3 page reads.  And the first
page is almost guaranteed to be in cache, so really only 2 page
reads.



My default page size is 32k, with the number of row entries at about a
hundred thousand... assuming base 100 for a 1k page, that would be base
3000 or so for 32k, which would mean one I/O if uncached, followed by a
search within the page for the actual row containing the data.  (I use
an indirect table to keep my small data separate from my blobs; perhaps
this was unnecessary.)

create table index_data ( id integer not null primary key autoincrement,
  rod blob not null unique /* alternate key of 20-bytes fixed */, ... );

create table file_data ( id integer not null primary key,
  compression_method integer not null, file_size integer not null,
  file_bytes blob not null);

Looking up index_data.id via rod equality takes longer than I would
like.  file_data retrieval has its own issues which I am still tuning.

--a
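Scott's suggestion above, hashing the key down to a 64-bit value and using it directly as the rowid, can be sketched as follows. This is an illustrative sketch only: the table name and 20-byte key mirror Andrew's schema, but the hash choice and the `rowid_hash` helper are my own, and Python's built-in sqlite3 module stands in for the C API. Note that two keys hashing to the same 64 bits would collide, so a real implementation needs a fallback check against the stored key.

```python
import hashlib
import sqlite3

def rowid_hash(key: bytes) -> int:
    # Fold the 20-byte key into a signed 64-bit integer usable as a rowid.
    # (SHA-1 truncated to 8 bytes; any well-mixed 64-bit hash would do.)
    return int.from_bytes(hashlib.sha1(key).digest()[:8], "big", signed=True)

con = sqlite3.connect(":memory:")
con.execute("create table index_data (id integer primary key, rod blob not null)")

key = b"\x01" * 20  # a fixed 20-byte alternate key, as in the schema above
con.execute("insert into index_data (id, rod) values (?, ?)",
            (rowid_hash(key), key))

# The lookup goes straight down the rowid btree -- no separate probe of
# a unique index on rod is needed.
row = con.execute("select rod from index_data where id = ?",
                  (rowid_hash(key),)).fetchone()
```

Because the hash *is* the rowid, the unique index on rod could even be dropped if collisions are handled some other way, saving the probe of a second btree on every lookup.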

I ran some trials of b-tree versus hashing on another project.  They 
convinced me that hashed access was only beneficial in a very low-memory 
environment, such as a database designed 40 years ago.  With the upper 
levels of the B-tree always in cache, and no clashes to handle, the 
B-tree was clearly the right approach.

What did give improved speed was larger node sizes and a binary search 
on keys within the node.
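The binary search within a node can be sketched with Python's bisect module (the node layout here is hypothetical, standing in for a btree page that keeps its keys in sorted order):

```python
import bisect

# A node's keys, kept in sorted order as on a btree page.
node_keys = [10, 20, 30, 40, 50]

def node_search(keys, target):
    # O(log n) comparisons within the node instead of a linear scan.
    i = bisect.bisect_left(keys, target)
    return i if i < len(keys) and keys[i] == target else None

pos = node_search(node_keys, 30)
```

The larger the node, the more a binary search wins over scanning the keys one by one.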





Re: [sqlite] querying number of open connections

2007-06-27 Thread John Stanton

Nate Constant wrote:

Hello, is there a way to query the number of open database connections?


An open file monitor like lsof will do it.




Re: [sqlite] FW: BLOB data retrieval

2007-06-27 Thread John Stanton

Krishnamoorthy, Priya (IE10) wrote:

Hi,

I have a database with a table that contains BLOB data. The table has
two columns - one is "Row_Num", an AUTOINCREMENT (INTEGER PRIMARY KEY)
column, and the other, "Data", contains BLOB data.

I am writing a program (in MS VC++) which has to read the blob data in
all the rows of the above mentioned table, one after another. I am
using cppSQLite3, a C++ wrapper for accessing the sqlite3 db. Mine is
an offline program that reads a database which is updated by another
program.

I wrote the following code for reading from the database:


    CppSQLite3DB db;
    CppSQLite3Buffer bufSQL;

    // Open the database. The name is provided by the user
    db.open(database_name);

    // Read the entire binary table
    bufSQL.format("select * from %s order by 1;", table_name);
    CppSQLite3Table b = db.getTable(bufSQL);
    CppSQLite3Binary blobz;

    // Read binary records one at a time until all the records are read
    for (int i = 1; i <= b.numRows(); i++)
    {
        CppSQLite3Query q;
        CppSQLite3Buffer sql_command;
        long length = 0;

        // Read the binary record from row number "i"
        sql_command.format("select Data from %s where Row_Num = %d;",
                           table_name, i);
        q = db.execQuery(sql_command);

        if (!q.eof())
        {
            blobz.setEncoded((unsigned char*)q.fieldValue("Data"));
            cout << "Retrieved binary length: "
                 << blobz.getBinaryLength() << endl;
        }

        const unsigned char* pbin = blobz.getBinary();
    }

 


This method works fine only when the table is small. For example, I
have a database over 2 GB in size, in which case I get an SQLITE_NOMEM
error when I call db.getTable(). But I need to know the number of rows
in the table in order to read the BLOB data in each row, one after
another.

Please let me know how I can go about doing this.

Secondly, the column "Row_Num" is of type INTEGER PRIMARY KEY, so it
will take a negative value after 2 GB, as mine is a 32-bit machine.
How do I access data beyond 2 GB?

 


Please help me in this regard.

 


Regards,

Priya

If you use the SQLite API with sqlite3_prepare/sqlite3_step logic, you 
will not have a memory problem.
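The prepare/step approach can be sketched with Python's built-in sqlite3 module, which steps a prepared statement row by row under the hood (the table mirrors the Row_Num/Data schema above; the names and sample data are illustrative). Only the current row is materialized, so memory use is independent of table size:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table blobs (Row_Num integer primary key autoincrement, "
            "Data blob not null)")
con.executemany("insert into blobs (Data) values (?)",
                [(bytes([i]) * 4,) for i in range(3)])

# One prepared statement, stepped row by row: each step fetches a single
# row, so a 2 GB table needs no more memory than a tiny one.
total = 0
for (data,) in con.execute("select Data from blobs order by Row_Num"):
    total += len(data)
```

Incidentally, SQLite rowids are 64-bit signed integers regardless of whether the host is a 32-bit machine, so Row_Num will not go negative at the 2 GB mark.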
 








