Re: [sqlite] Most wanted features of SQLite ?

2009-09-22 Thread John Stanton
P Kishor wrote:
> On Sat, Sep 19, 2009 at 9:21 AM, John Stanton <jo...@viacognis.com> wrote:
>> Alexey Pechnikov wrote:
>>> Hello!
>>>
>>> On Saturday 19 September 2009 00:43:18 Noah Hart wrote:
>>>
>>>> Stored Procedures
>>>>
>>> There are Tiny C compiler extension and realization of
>>> stored procedures for SQLite 2 and Lua extension and other.
>>> So you can use one or all of these.
>>>
>>> Best regards, Alexey Pechnikov.
>>> http://pechnikov.tel/
>>>
>> There is a PL/SQL implementation available and we use Javascript as a
>> stored procedure capability in Sqlite.  It integrates nicely with WWW
>> applications.
> 
> 
> well, can you share the above with the rest of the community?
> 
I distributed it a while ago.  I shall hunt it down again and put up a 
webpage.  I basically used SpiderMonkey and wrote an Sqlite interface. 
The Javascript plus Sqlite can run stand-alone or as a Javascript 
extension to Sqlite.  It is very handy because an algorithm written in 
Javascript can run server side or client side in WWW applications.
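
For anyone wanting to try something similar, the usual hook for exposing a 
scripting engine to SQL is sqlite3_create_function().  Below is a minimal 
sketch in C; the function name and the stub evaluator are illustrative only, 
not the actual SpiderMonkey glue described above.

  #include <sqlite3.h>

  /* Stub evaluator: a real extension would hand the text to an embedded
     Javascript engine and return that engine's result. */
  static void eval_js(sqlite3_context *ctx, int argc, sqlite3_value **argv)
  {
      const unsigned char *script = sqlite3_value_text(argv[0]);
      (void)argc;
      if (script == 0) {
          sqlite3_result_null(ctx);
          return;
      }
      /* ... run the script with the embedded engine here ... */
      sqlite3_result_text(ctx, (const char *)script, -1, SQLITE_TRANSIENT);
  }

  /* After registration, SELECT eval_js('...') invokes the callback above. */
  int register_js(sqlite3 *db)
  {
      return sqlite3_create_function(db, "eval_js", 1, SQLITE_UTF8, 0,
                                     eval_js, 0, 0);
  }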



Re: [sqlite] Most wanted features of SQLite ?

2009-09-22 Thread John Stanton
Alexey Pechnikov wrote:
> Hello!
> 
> On Saturday 19 September 2009 18:21:22 John Stanton wrote:
>> There is a PL/SQL implementation available and we use Javascript as a 
>> stored procedure capability in Sqlite.  It integrates nicely with WWW 
>> applications.
> 
> I don't know this. Can you show link to docs and examples? Thx.
> 
> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/
> 
I don't have a link but the developer has recent email on this list.

> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Most wanted features of SQLite ?

2009-09-19 Thread John Stanton
Alexey Pechnikov wrote:
> Hello!
>
> On Saturday 19 September 2009 00:43:18 Noah Hart wrote:
>   
>> Stored Procedures
>> 
>
> There are Tiny C compiler extension and realization of
> stored procedures for SQLite 2 and Lua extension and other.
> So you can use one or all of these.
>
> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/
>   
There is a PL/SQL implementation available and we use Javascript as a 
stored procedure capability in Sqlite.  It integrates nicely with WWW 
applications.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] AIX performance

2009-08-29 Thread John Stanton
What machine runs AIX and what machine runs Linux and what FS are you 
using on Linux?

Ken wrote:
> Im looking for some ideas as to why the performance for a select statement 
> takes roughly 5 times longer on AIX as on a linux system.
>
> The disk drive on the aix platform is a 10k scsi vs the 7200 rpm drive on 
> linux.
>
> AIX:
> sqlite> .timer on
> sqlite> .output ken.out
> sqlite> .read kdo.sql
> CPU Time: user 26.321955 sys 6.498729
> sqlite> 
>
> Linux:
> sqlite> .timer on
> sqlite> .output ken.out
> sqlite> .read kdo.sql
> CPU Time: user 4.648290 sys 0.888056
> sqlite> 
>
> Thanks,
> Ken
>
> Adding pragma temp_store=2 does seem to help, it does reduce the sys time 
> from 6 to .5
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] SQLJet - pure Java implementation of SQLite

2009-08-13 Thread John Stanton
Our approach in the 1980s was to make the compiler.

Interestingly you mention a Databus to C product which did not flourish 
(as one would expect for a translator).  We produced a complete Databus 
compiler emitting a target code much like Java's.  This product sold well and is 
still viable thanks to the large base of legacy Databus source.  The 
virtual machine, like the Java VM, was implemented on many architectures 
and would run the target code unchanged, like Java.  All this predated 
Java and found use with OEMs distributing applications across many 
architectures as well as with outfits preserving their legacy programs.

Native code implementations were merely a matter of making a native code 
generator for the compiler backend.

Fred Williams wrote:
> I think you are talking "Open Source" world now.  In the late 80's, early
> 90's things like Open Source Databas/Datashare compilers weren't even in
> anyone's dreams.  What you saw was what you got.  With no open source
> compilers to monkey with you just did what you had to, pretty or not.
>
> Somebody did later write a Databus/Datashare to C translator and the
> Datapoint world beat a very deep pathway to his door!  Shortly after that
> Datapoint was no more and his market evaporated.
>
> Sorry, this ain't SQLite at all.  I'm done here.
>
> Fred
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]on Behalf Of John Stanton
> Sent: Thursday, August 13, 2009 1:32 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] SQLJet - pure Java implementation of SQLite
>
>
> In this situation it has been our approach never to try to translate
> source in one language into another.  It is a pointless activity when
> you think about it.  Much better to give the C (or whatever) compiler
> a different code generator, for example one emitting Java target code.
>
> You are not rev locked that way, and if your C compiler optimizes well
> you have an efficient executable.  Your JIT compiler still works.
>
> People would spend months translating programs into error-ridden monsters
> instead of spending less time working on a compiler and ending up with
> unchanged source with no added errors, which can be easily maintained
> in the original code with the comments still meaningful.
>
> Fred Williams wrote:
>   
>> Having had the unfortunate opportunity to use a couple of language
>> translators as well as spending about six fruitless months developing one
>> which in the end was no better, I say there is no known translation that
>> would allow the three SQLite, "Small, Fast, Reliable" adjectives to
>> translate into any regurgitated language output, with the exception of
>> compiling SQLite source with a C++ compiler :-)
>>
>> Fred
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Roger Binns
>> Sent: Tuesday, August 11, 2009 7:15 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] SQLJet - pure Java implementation of SQLite
>>
>>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Alexander Kitaev wrote:
>>
>> 
>>> Not to depend on native SQLite binaries or
>>> opaque NestedVM code,
>>>
>>>   
>> As a matter of interest what problem exactly do you have with NestedVM?
>> It's output is indeed opaque (not human comprehensible) but the same is
>> 
> true
>   
>> of Java source versus bytecode.  In both cases the input source is
>> 
> readable.
>   
>> It would also be interesting if anyone has built something that
>> 
> comprehends
>   
>> the SQLite C source and then does the conversion into other languages
>> 
> based
>   
>> on that.  It would make updates a lot easier, the generation of
>> 
> instrumented
>   
>> and test code easier, and the search for issues or optimisations easier.
>>
>> Roger
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v1.4.9 (GNU/Linux)
>> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>>
>> iEYEARECAAYFAkqCCWgACgkQmOOfHg372QQXqwCeJ4pqKa89vcCAxTQOelMyoPU6
>> cuQAoK6Feey6AL3pdzMgv983tn8Yg1ML
>> =TKoq
>> -END PGP SIGNATURE-
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite

Re: [sqlite] SQLJet - pure Java implementation of SQLite

2009-08-13 Thread John Stanton
In this situation it has been our approach never to try to translate 
source in one language into another.  It is a pointless activity when 
you think about it.  Much better to give the C (or whatever) compiler 
a different code generator, for example one emitting Java target code.

You are not rev locked that way, and if your C compiler optimizes well 
you have an efficient executable.  Your JIT compiler still works.

People would spend months translating programs into error-ridden monsters 
instead of spending less time working on a compiler and ending up with 
unchanged source with no added errors, which can be easily maintained 
in the original code with the comments still meaningful.

Fred Williams wrote:
> Having had the unfortunate opportunity to use a couple of language
> translators as well as spending about six fruitless months developing one
> which in the end was no better, I say there is no known translation that
> would allow the three SQLite, "Small, Fast, Reliable" adjectives to
> translate into any regurgitated language output, with the exception of
> compiling SQLite source with a C++ compiler :-)
>
> Fred
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]on Behalf Of Roger Binns
> Sent: Tuesday, August 11, 2009 7:15 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] SQLJet - pure Java implementation of SQLite
>
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Alexander Kitaev wrote:
>   
>> Not to depend on native SQLite binaries or
>> opaque NestedVM code,
>> 
>
> As a matter of interest what problem exactly do you have with NestedVM?
> It's output is indeed opaque (not human comprehensible) but the same is true
> of Java source versus bytecode.  In both cases the input source is readable.
>
> It would also be interesting if anyone has built something that comprehends
> the SQLite C source and then does the conversion into other languages based
> on that.  It would make updates a lot easier, the generation of instrumented
> and test code easier, and the search for issues or optimisations easier.
>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iEYEARECAAYFAkqCCWgACgkQmOOfHg372QQXqwCeJ4pqKa89vcCAxTQOelMyoPU6
> cuQAoK6Feey6AL3pdzMgv983tn8Yg1ML
> =TKoq
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Merging blobs on disk without taking up memory???

2009-08-09 Thread John Stanton
If you map each BLOB into virtual memory and create an anonymous mapped 
file to receive the concatenated result, you can handle the blobs without 
using your stack and heap memory.  Of course this assumes that your OS 
supports memory-mapped files (e.g. Unix/Linux mmap or its Windows equivalent).
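
Within SQLite itself, the incremental blob API gives a comparable way to 
avoid holding the whole image in memory: create the destination row with 
zeroblob() at the final size, then stream each piece into place.  A sketch 
in C, with table and column names assumed purely for illustration:

  #include <sqlite3.h>

  /* Assumes pieces(part INTEGER, data BLOB) holds the fragments and the
     destination row in images(img BLOB) was created with
     zeroblob(total_size) so it can be opened for writing. */
  int assemble_image(sqlite3 *db, sqlite3_int64 dest_rowid)
  {
      sqlite3_stmt *sel;
      sqlite3_blob *out;
      int off = 0, rc;

      rc = sqlite3_blob_open(db, "main", "images", "img", dest_rowid, 1, &out);
      if (rc != SQLITE_OK) return rc;

      rc = sqlite3_prepare_v2(db, "SELECT data FROM pieces ORDER BY part",
                              -1, &sel, 0);
      if (rc != SQLITE_OK) { sqlite3_blob_close(out); return rc; }

      while (sqlite3_step(sel) == SQLITE_ROW) {
          const void *p = sqlite3_column_blob(sel, 0);
          int n = sqlite3_column_bytes(sel, 0);
          sqlite3_blob_write(out, p, n, off);  /* copy this piece into place */
          off += n;
      }
      sqlite3_finalize(sel);
      return sqlite3_blob_close(out);
  }

Only one piece is held in memory at a time; the destination is written 
incrementally on disk.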

sorka wrote:
> Hi. I have a table that stores pieces of an image as a bunch of blobs. I get
> the pieces out of order and store them in a table until I get all of the
> pieces I need. I then want to assemble them in order and store the resulting
> complete image in in another table entirely. 
>
> Is there a smart way to go about this so that memory use is limited? A
> select and order by the part number will return each blob in order. Is there
> a way to use concat to build up the final blob in the other table without
> having to store all the pieces in memory first?
>
> Thanks.
>   



Re: [sqlite] ANN: SQLite 3.6.16.C#

2009-08-02 Thread John Stanton
Maybe the author could explain the reason for the C# translation.  Surely a 
better approach, if the JIT is required, would be to use something like 
gcc with its code generator changed to emit C# metacode.  Such a product 
may already exist.

A translated program is rev locked.

Kosenko Max wrote:
> Seems like I've missed something...
> 
> Well, if there would be a team dedicated to supporting managed
> implementation of SQLite which can be at any time quickly updated to reflect
> all changes of SQLite native - anyone can always transfer such requests to
> that team. Same happens i.e. with SQLite.NET which might look like managed
> implementation of SQLite, but it's just embedding native part in .NET DLL
> which in turn implements full scale ADO.NET support.
> 
> Isn't SQLite project in general will benefit from having line by line
> (except critical paths requiring another implementation) managed version of
> product? It looks for me like a contribution and not as a headache.
> 
> I can say that my inspection shows that this is pure managed C# with just 10
> P/Invokes those can be easily eliminated. Some inefficiencies exists and
> they also could be removed later. Silverlight, Moonlight, Mono can benefit
> from managed SQLite. But I think that managed version can also give an
> ability to be more flexible in some tryouts of further improvement and
> optimization of SQLite itself.
> 
> Max
> 
> Tim Anderson-2 wrote:
>>> I don't know why he insists on that (he actually can answer for himself
>>> here) while there are a lot of SQLite based projects with that name
>>> usage.
>> Well, he's already answered it: he doesn't want to get support requests
>> for the port.
>>
>> I think I'm right in saying that most of the SQLite-named projects out
>> there are wrappers rather than ports, albeit some of them link the code
>> into their own executables. Are there any ports to other languages called
>> SQLite?
>>
>> It may also be a kind of compliment - that he thinks the port could prove
>> popular. If it is pure C# with no interop, I agree - would be very useful
>> for Silverlight, for example.
>>
>> Tim
>>
> 
> 
> -
> Best Regards.
> Max Kosenko.



Re: [sqlite] DATETIME and storage type

2009-07-28 Thread John Stanton
Sqlite does not have a DATETIME type.  It stores the declared type but 
ignores it.  You can use it in your application.
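
For the second question, the typeof() SQL function reports the storage 
class actually used for each stored value.  A small sketch in C, using the 
table and column from the question:

  #include <stdio.h>
  #include <sqlite3.h>

  /* Prints the storage class (text, integer, real, blob or null) used for
     each stored value of the column. */
  void show_storage_classes(sqlite3 *db)
  {
      sqlite3_stmt *st;
      const char *sql = "SELECT typeof(last_modified) FROM notes";
      if (sqlite3_prepare_v2(db, sql, -1, &st, 0) != SQLITE_OK) return;
      while (sqlite3_step(st) == SQLITE_ROW)
          printf("stored as %s\n", (const char *)sqlite3_column_text(st, 0));
      sqlite3_finalize(st);
  }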

Rael Bauer wrote:
> Hi,
>  
> If I declare a field as DATETIME default "2001-01-01", ( e.g. alter table 
> "notes" add column "last_modified" DATETIME default "2001-01-01";) will the 
> declared default value be stored as a string or real value?
>  
> Also, more generally, how can I find out what storage type field values have 
> been stored in?
>  
> Thanks
> Rael Bauer
>
>
>   
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Updating a database by email

2009-07-13 Thread John Stanton
Bruce Robertson wrote:
>> We use a system of updating an Sqlite database but use port 80 to send
>> data in XML format.
>>
>> Is there a reason you have access to the email port and not the HTTP
>> port 80?  It is very complex to identify a missing email, but using port
>> 80 and HTTP the process is almost trivial.
>>
>> Our system maintains sync'd databases and compensates for network
>> outages quite robustly.  Remote work sites can continue to operate when
>> the communication link is down and thus deliver 24/7 availability.
>> 
>
> Could you describe this in more detail?
>   
The application is actually a set of remote weigh stations.  The load 
cell is interfaced to a PC which has an internet connection.  The local 
PC maintains a mirror of the parts of the central database relevant to 
that location.  Transaction data is embedded in an XML document and 
uses port 80 to penetrate a firewall and reach the central server.  An 
ACK or NAK comes back in XML.  Reference data updates come from the 
central server in XML and update the local mirror DB.  Locally added 
reference data triggers an XML message to the central server.

Each table in the central server has a signature word which is updated 
at each modification.  The remote mirrors use that signature to verify 
their sync state.

When the internet connection is broken local transactions are queued and 
when the connection is restored the queue empties.

The remote application is a Windows or Linux program which implements 
HTTP protocol.  The central server just uses a regular web server.  XML 
is used to carry the data so that changes in database schema and message 
content will not break deployed systems.

This system is easy to install because it will work anywhere a browser 
will work to support this simple distributed database.
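
A hedged sketch in C of the per-table signature idea, using triggers to 
bump a counter that a remote mirror can compare against its own copy.  All 
table and trigger names here are invented for illustration, not taken from 
the actual system:

  #include <sqlite3.h>

  /* Creates a side table holding one signature word per tracked table and
     triggers that bump it whenever the tracked table changes. */
  int add_signature_tracking(sqlite3 *db)
  {
      return sqlite3_exec(db,
          "CREATE TABLE IF NOT EXISTS parts("
          "  id INTEGER PRIMARY KEY, descr TEXT);"
          "CREATE TABLE IF NOT EXISTS table_sig("
          "  tbl TEXT PRIMARY KEY, sig INTEGER);"
          "INSERT OR IGNORE INTO table_sig VALUES('parts', 0);"
          "CREATE TRIGGER IF NOT EXISTS parts_sig_ins AFTER INSERT ON parts"
          " BEGIN UPDATE table_sig SET sig = sig + 1 WHERE tbl = 'parts'; END;"
          "CREATE TRIGGER IF NOT EXISTS parts_sig_upd AFTER UPDATE ON parts"
          " BEGIN UPDATE table_sig SET sig = sig + 1 WHERE tbl = 'parts'; END;"
          "CREATE TRIGGER IF NOT EXISTS parts_sig_del AFTER DELETE ON parts"
          " BEGIN UPDATE table_sig SET sig = sig + 1 WHERE tbl = 'parts'; END;",
          0, 0, 0);
  }

A mirror can then fetch the server's table_sig rows and compare them with 
its own to decide which tables need refreshing.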
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Updating a database by email

2009-07-12 Thread John Stanton
We use a system of updating an Sqlite database but use port 80 to send 
the data in XML format.

Is there a reason you have access to the email port and not the HTTP 
port 80?  It is very complex to identify a missing email, but using port 
80 and HTTP the process is almost trivial.

Our system maintains sync'd databases and compensates for network 
outages quite robustly.  Remote work sites can continue to operate when 
the communication link is down and thus deliver 24/7 availability.

Sylvain Pointeau wrote:
> what about to use a unique identifier for each row?but a unique identifier
> like UUID in sqlserver or Quuid in Qt?
> as it is unique over the world it could be enough?
>
> at the worst case, prefix an id by the name of your machine if you have only
> 2 machines.
>
> then merge the records together?
>
> cheers,
> Sylvain
>
>
> On Sun, Jul 12, 2009 at 2:56 PM, Simon Slavin
> wrote:
>
>   
>> On 11 Jul 2009, at 8:03am, raf wrote:
>>
>> 
>>> you could also send individual sql statements via email but
>>> email doesn't always arrive. that is a major problem. you
>>> would need an acknowledgement and retry mechanism. perhaps
>>> there's an april fool's RFC for implementing TCP/IP via email :)
>>>
>>> once you've got that sorted, you'll need to make sure
>>> that the updates are digitally signed and probably encrypted
>>> and verify the signatures at the recipient's end.
>>>
>>> and i'm only half joking :-)
>>>   
>> Well of course you're joking.  It's blatantly obvious that the correct
>> transport protocol for SQL messages would be an RSS feed.  Although
>> come to think of it, you could Tweet them.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] general question

2009-07-08 Thread John Stanton
For searching text files grep can be very handy.  A regular expression 
is powerful.

Jay A. Kreibich wrote:
> On Wed, Jul 08, 2009 at 11:53:13AM +0100, Simon Slavin scratched on the wall:
>   
>> On 8 Jul 2009, at 11:40am, _h_ wrote:
>>
>> 
>>> Does any mechanism is available via which I can bind the db with  
>>> text file
>>> and can use the db apis to access that text file, and can perform  
>>> the i/o.
>>>   
>> For SQLite to be useful for you, the data must be in a SQLite database  
>> file.  You cannot manipulate data in a text file.
>> 
>
>   If you're willing to write a bit of code, you can do whatever you
>   want.  I've written virtual-table extensions that can read Apache
>   logs, one for Squid web-cache logs, and several for various binary
>   file formats we used at my last job.  It's hugely useful to just
>   attach the database to the existing log files (including live files)
>   on the occasions I needed to search for stuff, rather than import
>   the files first.  In the case of the binary files, performance was
>   actually better than imported data (nearly all our queries required
>   full table scans).
>
>   I've never done a virtual-table that supported writing to a
>   text-based file format, but there is no reason why it couldn't be
>   done.
>
>-j
>
>
>   



Re: [sqlite] Repost: Accessing a DB while copying it causes Windows to eat virtual memory

2009-07-08 Thread John Stanton
A valid backup needs to be a snapshot.  It is of no value if other 
processes were able to modify the file as it was being backed up.  An 
exclusive (write) lock of some type is required, or else the file needs 
to be organized differently.
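
For completeness, a minimal sketch in C of the online backup API Neville 
mentions below (available in SQLite 3.6.11 and later); it copies a 
consistent snapshot page by page.  The destination path is a placeholder.

  #include <sqlite3.h>

  int snapshot_db(sqlite3 *src, const char *dest_path)
  {
      sqlite3 *dst;
      sqlite3_backup *bk;
      int rc = sqlite3_open(dest_path, &dst);
      if (rc != SQLITE_OK) { sqlite3_close(dst); return rc; }

      bk = sqlite3_backup_init(dst, "main", src, "main");
      if (bk) {
          sqlite3_backup_step(bk, -1);   /* copy every page in one pass */
          sqlite3_backup_finish(bk);
      }
      rc = sqlite3_errcode(dst);
      sqlite3_close(dst);
      return rc;
  }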

Neville Franks wrote:
> A proper backup program that opened the file for r/o, non-exclusive
> use may be able to copy the file.
>
> I thought SQLite has a live backup capability now. Surely that is the
> best way to handle this.
>
>
> Wednesday, July 8, 2009, 6:16:41 PM, you wrote:
>
> JS> It looks like you are trying to copy from a process other than the one
> JS> which holds the lock.
>
> JS> Think about it - the lock gives exclusive access to the file to the 
> JS> process which sets it and blocks all other processes.
>
> JS> Stan Bielski wrote:
>   
>>> Just to make sure I understood you correctly, is this what you are 
>>> suggesting?
>>>
>>> * Open file with sqlite app (in my case sqlite3)
>>> * execute BEGIN EXCLUSIVE TRANSACTION;
>>> * initiate the file copy
>>> * COMMIT; after the copy is finished.
>>>
>>> I tried doing this, but very early into the copy Windows issues the
>>> following error:
>>>
>>> Errror 0x80070021: The process cannot access the file because another
>>> process has locked a portion of the file.
>>>
>>>
>>> On Tue, Jul 7, 2009 at 1:03 PM, John Stanton<jo...@viacognis.com> wrote:
>>>   
>>>   
>>>> You should synchronize your backup (copy).  Try surrounding it with an
>>>> exclusive transaction.
>>>>
>>>> Stan Bielski wrote:
>>>> 
>>>> 
>>>>> Sorry for the repost, but the original thread was hijacked by another
>>>>> list user. This is a serious problem IMHO; it looks like the DB can't
>>>>> be backed-up without rendering the machine unusable if a query hits it
>>>>> while a copy is in progress.
>>>>>
>>>>> Hello,
>>>>>
>>>>> In the course of copying a largish (20 GB) database file while
>>>>> accessing it via sqlite3, the machine became very unresponsive. I
>>>>> opened task manager and found that the system was using a huge amount
>>>>> of virtual memory, causing it to thrash. Per-process memory usage
>>>>> looked normal and did not add up to anywhere near system-wide VM
>>>>> usage.
>>>>>
>>>>> I ran into this issue at a customer site and was able to reproduce it
>>>>> using a local Windows 2008 installation. I have not installed any
>>>>> backup software or a virus scanner. Storage is local disk, SQLite
>>>>> version is 3.3.17.
>>>>>
>>>>> At first I thought that this was a general Windows problem involving a
>>>>> process accessing a file that is being copied, but other binaries I
>>>>> tested do not cause the same behavior that sqlite3 does. I performed
>>>>> the following experiments to try to diagnose the issue.
>>>>>
>>>>> Case 1:
>>>>>
>>>>> * I copy a 20 GB sqlite DB using Windows' own copy utility (e.g. via 
>>>>> explorer).
>>>>> * At any point during the copy, I open the file being copied in sqlite3
>>>>> * I exit sqlite3.
>>>>> * During the rest of the copy the OS will consume virtual memory
>>>>> linear (seemingly identical) to the amount of data copied since the
>>>>> process opened the file.
>>>>>
>>>>> I repeated this experiment using a similarly-sized file created from
>>>>> /dev/zero (i.e. an invalid DB) and the results were the same.
>>>>>
>>>>> Case 2:
>>>>>
>>>>> * I copy the sqlite DB using Windows' own copy utility (e.g. via 
>>>>> explorer).
>>>>> * At any point during the copy, I run 'strings' with the file as an 
>>>>> argument.
>>>>> * I exit strings.
>>>>> * The copy does not result in the OS consuming additional virtual memory.
>>>>>
>>>>> Case 3:
>>>>>
>>>>> * I open the DB in sqlite3
>>>>> * I let sqlite3 idle and do not input any commands.
>>>>> * I copy a sqlite DB using Windows' own copy utility (e.g. via explorer).
>>>>> * I continue to let sqlite3 idle and do not input any commands.
>>>>> * During the rest of the copy the OS will consume virtual memory
>>>>> linear (seemingly identical) to the amount of data copied since the
>>>>> process opened the file.
>>>>>
>>>>> Is there a workaround for this issue? Any assistance or info is 
>>>>> appreciated.
>>>>>
>>>>> Thanks,
>>>>> -Stan
>>>>>   
>
>   



Re: [sqlite] Repost: Accessing a DB while copying it causes Windows to eat virtual memory

2009-07-08 Thread John Stanton
It looks like you are trying to copy from a process other than the one 
which holds the lock.

Think about it - the lock gives exclusive access to the file to the 
process which sets it and blocks all other processes.

Stan Bielski wrote:
> Just to make sure I understood you correctly, is this what you are suggesting?
>
> * Open file with sqlite app (in my case sqlite3)
> * execute BEGIN EXCLUSIVE TRANSACTION;
> * initiate the file copy
> * COMMIT; after the copy is finished.
>
> I tried doing this, but very early into the copy Windows issues the
> following error:
>
> Errror 0x80070021: The process cannot access the file because another
> process has locked a portion of the file.
>
>
> On Tue, Jul 7, 2009 at 1:03 PM, John Stanton<jo...@viacognis.com> wrote:
>   
>> You should synchronize your backup (copy).  Try surrounding it with an
>> exclusive transaction.
>>
>> Stan Bielski wrote:
>> 
>>> Sorry for the repost, but the original thread was hijacked by another
>>> list user. This is a serious problem IMHO; it looks like the DB can't
>>> be backed-up without rendering the machine unusable if a query hits it
>>> while a copy is in progress.
>>>
>>> Hello,
>>>
>>> In the course of copying a largish (20 GB) database file while
>>> accessing it via sqlite3, the machine became very unresponsive. I
>>> opened task manager and found that the system was using a huge amount
>>> of virtual memory, causing it to thrash. Per-process memory usage
>>> looked normal and did not add up to anywhere near system-wide VM
>>> usage.
>>>
>>> I ran into this issue at a customer site and was able to reproduce it
>>> using a local Windows 2008 installation. I have not installed any
>>> backup software or a virus scanner. Storage is local disk, SQLite
>>> version is 3.3.17.
>>>
>>> At first I thought that this was a general Windows problem involving a
>>> process accessing a file that is being copied, but other binaries I
>>> tested do not cause the same behavior that sqlite3 does. I performed
>>> the following experiments to try to diagnose the issue.
>>>
>>> Case 1:
>>>
>>> * I copy a 20 GB sqlite DB using Windows' own copy utility (e.g. via 
>>> explorer).
>>> * At any point during the copy, I open the file being copied in sqlite3
>>> * I exit sqlite3.
>>> * During the rest of the copy the OS will consume virtual memory
>>> linear (seemingly identical) to the amount of data copied since the
>>> process opened the file.
>>>
>>> I repeated this experiment using a similarly-sized file created from
>>> /dev/zero (i.e. an invalid DB) and the results were the same.
>>>
>>> Case 2:
>>>
>>> * I copy the sqlite DB using Windows' own copy utility (e.g. via explorer).
>>> * At any point during the copy, I run 'strings' with the file as an 
>>> argument.
>>> * I exit strings.
>>> * The copy does not result in the OS consuming additional virtual memory.
>>>
>>> Case 3:
>>>
>>> * I open the DB in sqlite3
>>> * I let sqlite3 idle and do not input any commands.
>>> * I copy a sqlite DB using Windows' own copy utility (e.g. via explorer).
>>> * I continue to let sqlite3 idle and do not input any commands.
>>> * During the rest of the copy the OS will consume virtual memory
>>> linear (seemingly identical) to the amount of data copied since the
>>> process opened the file.
>>>
>>> Is there a workaround for this issue? Any assistance or info is appreciated.
>>>
>>> Thanks,
>>> -Stan
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>   
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Repost: Accessing a DB while copying it causes Windows to eat virtual memory

2009-07-07 Thread John Stanton
You should synchronize your backup (copy).  Try surrounding it with an 
exclusive transaction.

Stan Bielski wrote:
> Sorry for the repost, but the original thread was hijacked by another
> list user. This is a serious problem IMHO; it looks like the DB can't
> be backed-up without rendering the machine unusable if a query hits it
> while a copy is in progress.
>
> Hello,
>
> In the course of copying a largish (20 GB) database file while
> accessing it via sqlite3, the machine became very unresponsive. I
> opened task manager and found that the system was using a huge amount
> of virtual memory, causing it to thrash. Per-process memory usage
> looked normal and did not add up to anywhere near system-wide VM
> usage.
>
> I ran into this issue at a customer site and was able to reproduce it
> using a local Windows 2008 installation. I have not installed any
> backup software or a virus scanner. Storage is local disk, SQLite
> version is 3.3.17.
>
> At first I thought that this was a general Windows problem involving a
> process accessing a file that is being copied, but other binaries I
> tested do not cause the same behavior that sqlite3 does. I performed
> the following experiments to try to diagnose the issue.
>
> Case 1:
>
> * I copy a 20 GB sqlite DB using Windows' own copy utility (e.g. via 
> explorer).
> * At any point during the copy, I open the file being copied in sqlite3
> * I exit sqlite3.
> * During the rest of the copy the OS will consume virtual memory
> linear (seemingly identical) to the amount of data copied since the
> process opened the file.
>
> I repeated this experiment using a similarly-sized file created from
> /dev/zero (i.e. an invalid DB) and the results were the same.
>
> Case 2:
>
> * I copy the sqlite DB using Windows' own copy utility (e.g. via explorer).
> * At any point during the copy, I run 'strings' with the file as an argument.
> * I exit strings.
> * The copy does not result in the OS consuming additional virtual memory.
>
> Case 3:
>
> * I open the DB in sqlite3
> * I let sqlite3 idle and do not input any commands.
> * I copy a sqlite DB using Windows' own copy utility (e.g. via explorer).
> * I continue to let sqlite3 idle and do not input any commands.
> * During the rest of the copy the OS will consume virtual memory
> linear (seemingly identical) to the amount of data copied since the
> process opened the file.
>
> Is there a workaround for this issue? Any assistance or info is appreciated.
>
> Thanks,
> -Stan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] very large SQLite tables

2009-06-27 Thread John Stanton
I have actually implemented such a structure, and it worked well.

Kosenko Max wrote:
> You're talking about db size much less than 1 billion records.
>
> In 1 billion records db with described scenario cache hit ratio so small
> that everything you're talking about just very close to zero difference in
> effect. Because 1 uncached random IO operation is 10ms. Any reasonable
> calculations (in the current scenario) or even saving new page near current
> page far less than 10ms.
>
> And that's what I've said - that proposal won't help and will make things a
> bit worse and more complex.
>
> In future, when we all will forget 100 IOPS wall and will be hitting 100K-1M
> IOPS, your assumptions might become in place with large DB. But I'm sure -
> tricks like splitting table into 100-1 tables with hash wheel in mind
> won't give any additional significant benefit. Hashtables can be faster in
> case you don't need range operations, but placing hash on top of B-Tree to
> eleminate single b-tree page shouldn't give any speedup.
>
> If you have proven that this trick still works - I will be glad to see code
> sample with benchmarks.
>
> Thanks.
> Max.
>
>
> John Stanton-3 wrote:
>   
>> Quite wrong.   Searching a B-Tree is relatively inexpensive but node 
>> splits are expensive.
>>
>> Inserting a non-terminal key in a part filled leaf node is cheap, 
>> inserting a terminal key is more expensive and a split is more expensive 
>> again
>>
>> The reason we spend the extra resources maintaining B-tree indices is 
>> because they maintain the keys in sorted sequence.  If maintaining keys 
>> in order is not required a hashing method can be faster.
>>
>> Our fastest B-Tree indices use the virtual memory capability of the OS 
>> as cache and perform very well by avoiding buffer shadowing and 
>> maximizing utilization of physical memory.
>> 



Re: [sqlite] very large SQLite tables

2009-06-27 Thread John Stanton
Quite wrong.  Searching a B-Tree is relatively inexpensive but node 
splits are expensive.

Inserting a non-terminal key in a part-filled leaf node is cheap, 
inserting a terminal key is more expensive, and a split is more expensive 
again.

The reason we spend the extra resources maintaining B-tree indices is 
because they maintain the keys in sorted sequence.  If maintaining keys 
in order is not required a hashing method can be faster.

Our fastest B-Tree indices use the virtual memory capability of the OS 
as cache and perform very well by avoiding buffer shadowing and 
maximizing utilization of physical memory.

Kosenko Max wrote:
> Expenses in B-Tree not in the node splitting (that is really not that often
> and takes small amount of time). As I've said - it's in finding right place
> to insert.
>
> Root level which takes 1 page will do the same as your hash index. And will
> use much less space in cache. This root page in such DB will always be in
> cache. So you won't gain any advantage at all. And multi-threading also
> won't use the benefit of multiply tables. At least in SQLite.
>
> That method called partitioning. It gives advantages when partitions divided
> by some logic and there is a high chance to hit fewer partitions in average.
> It also can benefit a bit in case RDBMS supports real parallel execution and
> you have a lot of hard drives. That is not the case with SQLite (well you
> can compile without thread safety and try to do own locks).
>
> I have actually posted a real proposal to make DB much faster. That will
> work.
> Proposal with 100 tables as a hash buckets doesn't works and I've checked
> that a lot of time ago.
> You have a sample where it works and gives any visible benefit? I'd like to
> see that.
>
> My another addition to proposal is to use SSD with as small as possible
> average access time. Some of them can easily do 50-100x faster. And that
> will give 20-50x times faster inserts.
>
> Thank you.
> Max.
>
>
> John Stanton-3 wrote:
>   
>> This technique is used extensively in disk caching and in maintaining 
>> file directories with huge numbers of files.
>>
>> I would expect it to increase key insertion speed because it removes a 
>> level of index in the B-tree of each index.  The expensive activity in a 
>> B-tree index insertion is a node split, which requires that key 
>> information be updated in each internal node level and possibly a new 
>> level added.  Fewer levels mean faster performance.
>>
>> This method could also be used to add parallelism by having multiple 
>> threads or processes perform insertions concurrently.  Having each 
>> bucket in a separate database would help this approach.
>> It would also help with concurrent read accesses.
>>
>> If this application only has one table and does not need SQL then there 
>> are better solutions than using Sqlite and paying the price for its many 
>> features but not using them.
>> 
>
>   



Re: [sqlite] very large SQLite tables

2009-06-27 Thread John Stanton
This technique is used extensively in disk caching and in maintaining 
file directories with huge numbers of files.

I would expect it to increase key insertion speed because it removes a 
level of index in the B-tree of each index.  The expensive activity in a 
B-tree index insertion is a node split, which requires that key 
information be updated in each internal node level and possibly a new 
level added.  Fewer levels mean faster performance.

This method could also be used to add parallelism by having multiple 
threads or processes perform insertions concurrently.  Having each 
bucket in a separate database would help this approach.
It would also help with concurrent read accesses.

If this application only has one table and does not need SQL then there 
are better solutions than using Sqlite and paying the price for its many 
features but not using them.
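
Purely as an illustration of the bucketed layout under discussion, here is 
a sketch in C that hashes the key to pick one of N smaller tables.  The 
table names, bucket count and schema are assumptions, not from the 
original posts.

  #include <stdio.h>
  #include <sqlite3.h>

  #define NBUCKETS 64   /* illustrative bucket count */

  /* Assumes non-negative keys; each bucket is a table data_00 .. data_63. */
  static int bucket_of(sqlite3_int64 key) { return (int)(key % NBUCKETS); }

  int insert_row(sqlite3 *db, sqlite3_int64 key, const char *payload)
  {
      char sql[128];
      sqlite3_stmt *st;
      int rc;

      snprintf(sql, sizeof sql,
               "INSERT INTO data_%02d(key, payload) VALUES(?, ?)",
               bucket_of(key));
      rc = sqlite3_prepare_v2(db, sql, -1, &st, 0);
      if (rc != SQLITE_OK) return rc;
      sqlite3_bind_int64(st, 1, key);
      sqlite3_bind_text(st, 2, payload, -1, SQLITE_TRANSIENT);
      rc = sqlite3_step(st);
      sqlite3_finalize(st);
      return rc == SQLITE_DONE ? SQLITE_OK : rc;
  }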

Kosenko Max wrote:
> John Stanton-3 wrote:
>   
>> Why would it not work?  It is just adding an extra top level to the 
>> index.  A tried and true method.
>> 
>
> It will work. But won't give performance benefit. And from my undestanding
> it will even slow down things.
> You can place parts of index in different DB and on different HDD thinking
> it will boost the performance.
>
> But the problem is that we aren't talking about multiply selects at the same
> time... We are talking about updating index in sqlite which is
> single-threaded and even under load that wouldn't give you any advantages.
>
> Moreover you're emulating part of B-Tree with that approach and making it
> slower and more space consumptive. So should it work as a solution? No. 
>
> You have an idea why it should work? Tell me so.
>   



Re: [sqlite] very large SQLite tables

2009-06-26 Thread John Stanton
Why would it not work?  It is just adding an extra top level to the 
index.  A tried and true method.

Kosenko Max wrote:
> Have you ever tested such proposal?
> I believe that doesn't works.
>
>
> Doug Fajardo wrote:
>   
>> One approach might be to split the big, monolithic table into some number
>> of hash buckets, where each 'bucket' is separate table. When doing a
>> search, the program calculates the hash and accesses reads only the bucket
>> that is needed.
>>
>> This approach also has the potential for allowing multiple databases,
>> where tables would be spread across the different databases. The databases
>> could be spread across multiple drives to improve performance.
>>
>> 



Re: [sqlite] Tips to access SQLite with multiple clients?

2009-06-21 Thread John Stanton
Sqlite is NOT a server, it is a library which is linked into an 
application.  It is not a replacement for Oracle, DB2, PostgreSQL, 
etc., but is a replacement for local file reads and writes.

Sqlite shines embedded in applications like Firefox and various Adobe 
products, for example.

Gilles Ganault wrote:
> Hello
>
>   I know that SQLite is not meant to be used by more than one
> simultaneous client, but I don't know of another SQL server that is so
> light, easy to use, and available for Linux and Windows. If you know
> of such a beast, I'm interested.
>
> Otherwise... I'm willing to take the risk of using it to work with
> two-three users with moderate use. MySQL et al. are simply overkill
> for this type of use.
>
> What are the tips you would suggess to minimize the risks of sharing
> an SQLite database between a few clients?
>
> Thank you for any feedback.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Database inserts gradually slowing down

2009-06-18 Thread John Stanton
Jay A. Kreibich wrote:
> On Wed, Jun 17, 2009 at 11:52:45AM +1000, John Machin scratched on the wall:
>   
>> On 17/06/2009 6:17 AM, Hoover, Jeffrey wrote:
>>
>> 
>>> One other note, if you have a primary key whose value is continually
>>> increasing your pk index can become imbalanced and therefore
>>> inefficient.
>>>   
>> A B-tree becomes imbalanced? How so?
>>
>> http://www.sqlite.org/fileformat.html#btree_structures says: "The tree 
>> is always of uniform height, meaning the number of intermediate levels 
>> between each leaf node page and the root page is the same."
>>
>> Do you have any evidence to the contrary?
>> 
>
>   It won't become imbalanced, but if you're inserting rows with an
>   explicit INTEGER PRIMARY KEY value in a non-increasing order, the
>   tree will require sorting and re-balancing.  That takes time and
>   requires additional disk writes (and, as others have pointed out,
>   disk writes are VERY expensive due to their transactional nature).
>
>   Also, depending on just how mixed up the pattern is, you can get into
>   situations where a very large index will over-flow the default 1500
>   page cache-size.  It is well known that if you want to build an index
>   on a large table, increasing the cache size will help make that
>   process faster.  It might be true here as well.  Try setting the page
>   cache to something nice and huge, like 10x or 100x the default, and
>   see if that helps.
>
>-j
>
>   
You are correct that a split in a B-Tree is expensive, more so if there 
are many levels.  Also the B-Tree algorithm keeps the index balanced but 
it does not prevent fragmentation.  An addition to the B-Tree logic to 
perform node merges where possible will limit fragmentation and limit 
the creation of unnecessary levels.

B-Trees which are expected to grow substantially can achieve a speed 
increase by partially filling the nodes so that  insertions can occur 
without forcing a split.

My guess is that if you have a large number of Sqlite insertions you might 
find that dropping the index and re-creating it when the insertions are 
complete will be faster (a sketch follows below).

Note that the Sqlite rowids are organized as a B-Tree, but because they 
are consecutive numbers they make a well organized index.
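
A hedged sketch in C of that drop-and-rebuild pattern; the table and index 
names are invented for illustration.

  #include <sqlite3.h>

  int bulk_load(sqlite3 *db /* , ... row source ... */)
  {
      sqlite3_exec(db, "DROP INDEX IF EXISTS idx_events_key;", 0, 0, 0);
      sqlite3_exec(db, "BEGIN;", 0, 0, 0);

      /* ... many prepared INSERTs into events(...) go here ... */

      sqlite3_exec(db, "COMMIT;", 0, 0, 0);

      /* Rebuilding the index once over the finished table is usually
         cheaper than maintaining it during every insert. */
      return sqlite3_exec(db, "CREATE INDEX idx_events_key ON events(key);",
                          0, 0, 0);
  }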


Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread John Stanton
Sqlite is not the DB for your application.  You need a server like 
PostgreSQL or Oracle.

Robel Girma wrote:
> Hello,
>
> I am in need for a database to hold a couple of tables with max 10,000 rows
> each that I will update frequently and query frequently. 
>
> Example, 5000 users connect to our server every 10 seconds and each time
> they connect, I need to update a table with their IP and Last_connect_time. 
>
> Also, every 10 seconds or so, a couple of hundred users will query this
> table with simple select statements (Select ip from table1 where
> last_connect_time is greater than 20seconds).
>
>  
>
> I'm trying to choose the most efficient db for this application and my main
> criteria is response time. Will  SQLite do this more efficiently than SQL
> Server. I'm planning to allocate upto 1GB memory. 
>
> I appreciate any input we can give on this.
>
> Robel
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Direct row insert from C (without SQL) / Database speed

2009-06-09 Thread John Stanton
Use sqlite3_prepare_v2() and the sqlite3_bind family of calls.
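
A short sketch in C of what that looks like for the case described, storing 
1024 points as a single BLOB with no text conversion.  The table name and 
layout are assumptions.

  #include <sqlite3.h>

  int store_trace(sqlite3 *db, const double points[1024])
  {
      sqlite3_stmt *st;
      int rc = sqlite3_prepare_v2(db,
                   "INSERT INTO traces(data) VALUES(?1)", -1, &st, 0);
      if (rc != SQLITE_OK) return rc;

      /* Bind the raw array directly; SQLite stores the bytes as-is. */
      sqlite3_bind_blob(st, 1, points, (int)(1024 * sizeof(double)),
                        SQLITE_STATIC);
      rc = sqlite3_step(st);
      sqlite3_finalize(st);
      return rc == SQLITE_DONE ? SQLITE_OK : rc;
  }

Wrapping batches of such inserts in BEGIN/COMMIT is what produces the high 
insert rates mentioned in the FAQ.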

Mark Flipphi wrote:
> I'm new in using SQLite and have some questions before I start 
> implementing the SQL lite in a project.
>
> Is there a way to insert a row of data into a table without having to 
> convert the data to SQL ?
> I need to store 1024 point into a row, but converting it first to text 
> and then let the database store it as binairy is a bit to mutch 
> overhead
> Can we create a object with data and store that into the table ?
>
> Wat kind of speed can we expect of the database ?
> At some error situation we need to insert over 500.000 records a second. 
> I know this is a speed no database will handle, so we have
> a very large buffer in the appliaction, but i need to be sure we can 
> handle this in data in a reasonable time.
> I read some thing about 50.000 inserts a second (with ease) in the FAQ 
> with START and COMMIT. Any experience ?
>
> Is there a difference in speed between running the database on linux or 
> windows ?
> And what about stroing the database onto a Solid state disk or on a Raid 
> 6 array normal HD
>
> Best regards,
>
> Mark Flipphi
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] synchronizing sqlite local data base to a remote data base

2009-06-09 Thread John Stanton
I missed the mssql reference.  Disregard my suggestion to copy files.

You could log the SQL used to modify the DBs and pass it between 
them.  Updating Sqlite in real time would require a daemon.
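
One possible way to capture that SQL is sqlite3_trace(), which reports each 
statement as it is executed; the log file name here is illustrative.

  #include <stdio.h>
  #include <sqlite3.h>

  /* Appends every executed statement to a log.  A real replicator would
     filter out SELECTs and ship only the modifying statements. */
  static void log_sql(void *arg, const char *sql)
  {
      FILE *log = (FILE *)arg;
      fprintf(log, "%s;\n", sql);
      fflush(log);
  }

  void start_sql_log(sqlite3 *db)
  {
      FILE *log = fopen("replication.sql", "a");
      if (log) sqlite3_trace(db, log_sql, log);
  }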

Simon Slavin wrote:
> On 8 Jun 2009, at 7:30pm, Mohey Eldin Hamdy wrote:
>
>   
>> I am trying to synchronize an sqlite local data base with a remote  
>> mssql
>> data base. I wasn't able to find any listed function at
>> http://www.sqlite.org/c3ref/funclist.html to do something like that.  
>> Any
>> ideas please.
>> 
>
>
> The sqlite3 library would not have functions to access MSSQL  
> databases.  But that's the least of your problems.
>
> Synchronising two copies of a database is a whole twisty basket of  
> fish.  Very difficult to program and complicated to keep running.   
> Consider what happens if one user deletes a record from their copy of  
> the database and then the other user updates the record in theirs.   
> Synchronisation would have to be done at the application level anyway,  
> not in the library.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] synchronizing sqlite local data base to a remote data base

2009-06-09 Thread John Stanton
An Sqlite database is just a file.  A copy will sync it, and so will a 
tool like rsync.

Real time synchronization requires that you implement some form of daemon.

Mohey Eldin Hamdy wrote:
> Hey got a question,
>
> I am trying to synchronize an sqlite local data base with a remote mssql
> data base. I wasn't able to find any listed function at
> http://www.sqlite.org/c3ref/funclist.html to do something like that. Any
> ideas please.
>
> Thanks,
> Mohy
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] Slow Transaction Speed?

2009-06-09 Thread John Stanton
We are happy with the results using JFS on AIX and SUSE Linux.  Also no 
concerns with EXT3 on various Linuxes.  We have learned to avoid Windows.

Christian Smith wrote:
> On Wed, May 27, 2009 at 08:05:00AM -0400, pyt...@bdurham.com wrote:
>   
>>
>> In your experience, which Linux file system(s) provide the high
>> performance platform for SQLite?
>>
>> 

> I can't speak for DRH, but I have found that ext3 with the option of
> "data=journal" gives a massive improvement speed wise than default ext3
> options, mainly because the journal is contiguous and ext3 can avoid seeks
> while still ensuring data is written safely to disk. This is a big win for
> rotating platter disks.
>
> I did an informal benchmark of various filesystem types on
> Linux (note this is 2 1/2 years ago) as part of an OSNews thread here:
> http://www.osnews.com/permalink?184137
>
> I'd be interested in how ext4 compares to ext3. Perhaps an evening project
> beckons.
>
>



Re: [sqlite] Importing text file of numbers into INTEGER PRIMARY KEY aka rowid

2009-06-06 Thread John Stanton
It looks as if one of your test fields is not formatted as a valid 
integer or is perhaps a duplicate.

Try initially loading your data into a table whose column is declared as a 
plain numeric type and is not the primary key.  If the load succeeds, use 
that table to search for duplicates and non-integer values.


Kelly Jones wrote:
> I have a text file onenum.txt with just "1234\n" in it, and a db w/
> this schema:
>
> sqlite> .schema
> CREATE TABLE test (foo INTEGER PRIMARY KEY);
>
> When I import, it fails as follows:
>
> sqlite> .import onenum.txt test
> Error: datatype mismatch
>
> Is sqlite3 treating "1234" as a string or something? Short of doing
> "INSERT INTO test VALUES (1234);", how do I import numbers into
> sqlite3's rowid column? [1]
>
> [1] Since foo is INTEGER PRIMARY KEY, it's just an alias for rowid.
>
>   



Re: [sqlite] sqlite programmed in C++ ...

2009-06-04 Thread John Stanton
To make technical decisions using emotion rather than logic is not a 
good approach.  One should choose the best methodology for the 
particular project, not be self indulgent and choose the one you like.

Recently we wrote an embedded application in C, permitting us to use the 
available C compiler, but organized it very like a C++ program with 
classes etc.  The benefit was that the code was more attuned to being 
maintained and modified by programmers with a low skill level without 
risking catastrophic side effects  from modifications.

Sylvain Pointeau wrote:
> Hi,
> I also like C very much even if I prefer C++
> Please note that I was asking to introduce C++ not to re-write (which would
> be a total non-sense)
>
> I just repeat that I use sqlite every days, and I find it awesome, I love
> it.
> C or C++? at the end : who cares?
>
> I strongly believe that the persons are essential, more than the programming
> language
>
> Cheers,
> Sylvain
>
> On Wed, Jun 3, 2009 at 11:53 PM, Rob Sciuk <r...@controlq.com> wrote:
>
>   
>> John Stanton wrote:
>> 
>>> This is something of a digression but is pertinent.  Colleagues who
>>> worked with Bjarne Stroustrup (inventor of C++) tell me that Bjarne was
>>> disillusioned with C++ and its wide deployment and would encourage
>>> people not to use it unless there were clear advantages.
>>>
>>> In our own company we came to the same conclusion as Dr Hipp and used
>>> ANSI C for our compilers and database software.  C can be anything you
>>> want it to be.  For example you can ensure portability by incorporating
>>> your own  memory management system and tightly manage your use of
>>> foreign libraries. for quality assurance  You have access to highly
>>> optimizing compilers which can produce executables as good as those
>>> written by a skilled Assembler programmer.
>>>   
>> Good points.
>>
>> IIRC, Firebird, once a C based database system (Interbase by Borland), was
>> re-written in C++ by a team of people who simply "liked" C++.  The change
>> happened between version 1.x and 2.x I think.  The upshot is, there was a
>> fork in the project, but the 1.x code lives on.  It seemed to me a lot of
>> effort and I'm not sure what gains they are claiming, because I lost all
>> interest in Firebird when I realized that the project team were about to
>> pour a whole bunch of resources into re-writing it for its own sake, when
>> better returns could have been made on upgrading the existing code.
>>
>> Personally, I like SQLite (and ANS C for that matter) just fine the way
>> they
>> are.  Perhaps Sylvain would prefer to have a look at the Firebird 2.x
>> project
>> for a C++ based DBMS should the implementation language be an issue.
>>
>> Cheers,
>> Rob Sciuk
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   



Re: [sqlite] sqlite programmed in C++

2009-06-03 Thread John Stanton
Indeed. Very good reply.
> To Sylvain, once again: speculating on what went into the minds of the
> developers, when they set out to develop SQLite, they chose the best,
> most concise, most portable, most universally compilable, mother of
> almost all languages. Once they developed something that was free,
> fast and cheap, there was no reason to change. Case closed.
>
> If you thing C++ can do a better job at doing what SQLite does on all
> the variety of platforms that it runs on flawlessly, well, the source
> code is available in public domain -- go ahead and create SQLite++ by
> transcribing each function into the language of your choice.
>
> May the better plan win.
>
>   
This is something of a digression but is pertinent.  Colleagues who 
worked with Bjarne Stroustrup (inventor of C++) tell me that Bjarne was 
disillusioned with C++ and its wide deployment and would encourage 
people not to use it unless there were clear advantages.

In our own company we came to the same conclusion as Dr Hipp and used 
ANSI C for our compilers and database software.  C can be anything you 
want it to be.  For example you can ensure portability by incorporating 
your own memory management system and tightly manage your use of 
foreign libraries for quality assurance.  You have access to highly 
optimizing compilers which can produce executables as good as those 
written by a skilled Assembler programmer.


Re: [sqlite] Db design question (so. like a tree)

2009-06-02 Thread John Stanton
Something to investigate is to use an AVL tree structure with rowids as 
the pointers.  It would stay balanced and you could present family trees 
quite simply as well as use SQL to extract data on individuals and sets 
of individuals.
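
One concrete way to keep rowids as the pointers is the two-parent adjacency 
list Jay describes in the quoted message below.  A brief sketch in C; the 
schema and names are assumptions for illustration only.

  #include <stdio.h>
  #include <sqlite3.h>

  /* Creates the table (sire and dam are rowid pointers back into the same
     table) and prints the two parents of one animal. */
  void setup_and_query(sqlite3 *db, sqlite3_int64 animal_id)
  {
      sqlite3_stmt *st;

      sqlite3_exec(db,
          "CREATE TABLE IF NOT EXISTS animal("
          "  id INTEGER PRIMARY KEY,"
          "  name TEXT,"
          "  sire INTEGER REFERENCES animal(id),"   /* father */
          "  dam  INTEGER REFERENCES animal(id));", /* mother */
          0, 0, 0);

      if (sqlite3_prepare_v2(db,
              "SELECT p.name FROM animal a "
              "JOIN animal p ON p.id IN (a.sire, a.dam) WHERE a.id = ?1",
              -1, &st, 0) != SQLITE_OK) return;
      sqlite3_bind_int64(st, 1, animal_id);
      while (sqlite3_step(st) == SQLITE_ROW)
          printf("parent: %s\n", (const char *)sqlite3_column_text(st, 0));
      sqlite3_finalize(st);
  }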

Jay A. Kreibich wrote:
> On Tue, Jun 02, 2009 at 11:16:20PM +0200, Jan scratched on the wall:
>   
>> Hi,
>>
>> I am planning a database for animal breeding. I need to store the 
>> relations between individuals and therefore I have to build something 
>> like a tree structure. But of course with two parents (There wont be 
>> cloned animals in the database .-) afaik)
>> 
>
>
>
>   
>> - adjacency list (not very difficult to understand)
>> 
>
>   Also easy to work with two parents, just have a "father" column and a
>   "mother" column.
>
>   Adjacency lists are quick to update, but many queries can't be done
>   in (standard) SQL by itself.  That usually isn't a problem, and it
>   doesn't sound like you'll have millions of records.
>
>   
>> - nested sets (hm, more difficult)
>> 
>
>   Nested sets only work with one parent.  They're also extremely
>   expensive to update, although you can do some interesting tricks with
>   queries.
>
>   
>> - b tree (to difficult)
>> 
>
>   B-trees are more of a way to sort and access large amount of linear
>   data.  They're not really designed to represent data that is
>   inherently tree like.
>
>   
>> - ? (something I missed?)
>> 
>
>   You might want to see if you can find a copy of one of Joe Celko's
>   books... either "SQL for Smarties" (which has a chapter on trees) or
>   the book "Joe Celko's Trees and Hierarchies in SQL for Smarties"
>   which is all about trees.
>
>   That said, it sounds like you'd be well serviced by a two parent
>   adjacency list and a bit of custom programming/scripting.
>
>-j
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite programmed in C++

2009-06-02 Thread John Stanton
A good tool tries to be "better", not "easier".  Libraries compiled 
from C are compatible with C++.  In general, optimization is better with 
C compilers and the support libraries are more compact and efficient.  
Producing Sqlite in well written and documented ANSI C was a very sound 
decision.

If you want to incorporate Sqlite in a C++ environment use C++ wrappers 
for the API calls.

Sylvain Pointeau wrote:
> ... because I experienced C++ to be easier with the classes and resource
> management via the destructor.I was just wondering why C++ is not used?
>
> was it for a performance issue?
> or a compiler issue?
> or anything else?
>
> I just read the Linus Torvalds comment on the C++ for Git
> What do you think?
>
> Cheers,
> Sylvain
>
> On Tue, Jun 2, 2009 at 6:16 PM, John Stanton <jo...@viacognis.com> wrote:
>
>   
>> Why would you want to do such a thing?
>>
>> Sylvain Pointeau wrote:
>> 
>>> Hello,
>>> I would like to know if someone has already thought about introducing C++ in
>>> SQLite?
>>> I just think about a minimal subset of C++ that will not make any
>>> performance penalty
>>> (like C with classes)
>>>
>>> is it a performance issue?
>>> is it a deployment/compiler issue?
>>> or any issue?
>>>
>>> Please don't make any aggressive reply, I am a very nice guy :-)
>>>
>>> Cheers,
>>> Sylvain
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>   
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite programmed in C++

2009-06-02 Thread John Stanton
Why would you want to do such a thing?

Sylvain Pointeau wrote:
> Hello,
> I would like to know if someone has already thought about introducing C++ in
> SQLite?
> I just think about a minimal subset of C++ that will not make any
> performance penalty
> (like C with classes)
>
> is it a performance issue?
> is it a deployment/compiler issue?
> or any issue?
>
> Please don't make any aggressive reply, I am a very nice guy :-)
>
> Cheers,
> Sylvain
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Populating dyadic dataset

2009-06-01 Thread John Stanton
You might consider using a composite key  "1963 CAN ABB" when you design 
your data structure.
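
For illustration, a rough sketch of the nested-loop fill Igor describes below, keeping the columns separate as in the original post (rather than the composite key suggested above) and numbering each ordered pair; all names are invented:

/* Rough sketch of the nested-loop fill, with a running pair id.
   Table and column names are invented. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    const char *cc[]   = { "ALB", "CAN", "DZA" };
    const char *year[] = { "1961", "1962", "1963" };
    int ncc = 3, nyear = 3, i, j, k, pair_id = 0;
    sqlite3 *db;
    sqlite3_stmt *ins;

    if (sqlite3_open("dyads.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS dyad"
        " (id INTEGER, var1 TEXT, var2 TEXT, var3 TEXT);"
        "BEGIN;", 0, 0, 0);
    sqlite3_prepare_v2(db,
        "INSERT INTO dyad(id, var1, var2, var3) VALUES(?,?,?,?);", -1, &ins, 0);

    for (i = 0; i < ncc; i++)
        for (j = 0; j < ncc; j++) {
            if (i == j) continue;            /* drop rows where Var1 == Var2 */
            pair_id++;                       /* one id per ordered pair */
            for (k = 0; k < nyear; k++) {
                sqlite3_bind_int (ins, 1, pair_id);
                sqlite3_bind_text(ins, 2, cc[i], -1, SQLITE_STATIC);
                sqlite3_bind_text(ins, 3, cc[j], -1, SQLITE_STATIC);
                sqlite3_bind_text(ins, 4, year[k], -1, SQLITE_STATIC);
                sqlite3_step(ins);
                sqlite3_reset(ins);
            }
        }

    sqlite3_finalize(ins);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    sqlite3_close(db);
    return 0;
}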

Igor Tandetnik wrote:
> Vincent Arel 
> wrote:
>   
>> I'm very, very new to SQLite, and would appreciate any help I can get.
>>
>> I have 3 long vectors that look like this:
>> {"ALB","CAN", "DZA",...}
>> {"ALB","CAN", "DZA",...}
>> {"1961","1962", "1963",...}
>>
>> And I want to create a table that looks like this:
>>
>> IDVar1Var2Var3
>> 1ALBCAN1961
>> 1ALBCAN1962
>> 1ALBCAN1963
>> 2ALBDZA1961
>> 2ALBDZA1962
>> 2ALBDZA1963
>> 3CANALB1961
>> 3CANALB1962
>> 3CANALB1963
>> 4CANDZA1961
>> 4CANDZA1962
>> 4CANDZA1963
>> 5DZAALB1961
>> 5DZAALB1962
>> 5DZAALB1963
>> 6DZACAN1961
>> 6DZACAN1962
>> 6DZACAN1963
>>
>> In short, I need to include every possible pair of Var1/Var2 values
>> (where Var1/Var2 != Var2/Var1. I want to keep permutations.). For
>> each of these pairs, I need to create separate rows for each
>> different value of Var3. I also need to drop rows where Var1 == Var2.
>> Finally, I would like to generate a unique ID number for each
>> Var1/Var2 pair.
>> 
>
> What exactly do you mean when you say you "have vectors"? Are you 
> writing a program in some language that has a notion of a vector? In 
> this case, I'm pretty sure this language also supports loops iterating 
> over those vectors, as well as incrementing integers.
>
> The easiest solution would be a set of nested loops iterating over those 
> vectors. Inside the most nested loop, you would have all the data 
> necessary to generate one record in the database, at which point you can 
> run a regular vanilla INSERT statement using SQLite API.
>
> Igor Tandetnik 
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-06-01 Thread John Stanton
Simon Slavin wrote:
> On 31 May 2009, at 11:56pm, John Stanton wrote:
>
>   
>> Try studying basic database theory and technology to get a better
>> understanding of the problem.
>> 
>
> I have a pretty good understanding, I think.  Plus 25 years experience.
>   
Is it 25 years experience or 1 year 25 times?  This forum is to share 
information on Sqlite, for mutual advantage, not to get into pointless 
arguments.  Let us keep it that way.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread John Stanton
Try studying basic database theory and technology to get a better 
understanding of the problem.  You will then understand the reason for 
absolute transactional integrity and why Sqlite must use fsync or 
similar and expects fsync to be a complete implementation which ensures 
that each transaction is permanently stored once a COMMIT has 
succeeded.  That ties the transaction rate to the physical rotation rate 
of the disk.

If you want Sqlite to be ACID your design needs to be centered around 
the disk you use since it is the absolute bottleneck.  A 15,000 rpm disk 
will give you more than a 4,500 rpm one.  Writing non-critical data to 
regular files or a non-ACID database will also up the throughput.
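
As a rough back-of-the-envelope for that rotation-rate ceiling (real drives, controllers and write caches vary):

/* Back-of-the-envelope only: one durable COMMIT needs at least one disk
   revolution, so revolutions per second bound the fsync'd transaction rate. */
#include <stdio.h>

int main(void)
{
    int rpm[] = { 4500, 7200, 15000 };
    int i;

    for (i = 0; i < 3; i++)
        printf("%5d rpm: at most about %d committed transactions/sec\n",
               rpm[i], rpm[i] / 60);
    return 0;
}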

 From the early days of IT transaction processors have been used to 
ensure data integrity.   An example is the development of CICS by IBM 
about 40 years or more ago.  It made it possible to run important 
applications on computers without requiring some form of manual system 
to enforce integrity.  Sqlite gives that capability to small distributed 
and embedded applications.
> My impression is that COMMIT/ROLLBACK is for data integrity: don't  
> update the account balance if the transaction didn't work.  That sort  
> of stuff.  Not to deal with hardware failure.
>
> I think that these days your computer can never know when things have  
> reached permanent storage.  Permanent storage is too far away, behind  
> too many levels of caching for the CPU to ever know about it.  Your  
> CPU may be virtual.  Your storage medium may be virtual: a disk image,  
> a RAID controller, emulated hardware, or something like that.  If you  
> talk to a hard disk by IDE it will be using onboard caching and not  
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from  
> CPU to physical hard disk, you're never going to get any real idea of  
> what the hardware's doing.  And if you did that it would slow things  
> down too much anyway.  Cached writes do a really good job of speeding  
> things up.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deleting database content : SQLite3 API

2009-05-30 Thread John Stanton
What would be the point of a function which just performs a system call 
like unlink?  Bloat?

souvik.da...@wipro.com wrote:
> Yes , I understand that. Infact I was doing that through a script during 
> system startup. I wanted to know whether SQLite provides any API to do the 
> same.  
>
> Thanks and Regards,
> Souvik
> -Original Message-
> From: sqlite-users-boun...@sqlite.org on behalf of John Stanton
> Sent: Sat 5/30/2009 5:30 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Deleting database content : SQLite3 API
>  
> An Sqlite database is just a file.  Delete the file when you start 
> your program, and when you open the database Sqlite will create a fresh 
> one, a very low overhead process.
> souvik.da...@wipro.com wrote:
>   
>> Hello,
>>  
>> Please let me know if we have an API in SQLite3 which allows me to
>> retain the database but delete it's content at runtime. The problem I am
>> facing is that : Every time I restart my system , I need to create the
>> database. If the database exits already it's contents need to be
>> deleted. The issue is that the contents of the database varies during
>> one power ON - Power OFF cycle . As a result , after finding that the
>> database already exits at the system startup, I cannot just drop the
>> tables. ( As the table which are present in the existing data base is
>> not known. )
>>  
>> I am using sqlite3wrapped C++ library APIs.
>>  
>> Thanks and Regards,
>> Souvik
>>  
>>
>> Please do not print this email unless it is absolutely necessary. 
>>
>> The information contained in this electronic message and any attachments to 
>> this message are intended for the exclusive use of the addressee(s) and may 
>> contain proprietary, confidential or privileged information. If you are not 
>> the intended recipient, you should not disseminate, distribute or copy this 
>> e-mail. Please notify the sender immediately and destroy all copies of this 
>> message and any attachments. 
>>
>> WARNING: Computer viruses can be transmitted via email. The recipient should 
>> check this email and any attachments for the presence of viruses. The 
>> company accepts no liability for any damage caused by any virus transmitted 
>> by this email. 
>>
>> www.wipro.com
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>   
>> 
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> Please do not print this email unless it is absolutely necessary. 
>
> The information contained in this electronic message and any attachments to 
> this message are intended for the exclusive use of the addressee(s) and may 
> contain proprietary, confidential or privileged information. If you are not 
> the intended recipient, you should not disseminate, distribute or copy this 
> e-mail. Please notify the sender immediately and destroy all copies of this 
> message and any attachments. 
>
> WARNING: Computer viruses can be transmitted via email. The recipient should 
> check this email and any attachments for the presence of viruses. The company 
> accepts no liability for any damage caused by any virus transmitted by this 
> email. 
>
> www.wipro.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deleting database content : SQLite3 API

2009-05-30 Thread John Stanton
An Sqlite database is just a file.  Delete the file when you start 
your program, and when you open the database Sqlite will create a fresh 
one, a very low overhead process.
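
In code, the whole approach is only a few lines; unlink() is POSIX, so substitute remove() or DeleteFile() elsewhere, and the file names are made up:

/* Sketch of "delete the file, let SQLite recreate it".  The -journal file
   is removed too in case a previous run was interrupted. */
#include <stdio.h>
#include <unistd.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;

    unlink("runtime.db");            /* ignore failure: the file may not exist yet */
    unlink("runtime.db-journal");    /* drop any leftover rollback journal as well */

    if (sqlite3_open("runtime.db", &db) != SQLITE_OK) return 1;  /* fresh, empty database */
    sqlite3_exec(db, "CREATE TABLE state(k TEXT PRIMARY KEY, v TEXT);", 0, 0, 0);

    /* ... normal work for this power-on cycle ... */

    sqlite3_close(db);
    return 0;
}
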
souvik.da...@wipro.com wrote:
> Hello,
>  
> Please let me know if we have an API in SQLite3 which allows me to
> retain the database but delete it's content at runtime. The problem I am
> facing is that : Every time I restart my system , I need to create the
> database. If the database exits already it's contents need to be
> deleted. The issue is that the contents of the database varies during
> one power ON - Power OFF cycle . As a result , after finding that the
> database already exits at the system startup, I cannot just drop the
> tables. ( As the table which are present in the existing data base is
> not known. )
>  
> I am using sqlite3wrapped C++ library APIs.
>  
> Thanks and Regards,
> Souvik
>  
>
> Please do not print this email unless it is absolutely necessary. 
>
> The information contained in this electronic message and any attachments to 
> this message are intended for the exclusive use of the addressee(s) and may 
> contain proprietary, confidential or privileged information. If you are not 
> the intended recipient, you should not disseminate, distribute or copy this 
> e-mail. Please notify the sender immediately and destroy all copies of this 
> message and any attachments. 
>
> WARNING: Computer viruses can be transmitted via email. The recipient should 
> check this email and any attachments for the presence of viruses. The company 
> accepts no liability for any damage caused by any virus transmitted by this 
> email. 
>
> www.wipro.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request

2009-05-23 Thread John Stanton
Looks like feature bloat.  If you need such features for a particular 
application just add the code to your application or create a modified 
version of Sqlite for just that application.

It makes no sense to burden all users with overhead for a feature 
required by only a handful of applications.

Simon Slavin wrote:
> On 21 May 2009, at 5:59am, Simon Slavin wrote:
>
>   
>> int sqlite3_change_count(sqlite3*)
>>
>> Returns an integer which is incremented whenn a change is made to any
>> table in the database.  May be the value stored in bytes 24..27 of
>> file header or something else if that's not what I really want.
>> 
>
> Having thought about it some more, I want two different numbers.  One  
> of them changes when a field changes: INSERT/DELETE/UPDATE.  The other  
> changes when the schema changes: ALTER/DROP/CREATE.  Rather than crowd  
> the function library with multiple functions, it might be possible to  
> implement it like so:
>
> int sqlite3_change_count(sqlite3*, int cTypes)
>
> when cTypes = 1, you get the number of changes to the schema
> when cTypes = 2, you get the number of changes to fields
> when cTypes = 3, you get the sum of the above two figures
>
> Or possibly 1 and 2 should be the other way around.
>
> Any ideas, criticism, or reasons it can't be done ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Re-try logic on SQLITE_BUSY/deadlocked?

2009-05-23 Thread John Stanton
P Kishor wrote:
> On Sat, May 23, 2009 at 6:34 AM, Rosemary Alles  
> wrote:
>   
>> Thanks Simon. I have been leaning that way too - considering switching.
>>
>> -rosemary.
>>
>> On May 22, 2009, at 5:55 PM, Simon Slavin wrote:
>>
>> 
>>> On 23 May 2009, at 12:10am, Rosemary Alles wrote:
>>>
>>>   
 Multiple machines with multiple cpus. [snip]
 
 The total size of
 current DB is up to 70mb.
 
>>> I suspect you'd be better off with MySQL.  (Am I allowed to say that
>>> here ?)  See the last page of
>>>
>>> 
>>>
>>> MySQL runs as a service which can be connected to over the internet.
>>> It runs all the time, whether anything is talking to it or not.
>>> Everything that wants to change the database does it by talking to the
>>> same server.  Consequently, the server can do its own change-caching,
>>> keep indices in memory, and do the many other things that can be done
>>> when you don't have to worry about other people accessing the files on
>>> disk.  And it's designed to cope well with access from many clients
>>> concurrently: the server doesn't need the client to do busy/waiting,
>>> it just gives you the most up-to-date answers it has.
>>>
>>> At work, where I can run servers and need 24/7 uptime and concurrent
>>> access from multiple clients I use MySQL.  At home where I want tiny/
>>> fast/simple/embeddable/non-server I use SQLite.
>>>
>>> Fortunately, it's relatively easy to export from sqlite3 and import
>>> into MySQL, or vice versa by exporting the database as a set of SQL
>>> commands (.dump in sqlite3) and making minor adjustments.  And the
>>> basic installation of MySQL (all you need) is free.
>>>
>>> I'm sorry if discussion of MySQL is forbidden here, but it sounds like
>>> the right solution for this poster.
>>>   
>
> Suggesting a better alternative is definitely a very good advice, and
> should be evaluated per one's needs. My advice would be to consider
> Postgres instead of MySQL as an alternative. Pg is generally
> considered a better database than MySQL, but subjective criteria
> aside, Pg is also licensed with a better, more flexible licensing
> terms, and since Pg was the inspiration for SQLite, you are likely to
> find more compatibilities between the two.
>
>   
PostgreSQL is a more complete DB implementation  than Mysql, and is very 
robust.  We have used it for many years with no catastrophes.  It also 
works nicely as a network companion to Sqlite.
>   
>>> Simon.
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>   
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] most efficient way to get 1st row

2009-05-21 Thread John Stanton
My error.  I was replying to the wrong email.

Shane Harrelson wrote:
> We're not talking about transaction control - we're talking about limiting
> the size of the result set.   And LIMIT/OFFSET clauses are not supported by
> the SQL-92 standard.   LIMIT is listed as a reserved word, but that's it.
>
> http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
>
>
>
>
> On Wed, May 20, 2009 at 2:47 PM, John Stanton <jo...@viacognis.com> wrote:
>
>   
>> What  makes you think that transaction control is not part of SQL-92?
>>
>> If a database engine does not support transaction control it is not a
>> full implementation.   Transaction control is at the very heart of
>> effective data management.
>>
>> Shane Harrelson wrote:
>> 
>>> Additionally, it's important to note that the LIMIT/OFFSET clause is not
>>> standard SQL, and although it is supported by SQLite, and many other SQL
>>> engines, there are some that do NOT support it, most notably Microsoft
>>>   
>> SQL
>> 
>>> Server.
>>>
>>> HTH.
>>> -Shane
>>>
>>>
>>> On Tue, May 19, 2009 at 2:23 PM, Sam Carleton <scarle...@gmail.com>
>>>   
>> wrote:
>> 
>>>   
>>>> Kees Nuyt wrote:
>>>>
>>>>
>>>> 
>>>>> Imagine a SELECT with an ORDER BY which makes SQLite sort
>>>>> the resultset before it can return the first row in the
>>>>> resultset. Need I say more?
>>>>>
>>>>> http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>>>>>
>>>>>
>>>>>
>>>>>   
>>>> Thank you for the link, it is a VERY useful read, VERY useful!
>>>>
>>>> Sam
>>>>
>>>> ___
>>>> sqlite-users mailing list
>>>> sqlite-users@sqlite.org
>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>>
>>>>
>>>>
>>>> 
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>   
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] most efficient way to get 1st row

2009-05-21 Thread John Stanton
Igor Tandetnik wrote:
> John Stanton <jo...@viacognis.com> wrote:
>   
>> Shane Harrelson wrote:
>> 
>>> Additionally, it's important to note that the LIMIT/OFFSET clause is
>>> not standard SQL
>>>   
>> What  makes you think that transaction control is not part of SQL-92?
>> 
>
> What does LIMIT clause have to do with transaction control, in your 
> opinion?
>   
Brain damage.  My last brain transplant came from a career politician.
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] most efficient way to get 1st row

2009-05-20 Thread John Stanton
What  makes you think that transaction control is not part of SQL-92?

If a database engine does not support transaction control it is not a 
full implementation.   Transaction control is at the very heart of 
effective data management.

Shane Harrelson wrote:
> Additionally, it's important to note that the LIMIT/OFFSET clause is not
> standard SQL, and although it is supported by SQLite, and many other SQL
> engines, there are some that do NOT support it, most notably Microsoft SQL
> Server.
>
> HTH.
> -Shane
>
>
> On Tue, May 19, 2009 at 2:23 PM, Sam Carleton  wrote:
>
>   
>> Kees Nuyt wrote:
>>
>> 
>>> Imagine a SELECT with an ORDER BY which makes SQLite sort
>>> the resultset before it can return the first row in the
>>> resultset. Need I say more?
>>>
>>> http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
>>>
>>>
>>>   
>> Thank you for the link, it is a VERY useful read, VERY useful!
>>
>> Sam
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite as a FIFO buffer?

2009-05-18 Thread John Stanton
You should be able to implement a classic circular buffer in SQL and 
make it a VIEW for easy access.  ROWIDs can be the buffer pointers and a 
second table can store the current values.
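
As a starting point, here is a simpler cousin of that circular-buffer idea: a plain rowid-ordered queue, which already behaves as a FIFO and leaves the ACID/concurrency part to SQLite's locking; all names are invented:

/* Producers INSERT; a consumer claims and removes the oldest row inside
   one transaction.  Sketch only, error checking trimmed. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *oldest;
    sqlite3_int64 id = -1;
    char del[64];

    if (sqlite3_open("queue.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS work_queue("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"   /* ever-increasing, so FIFO order */
        " payload TEXT NOT NULL);"
        "INSERT INTO work_queue(payload) VALUES('job A');"
        "INSERT INTO work_queue(payload) VALUES('job B');",
        0, 0, 0);

    /* consumer: claim and remove the oldest unit of work atomically */
    sqlite3_exec(db, "BEGIN IMMEDIATE;", 0, 0, 0);
    sqlite3_prepare_v2(db,
        "SELECT id, payload FROM work_queue ORDER BY id LIMIT 1;", -1, &oldest, 0);
    if (sqlite3_step(oldest) == SQLITE_ROW) {
        id = sqlite3_column_int64(oldest, 0);
        printf("working on: %s\n", (const char *)sqlite3_column_text(oldest, 1));
    }
    sqlite3_finalize(oldest);                    /* finish reading before deleting */
    if (id >= 0) {
        sqlite3_snprintf((int)sizeof del, del,
                         "DELETE FROM work_queue WHERE id=%lld;", id);
        sqlite3_exec(db, del, 0, 0, 0);
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}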

Allen Fowler wrote:
> Hello,
>
> I have several CGI and cron scripts and that I would like coordinate via a 
> "First In
> / First Out" style buffer.That is, some processes are adding work
> units, and some take the oldest and start work on them.
>
> Could SQLite be used for this?  
>
> It would seem very complex to use SQL for just a FIFO, but then again, SQLite 
> would take acre of all ACID / concurrency issues.
>
> Has this been done before?
>
> Thanks,
> :)
>
>
>
>   
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] About Time Field

2009-05-15 Thread John Stanton
Sqlite does not have a TIME type.  It is interpreting your time as an 
integer or floating point number or maybe text, depending upon its format.
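
The usual workaround is to declare the column TEXT and insert the value as a quoted string, so '081220' is stored verbatim and keeps its leading zero; a tiny sketch:

/* With a TEXT column, the quoted value stays a string instead of being
   converted to the integer 81220. */
#include <stdio.h>
#include <sqlite3.h>

static int show(void *unused, int ncol, char **val, char **col)
{
    printf("%s\n", val[0]);        /* prints 081220, leading zero intact */
    return 0;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db,
        "CREATE TABLE t(ts TEXT);"
        "INSERT INTO t VALUES('081220');"
        "SELECT ts FROM t;",
        show, 0, 0);
    sqlite3_close(db);
    return 0;
}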

Hughman wrote:
> Hi,
>
> I create a table with a field of datatype Time, and when I insert a
> formatted string like 'HHMMSS' into it, the first digit 0 is always
> trimmed. I want to keep it; how should I do that?
> For example, '081220' will be converted into '81220'.
>
>
> 
>Say goodbye to romance...
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Prepared statements must be generated inside your transaction

2009-05-12 Thread John Stanton
Joanne Pham wrote:
> Hi,
> Thanks for information!
> Regarding "batch" insert why we need to put the column binding 
> (sqlite3_bind...) before running sqlite3_step. For example:
>  sqlite_prepare_v2 
>  begin transaction
> loop thru all the changes
> sqlite3_bind 
> sqlite3_step.
> end loop
> end transaction
>
> For other database like Microsoft Sql server I only bind the column once(bind 
> statement outside the loop to the data structure) in the loop I don't need to 
> bind column again but just copy the new inserted row to the data structure 
> that already binded outside of the loop. In this case we don't need to bind 
> the columns in the loop. Why this way didn't work for SQLite3 database.
> Thanks,
> JP
>
>
>
>
> 
> From: John Stanton <jo...@viacognis.com>
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Sent: Tuesday, May 12, 2009 12:09:09 PM
> Subject: Re: [sqlite] Prepared statements must be generated inside your 
> transaction
>
> The confusion is in the names.  When you "PREPARE" a statement you 
> actually compile the SQL.  Compiling a program each time you run it is a 
> waste of resources, and so is compiling the SQL each time you use it.
>
> Prepare your statements once and use them many times, binding data to 
> the compiled code at execution time.
>
> The design of Sqlite is such that it is possible to store compiled SQL 
> permanently and use it when you run your application.  I believe there 
> is, or was a version of Sqlite tailored for embedded use which does just 
> that.
>
> In our Sqlite programs we like to prepare all SQL in an initialization 
> phase and have two wins.  First we get faster execution and secondly we 
> detect database errors or mismatches before entering the main functions 
> of the program and avoid having to backtrack in error recovery.
>
> Pavel Ivanov wrote:
>   
>> I believe, these matters are very specific for each database server
>> (though I can't recall server for which it's true what you say). What
>> specific server is talked about in this book? What's the name of this
>> book?
>>
>> As long as SQLite is a concern, I prepare statements outside of
>> transaction and then use them across different transactions without
>> any problems but with huge performance improvement compared to when
>> I've prepared statements before each transaction.
>>
>> Pavel
>>
>> On Tue, May 12, 2009 at 12:32 PM, Joanne Pham <joannekp...@yahoo.com> wrote:
>>   
>> 
>>> Hi All,
>>> I have read one of the performance document and it stated that "prepared 
>>> statements must be generated inside transaction". Is that correct.
>>> The document also stated that " While trying to improve the performance of 
>>> bulk imports in our C++ project, we found that creating the prepared 
>>> statements was a large hit. Creating them once at the
>>> construction of our class, though, made the problem worse! It turns 
>>> out that prepared statements that are generated before the transaction 
>>> start do not work with the transaction. The fix was simply to
>>> create new prepared statements once per transaction."
>>>
>>> So I have to do this:
>>> begin transaction
>>> prepared statement
>>> ..
>>> end transaction.
>>>
>>> I though the prepare statement must be outside of the transaction. Can any 
>>> one confirm this?
>>> Thanks,
>>> JP
>>>
>>>
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>> 
>>>   
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>   
>> 
> If you do not need to bind more than one data address, do not use the "?" (or 
> alternative) in your SQL.  You use the bind capability to bind different data 
> addresses to the compiled SQL statement.
>   
The Sqlite binding method gives great flexibility in the use of 
compiled/prepared statements.
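
For illustration, the prepare-once, bind-per-row shape discussed in this thread looks roughly like this (names invented, error checking trimmed); the bind call supplies the values for one execution, which is why it sits inside the loop while the prepare sits outside:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *ins;
    int i;

    if (sqlite3_open("log.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS sample(n INTEGER);", 0, 0, 0);

    sqlite3_prepare_v2(db, "INSERT INTO sample(n) VALUES(?);", -1, &ins, 0);  /* compile once */

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (i = 0; i < 1000; i++) {
        sqlite3_bind_int(ins, 1, i);   /* a new value each pass, so bind stays in the loop */
        sqlite3_step(ins);
        sqlite3_reset(ins);            /* rearm the compiled statement for reuse */
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    sqlite3_finalize(ins);
    sqlite3_close(db);
    return 0;
}
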
>
>   
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Prepared statements must be generated inside your transaction

2009-05-12 Thread John Stanton
The confusion is in the names.  When you "PREPARE" a statement you 
actually compile the SQL.  Compiling a program each time you run it is a 
waste of resources, and so is compiling the SQL each time you use it.

Prepare your statements once and use them many times, binding data to 
the compiled code at execution time.

The design of Sqlite is such that it is possible to store compiled SQL 
permanently and use it when you run your application.  I believe there 
is, or was a version of Sqlite tailored for embedded use which does just 
that.

In our Sqlite programs we like to prepare all SQL in an initialization 
phase and have two wins.  First we get faster execution and secondly we 
detect database errors or mismatches before entering the main functions 
of the program and avoid having to backtrack in error recovery.

Pavel Ivanov wrote:
> I believe, these matters are very specific for each database server
> (though I can't recall server for which it's true what you say). What
> specific server is talked about in this book? What's the name of this
> book?
>
> As long as SQLite is a concern, I prepare statements outside of
> transaction and then use them across different transactions without
> any problems but with huge performance improvement compared to when
> I've prepared statements before each transaction.
>
> Pavel
>
> On Tue, May 12, 2009 at 12:32 PM, Joanne Pham  wrote:
>   
>> Hi All,
>> I have read one of the performance document and it stated that "prepared 
>> statements must be generated inside transaction". Is that correct.
>> The document also stated that " While trying to improve the performance of 
>> bulk imports in our C++ project, we found that creating the prepared 
>> statements was a large hit. Creating them once at the
>> construction of our class, though, made the problem worse! It turns 
>> out that prepared statements that are generated before the transaction start 
>> do not work with the transaction. The fix was simply to
>> create new prepared statements once per transaction."
>>
>> So I have to do this:
>> begin transaction
>> prepared statement
>>..
>> end transaction.
>>
>> I though the prepare statement must be outside of the transaction. Can any 
>> one confirm this?
>> Thanks,
>> JP
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] import / insert 120k records

2009-05-07 Thread John Stanton
Write a simple program.

As I recall our programs which do this use the Expat parser and obey a protocol 
where the XML DTD represents the database tables and columns so that SQL can be 
generated directly from the XML.  It makes for a handy way to transport a 
database, or a part of a database in a universal format.  All that is needed is 
the program to scan the database and generate XML and one to receive the XML 
and transform it into SQL statements.
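
A very rough sketch of that idea, assuming (purely for illustration) an XML shape where each element maps to a table and each attribute to a column; it only prints the generated INSERTs, which can then be piped through the sqlite3 shell inside a single BEGIN/COMMIT:

/* Emit one INSERT per element; attributes become columns.  Sketch only:
   a real importer would bind parameters instead of pasting values in. */
#include <stdio.h>
#include <expat.h>

static void start_element(void *user, const XML_Char *name, const XML_Char **atts)
{
    int i;
    if (atts == NULL || atts[0] == NULL) return;   /* container element, no data */
    printf("INSERT INTO %s (", name);
    for (i = 0; atts[i]; i += 2)
        printf("%s%s", i ? ", " : "", atts[i]);
    printf(") VALUES (");
    for (i = 0; atts[i]; i += 2)
        printf("%s'%s'", i ? ", " : "", atts[i + 1]);
    printf(");\n");
}

int main(void)
{
    char buf[8192];
    size_t n;
    XML_Parser p = XML_ParserCreate(NULL);

    XML_SetStartElementHandler(p, start_element);
    while ((n = fread(buf, 1, sizeof buf, stdin)) > 0)
        if (XML_Parse(p, buf, (int)n, 0) == XML_STATUS_ERROR) {
            fprintf(stderr, "XML parse error\n");
            break;
        }
    XML_Parse(p, "", 0, 1);        /* tell expat the document is complete */
    XML_ParserFree(p);
    return 0;
}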


Ben Marchbanks wrote:
> Whats the best way to import a large number of records ?
> 120K+ currently stored as an XML file.  
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Optimizing concurrency with sql query - locks?

2009-05-06 Thread John Stanton
Sqlite is an ACID database - it ensures that data is written to disk, so 
even a database that is cached in memory still shares the single disk resource.

Jim Wilcoxson wrote:
> I'm not sure what you are considering a massive slowdown, but let's
> assume that the entire database fits into memory and disk I/O isn't
> the bottleneck.  You said you're running 300 instances of the query on
> several processors.  If several means 3 CPUs, then in a perfect world,
> running 300 instances will be 100 times slower than running just 1.
> This assumes you are getting linear scalability, which no one ever
> does, so the slowdown will be more than 100x.  If you have 4
> processors, running 300 queries simultaneously should still be 75x
> slower (best case) than running 1.
>
> You also mentioned seeing a wide variance in response times.  This is
> typical, because most schedulers in multi-user systems won't perfectly
> distribute CPU time to 300 processes.  If the scheduler decides to run
> task 1 to completion, then task 2, etc., your last task will appear to
> take much longer than the first task.  For example, let's say that
> each task by itself takes 1 second of CPU time and zero I/O time,
> assuming the database is all in memory.  300 queries will take 300
> seconds to complete.  If the system scheduler runs each task, in
> order, to completion, then the first task will take 1 second and the
> last task will take 300 seconds to complete.  Wide variance.  Or, the
> system scheduler could decide to give each task 1/100th of a second.
> It will take 3 seconds for all tasks to get a timeslice.  In this
> scenario, it will still take 300 seconds to complete all 300 jobs, but
> they will complete within 1/100th of a second of each other, and each
> job will report that it took 300 seconds to complete.  No variance.
> The price you will pay for no variance is that you increase the
> multiprocessing overhead because now instead of doing just 300 task
> switches to execute each job to completion, you are doing 300x100 =
> 30,000 task switches.  Task switches aren't free, so the "no variance"
> schedule will take longer overall than the wide variance.  This is a
> classic fairness vs high throughput dilemma; you can't have both.
>
> If you are seeing something like 1000x slower performance, then as
> others have mentioned you could have a disk I/O or locking bottleneck.
>
> Jim
>
> On 5/6/09, Igor Tandetnik  wrote:
>   
>> "Rosemary Alles"  wrote
>> in message news:af79a266-b697-4924-b304-2b1feccba...@ipac.caltech.edu
>> 
>>> Run on a single processor, the following query is quite fast:
>>>
>>> When concurrency is introduced (simply running the query on several
>>> processors against the same database - say 300 instances of it) causes
>>> a massive slow down
>>>   
>> Well, you may have multiple CPUs, but you only have a single hard drive.
>> That drive head can't be in multiple places simultaneously.
>>
>> Igor Tandetnik
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Optimizing concurrency with sql query - locks?

2009-05-06 Thread John Stanton
Igor Tandetnik wrote:
> "Rosemary Alles"  wrote
> in message news:af79a266-b697-4924-b304-2b1feccba...@ipac.caltech.edu
>   
>> Run on a single processor, the following query is quite fast:
>>
>> When concurrency is introduced (simply running the query on several
>> processors against the same database - say 300 instances of it) causes
>> a massive slow down
>> 
>
> Well, you may have multiple CPUs, but you only have a single hard drive. 
> That drive head can't be in multiple places simultaneously.
>
> Igor Tandetnik
>
>   
Further to Igor's point, the machines we use today almost always have 
one hard disk and database activity involves sharing that single 
resource between users.  If there is heavy disk activity you will get 
maximum throughput by having one process running at a time.  If there is 
sporadic disk activity interspersed with other processing you will get 
more throughput with concurrent processes.

Remember that all multi processing and multi threading involves a 
substantial overhead.  The throughput on old, slow computers with a 
simple monitor instead of a multi tasking O/S was very impressive 
because there was no task switching overhead.  Machines optimized for 
multi-user database activity have many disk spindles with the database 
distributed between them so that the disk farm is no longer a single 
resource.
> _
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread John Stanton
The better sales pitch and better political connections persuade the 
granter of research grants.  Better design and better science prove 
themselves by performance and success in the marketplace, while bad 
design and bad science consume the grant without productivity.

This has strayed from the topic.  It should get back to Sqlite.

James Gregurich wrote:
> I describe reality.
> 
> Someone has to be the arbiter of "better." Generally, that arbiter is  
> the guy handing out the research grants.
> 
> On May 1, 2009, at 5:33 AM, John Stanton wrote:
> 
>> Science is the Scientific Method - observation, hypothesis and
>> skepticism.  The antithesis of politics.  There are no facts in  
>> science,
>> only observations and any hypothesis is only valid until a better one
>> replaces it.
>>
>> You describe bad, politicized science.
>>
>> James Gregurich wrote:
>>> With all due respect,  science itself is a set of
>>> "positions" (opinions) which are endorsed by small group of people as
>>> official doctrine after appropriate study. Saying "A 'position' is
>>> politics, not science" is not a particularly meaningful statement.   
>>> If
>>> you want to argue that point, feel free to send me a private email.
>>>
>>> My threaded application works pretty darn well. I can process
>>> thousands of print industry files on an 8-core system keeping the
>>> cores busy without lagging the GUI for other applications. Just
>>> because many people create ill conceived programs doesn't mean
>>> threaded programs are inherently doomed to be ill-conceived. The
>>> development tools and techniques for building concurrent systems are
>>> advancing and making concurrency quite feasible.
>>>
>>> James Gregurich
>>> Engineering Manager
>>> Markzware
>>>
>>> On Apr 30, 2009, at 5:01 AM, John Stanton wrote:
>>>
>>>> A "position" is politics, not science.  Warnings about the use of
>>>> threads are based on science, and advise you to avoid them if  
>>>> possible
>>>> for your own protection.
>>>>
>>>> I see ill conceived programs using threads which go to complex
>>>> synchronization to achieve the equivalent of single stream execution
>>>> but
>>>> with much greater overhead.  A KISS situation.
>>>>
>>>> James Gregurich wrote:
>>>>> thanks for the info. That should work for me.
>>>>>
>>>>> Given the industry is going multicore and 16-core macintoshes for
>>>>> your
>>>>> grand-mother are  just a few years away, I recommend you rethink  
>>>>> your
>>>>> position on the use of threading. Apple is heavily pushing
>>>>> parallelism
>>>>> on its developers.  NSOperation is a major part of that effort.  
>>>>> As I
>>>>> understand it, MS is developing their copy of NSOperation for  
>>>>> VS2010.
>>>>> The development landscape is only going to get more threaded as  
>>>>> time
>>>>> goes on.
>>>>>
>>>>> -James
>>>>>
>>>>>
>>>>>
>>>>>> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>>>>>>
>>>>>>
>>>>>>> howdy!
>>>>>>>
>>>>>>> question:
>>>>>>>
>>>>>>> for an in-memory db with the threading mode set to serialized, is
>>>>>>>
>>>>>> the
>>>>>>
>>>>>>> internal mutex held for an entire transaction so that one thread
>>>>>>>
>>>>>> won't
>>>>>>
>>>>>>> access the db while another one is in the middle of a transaction
>>>>>>>
>>>>>> with
>>>>>>
>>>>>>> multiple insert statements?
>>>>>>>
>>>>>> No.  But the mutex is recursive.  So you can get a copy of it  
>>>>>> using
>>>>>> sqlite3_db_mutex() then lock it yourself using
>>>>>> sqlite3_mutex_enter()/
>>>>>> leave().
>>>>>>
>>>>>> Also remember:  You should not be using threads.  Threads will  
>>>>>> bring
>>>>>> only grief and woe.  On your own head be it.
>>>>>>
>>>>>>
>>>>>>
>>>>>> D. Richard Hipp
>>>>>> drh at hwaci.com
>>>>>>
>>>>>>
>>>>> ___
>>>>> sqlite-users mailing list
>>>>> sqlite-users@sqlite.org
>>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>>>
>>>> ___
>>>> sqlite-users mailing list
>>>> sqlite-users@sqlite.org
>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Errors with quickstart.html's C example

2009-05-01 Thread John Stanton
What is your makefile?  It seems to be missing a link library.
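
For reference, those undefined references normally mean the SQLite library is simply missing from the link line rather than that the sample code is outdated; on a typical Linux install something like

   gcc test.c -o test -lsqlite3

(assuming the distribution's usual libsqlite3 development package is installed) resolves them.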

Samuel Baldwin wrote:
> Howdy. When I try to compile the small C program at the bottom of
> http://www.sqlite.org/quickstart.html , I get the following errors:
> 
> arrakis^library% gcc test.c
> test.c: In function ‘main’:
> test.c:21: warning: incompatible implicit declaration of built-in
> function ‘exit’
> test.c:27: warning: incompatible implicit declaration of built-in
> function ‘exit’
> /tmp/ccYuZFw7.o: In function `main':
> test.c:(.text+0xc0): undefined reference to `sqlite3_open'
> test.c:(.text+0xd2): undefined reference to `sqlite3_errmsg'
> test.c:(.text+0xf4): undefined reference to `sqlite3_close'
> test.c:(.text+0x123): undefined reference to `sqlite3_exec'
> test.c:(.text+0x14f): undefined reference to `sqlite3_free'
> test.c:(.text+0x158): undefined reference to `sqlite3_close'
> collect2: ld returned 1 exit status
> 
> The warnings about exit() are removed if I add #include ,
> but then I'm left with:
> 
> arrakis^library% gcc test.c
> /tmp/ccENlzA6.o: In function `main':
> test.c:(.text+0xc0): undefined reference to `sqlite3_open'
> test.c:(.text+0xd2): undefined reference to `sqlite3_errmsg'
> test.c:(.text+0xf4): undefined reference to `sqlite3_close'
> test.c:(.text+0x123): undefined reference to `sqlite3_exec'
> test.c:(.text+0x14f): undefined reference to `sqlite3_free'
> test.c:(.text+0x158): undefined reference to `sqlite3_close'
> collect2: ld returned 1 exit status
> 
> I'm really not sure what else to do. I've tried this on two machines
> running two seperate linux distributions, and friends have confirmed
> the same error on yet other machines. Is the sample code just
> outdated? Looking over it and sqlite3.h makes me think it should work.
> 
> I'm running sqlite3 3.6.13
> 
> Thanks, I'm completely baffled.
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread John Stanton

Science is the Scientific Method - observation, hypothesis and 
skepticism.  The antithesis of politics.  There are no facts in science, 
only observations and any hypothesis is only valid until a better one 
replaces it.

You describe bad, politicized science.

James Gregurich wrote:
> With all due respect,  science itself is a set of  
> "positions" (opinions) which are endorsed by small group of people as  
> official doctrine after appropriate study. Saying "A 'position' is  
> politics, not science" is not a particularly meaningful statement.  If  
> you want to argue that point, feel free to send me a private email.
> 
> My threaded application works pretty darn well. I can process  
> thousands of print industry files on an 8-core system keeping the  
> cores busy without lagging the GUI for other applications. Just  
> because many people create ill conceived programs doesn't mean  
> threaded programs are inherently doomed to be ill-conceived. The  
> development tools and techniques for building concurrent systems are  
> advancing and making concurrency quite feasible.
> 
> James Gregurich
> Engineering Manager
> Markzware
> 
> On Apr 30, 2009, at 5:01 AM, John Stanton wrote:
> 
>> A "position" is politics, not science.  Warnings about the use of
>> threads are based on science, and advise you to avoid them if possible
>> for your own protection.
>>
>> I see ill conceived programs using threads which go to complex
>> synchronization to achieve the equivalent of single stream execution  
>> but
>> with much greater overhead.  A KISS situation.
>>
>> James Gregurich wrote:
>>> thanks for the info. That should work for me.
>>>
>>> Given the industry is going multicore and 16-core macintoshes for  
>>> your
>>> grand-mother are  just a few years away, I recommend you rethink your
>>> position on the use of threading. Apple is heavily pushing  
>>> parallelism
>>> on its developers.  NSOperation is a major part of that effort. As I
>>> understand it, MS is developing their copy of NSOperation for VS2010.
>>> The development landscape is only going to get more threaded as time
>>> goes on.
>>>
>>> -James
>>>
>>>
>>>
>>>> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>>>>
>>>>
>>>>> howdy!
>>>>>
>>>>> question:
>>>>>
>>>>> for an in-memory db with the threading mode set to serialized, is
>>>>>
>>>> the
>>>>
>>>>> internal mutex held for an entire transaction so that one thread
>>>>>
>>>> won't
>>>>
>>>>> access the db while another one is in the middle of a transaction
>>>>>
>>>> with
>>>>
>>>>> multiple insert statements?
>>>>>
>>>> No.  But the mutex is recursive.  So you can get a copy of it using
>>>> sqlite3_db_mutex() then lock it yourself using  
>>>> sqlite3_mutex_enter()/
>>>> leave().
>>>>
>>>> Also remember:  You should not be using threads.  Threads will bring
>>>> only grief and woe.  On your own head be it.
>>>>
>>>>
>>>>
>>>> D. Richard Hipp
>>>> drh at hwaci.com
>>>>
>>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread John Stanton
I suggest that you peruse the source code of Pthreads to get some idea 
of what you do when you create a thread.  They are not, as some people 
appear to believe, a magic way of making the machine do more and go faster.

Threads are of course invaluable when you want one resource, such as a 
user, to have access to multiple resources.

When you have multiple threads accessing one resource, such as a 
database, then you must have a robust synchronization method, such as a 
mutex, queue or similar.  There is a danger that programmers will 
implement a busy wait to perform their synchronization and get poor 
performance. In some low level, low latency cases however the use of the 
ultimate busy wait, a spin lock, is the best approach.
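
A small sketch of that synchronization point for the serialized-mode case discussed in this thread, using the connection's own recursive mutex as suggested below; the table is invented:

/* Take the connection's recursive mutex around a multi-statement transaction
   so another thread cannot interleave its statements on the same handle. */
#include <sqlite3.h>

void transfer(sqlite3 *db)   /* one connection shared by several threads */
{
    sqlite3_mutex *m = sqlite3_db_mutex(db);

    sqlite3_mutex_enter(m);  /* recursive, so SQLite's own locking is unaffected */
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_exec(db, "UPDATE account SET balance = balance - 10 WHERE id = 1;", 0, 0, 0);
    sqlite3_exec(db, "UPDATE account SET balance = balance + 10 WHERE id = 2;", 0, 0, 0);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    sqlite3_mutex_leave(m);
}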

Pavel Ivanov wrote:
> So you don't even try to argue about database-servers-like
> applications? Apache maybe indeed not very good example because of
> good degree of independence between threads, though I think in some
> cases caching across different connections and thus across different
> threads could be useful.
> 
>> The danger with threading is in concurrent access to data.
> 
> I understand that. That's why concurrent access should be made very
> wisely. But if this concurrent access is to some cache which allows to
> avoid huge amount of disk reads - it's worth the effort. And my
> overall point here is that there's no universal taboo "threads are
> evil in all cases". It totally depends on the type of application.
> 
> Pavel
> 
> On Thu, Apr 30, 2009 at 2:54 PM, Roger Binns  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Pavel Ivanov wrote:
>>> I'm curious: with all these "warnings based on science" how would you
>>> implement application similar to apache web-server
>> The danger with threading is in concurrent access to data.  Apache has
>> several different modes of operation (forking etc) but in the one that
>> uses threading, each thread handles a connection and does not handle or
>> share any data with other threads during its operation on that
>> connection.  For the non-Windows worker (MPM) it also uses multiple
>> processes with multiple threads and the processes exit after handling a
>> certain number of requests.
>>
>> As an example you can read about mod_wsgi which provides for running
>> Python code as though it was CGI - see http://code.google.com/p/modwsgi/
>>
>> Roger
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v1.4.9 (GNU/Linux)
>>
>> iEYEARECAAYFAkn58/MACgkQmOOfHg372QSoOQCfXvz8ef7vS0HP/Uc9hZ/1BQSO
>> Cw0AoMXzgpNwO0PA5uMBvG/DB2Y3lKHG
>> =Cage
>> -END PGP SIGNATURE-
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mutex and begin/end transaction

2009-04-30 Thread John Stanton
A "position" is politics, not science.  Warnings about the use of 
threads are based on science, and advise you to avoid them if possible 
for your own protection.

I see ill conceived programs using threads which go to complex 
synchronization to achieve the equivalent of single stream execution but 
with much greater overhead.  A KISS situation.

James Gregurich wrote:
> thanks for the info. That should work for me.
>
> Given the industry is going multicore and 16-core macintoshes for your  
> grand-mother are  just a few years away, I recommend you rethink your  
> position on the use of threading. Apple is heavily pushing parallelism  
> on its developers.  NSOperation is a major part of that effort. As I  
> understand it, MS is developing their copy of NSOperation for VS2010.  
> The development landscape is only going to get more threaded as time  
> goes on.
>
> -James
>
>
>   
>> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>>
>> 
>>> howdy!
>>>
>>> question:
>>>
>>> for an in-memory db with the threading mode set to serialized, is  
>>>   
>> the
>> 
>>> internal mutex held for an entire transaction so that one thread  
>>>   
>> won't
>> 
>>> access the db while another one is in the middle of a transaction  
>>>   
>> with
>> 
>>> multiple insert statements?
>>>   
>> No.  But the mutex is recursive.  So you can get a copy of it using
>> sqlite3_db_mutex() then lock it yourself using sqlite3_mutex_enter()/
>> leave().
>>
>> Also remember:  You should not be using threads.  Threads will bring
>> only grief and woe.  On your own head be it.
>>
>>
>>
>> D. Richard Hipp
>> drh at hwaci.com
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sporadic freezes of sqlite

2009-04-26 Thread John Stanton
Sqlite does not use a server.  It is embedded in the application.

Pavel Ivanov wrote:
> Unfortunately these two options are not for me.
> I'm not in control of servers hardware, so my application should work
> on the given servers and nobody will adjust them for my application.
> And about first option, I believe what you say is to use in-memory
> database for intensive operations. But all my application consists of
> these intensive operations alone. So that you can reasonably argue
> that I should reject the idea of on-disk database and work totally in
> memory. And i can agree with you. But there's a couple of requirements
> that make things difficult. And the main of it is application should
> have some durability and survive power outages, crashes and reboots.
> "Some" because I can sacrifice for example everything that was written
> up to 5 minutes before power outage, but everything else should stay.
> And at this point all idea of in-memory database is ruined and I have
> to cope somehow with problems of frequent writings to disk.
>
>
> Pavel
>
> On Fri, Apr 24, 2009 at 6:13 AM, January Weiner  
> wrote:
>   
>>> I have an application written using sqlite. It writes into the
>>> database very intensively. And I noticed that it works nice and very
>>> fast but from time to time it just freezes for several seconds (I've
>>> registered freezes up to 8 secs). After some tracing of sqlite code
>>>   
>> I had the same problem. Also, it was increasing with database size.
>> Depending on your environment, work procedure and whether you want
>> speed or security, there are two things that work beautifully for me:
>>
>> 1) do the intentsive work on a db copy that sits on a ramdisk (or
>> tmpfs filesystem). I do that if I have to create a new database or
>> rebuild this from scratch, and since the process is supervised, there
>> is not really a danger of data loss.
>>
>> 2) for normal operation, I use a software RAID from flash disks, which
>> is not as fast (for data transfer) as a hard drive or SSD, but it is
>> has a response time better by an order of magnitude (at least) than
>> even a good hard drive.
>>
>> j.
>>
>> --
>> -Dr. January Weiner 3  -+---
>> Inst. of Bioinformatics, UKM, Univ. of Muenster | Von-Esmarch-str. 54
>> (+49) (251) 83 53002| D48149 Münster
>> http://www.compgen.uni-muenster.de/ | Germany
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] the speed of embedded database engines, sqlite3 vs berkeley db, I'm confused

2009-04-26 Thread John Stanton
P Kishor wrote:
> On Fri, Apr 24, 2009 at 5:27 AM, John Stanton <jo...@viacognis.com> wrote:
>   
>> Sqlite provides COMMIT and ROLLBACK unlike Berkeley.  If you can get by
>> without the advanced features of Sqlite, then use Berkeley and take
>> advantage of its simplicity and faster execution.
>> 
>
> BDB does support transactions...
>
> http://www.oracle.com/technology/documentation/berkeley-db/xml/gsg_xml_txn/cxx/usingtxns.html
>
> In fact, BDB acts as the (a) transactional layer in MySQL (the
> alternative is InnoDB).
>
> Of course, other advantages of SQLite still apply. A key-value
> metaphor can only be pushed so far.
>
>   
Guess what makes BDB run faster - no transactions. If you are not using
transactions BDB might be for you. Personally, as a long time user of BDB
in various ways, I have a very poor opinion of it. It has a very
different application from Sqlite.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] the speed of embedded database engines, sqlite3 vs berkeley db, I'm confused

2009-04-24 Thread John Stanton
Sqlite provides COMMIT and ROLLBACK unlike Berkeley.  If you can get by 
without the advanced features of Sqlite, then use Berkeley and take 
advantage of its simplicity and faster execution.

Note that for best performance an Sqlite application should group 
database inserts, deletes and updates into transactions.
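
A minimal sketch of that grouping with the C API; the file name, table and
values here are invented for illustration, and error handling is trimmed:

#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    int i;

    if (sqlite3_open("grouped.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(id INTEGER, val TEXT)",
                 NULL, NULL, NULL);

    /* One transaction around many inserts: one safe write at COMMIT
       instead of one per INSERT. */
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    for (i = 0; i < 1000; i++) {
        char *sql = sqlite3_mprintf("INSERT INTO t VALUES (%d, %Q)", i, "x");
        sqlite3_exec(db, sql, NULL, NULL, NULL);
        sqlite3_free(sql);
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}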

liubin liu wrote:
> our project is using sqlite3, but found that the speed is too slow.
> I just tested the BerkeleyDB, and found that the speed is very fast. But I
> knew the sqlite3 is fast enough. And so I'm confused.
> I may be using sqlite3 in wrong way?
>
> anyway, next is my test code. I'm glad to receive your message.
>
> __
>
>
> // http://www.ibm.com/developerworks/cn/linux/l-embdb/
>
> //head///
> #include <stdio.h>
> #include <stdlib.h> // for system
> #include <string.h> // for memset strcpy
> #include <time.h>   // for time
>
> #include <sqlite3.h> // for Sqlite3
> #include <db.h>  // for Berkeley DB
>
>
>
>   macro and struct 
> /
> #define DB_FILE_SQLITE "test_sqlite_0.1.db"
> #define DB_FILE_BDB "test_bdb_0.1.db"
>
> struct customer
> {
>   int c_id;
>   char name[10];
>   char address[20];
>   int age;
> };
>
>
>
>
>
>
> //   global variable  
> ///
>
> sqlite3 *db = NULL;
>
> int ret = -1; // return value shared by the functions below
>
>
>
>
>
>
>
> //   func proto  
> ///
>
> void way01(); // effect of opening/closing the file each time
>
>
>
> / sqlite3 //
>
> int sqlite_createtb(sqlite3 *db);
> int sqlite_insertdb(sqlite3 *db);
>
> int getdata_sqlite ( sqlite3 *db, struct customer *tb1 );
>
>
>
> /// berkeley db 
>
> int bdb_createdb(); // also inserts the test data
>
> void print_error(int r);
> void init_dbt( DBT *key, DBT *data );
>
>
>
>
>
>
> /// code///
>
> int main ( void )
> {
>   int c = 0;
>   
>   system ( "rm -rf test_0.1.db" );
>   ret = sqlite3_open ( DB_FILE_SQLITE, &db );
>   ret = sqlite_createtb(db);
>   ret = sqlite_insertdb(db);
>   sqlite3_close (db);
>   
>   printf ( "Sqlite3 / Berkeley DB, 创建数据库 + 插入数据 ... 完成\n" );
>   
>   printf ( "/\n" );
>   printf ( "1 : 查询性能比较 - Berkeley DB 与 Sqlite3 ///\n" );
>   
>   while ( (c=getchar()) != 'q' )
>   {
>   switch (c)
>   {
>   case '1':
>   way01();
>   break;
>   default:
>   break;
>   }
>   }
>   
>   system ( "rm -rf test_sqlite_0.1.db" );
>   system ( "rm -rf test_bdb_0.1.db" );
>   
>   return 0;
> }
>
> ///
> // query performance comparison - Berkeley DB vs Sqlite3
> void way01()
> {
>   time_t tick1, tick2;
>   
>   int i = 0;
>   int num = 1000*100;
>   
>   struct customer tb_data;
>   
>   ///
>   time ( &tick1 );
>   for ( i=0; i<num; i++ )
>   {
>   ret = sqlite3_open ( DB_FILE_SQLITE, &db );
>   ret = getdata_sqlite ( db, &tb_data );
>   sqlite3_close (db);
>   }
>   time ( &tick2 );
>   printf("Sqlite3 : opened, closed and queried the database file %d times, elapsed: %4ld s\n", num, tick2 -
> tick1 );
>   
>   ///
>   bdb_createdb();
> }
>
>
>
>
>
>
>
> ///
> void *callback(void *para, int col, char **value, char **colname )
> {
> //int i;
> //for(i=0; i<col; i++) {
> //printf("%s, ", (value[i] ? value[i] : "NULL") );
> //}
> //printf("col = %d\n", col);
>   return (void *) 0;
> }
> int getdata_sqlite ( sqlite3 *db, struct customer *tb1 )
> {
>   char *sql = "SELECT * FROM table1 WHERE id=500;";
>   ret = sqlite3_exec ( db, sql, (void *)callback, NULL, NULL );
>   return 0;
> }
>
> ///
> int sqlite_createtb( sqlite3 *db )
> {
>   char *sql1 = "CREATE TABLE table1 (id INTEGER, name VARCHAR(10), address
> VARCHAR(20), age INTEGER)";
>   ret = sqlite3_exec ( db, sql1, NULL, NULL, NULL );
>   return 0;
> }
> int sqlite_insertdb(sqlite3 *db)
> {
>   time_t tick1, tick2;
>   int i = 0;
>   int num = 1000;
>   
>   char *qf = "INSERT INTO table1 VALUES (%d, %Q, %Q, %d)";
>   char *sql = NULL;
>   
>   time ( &tick1 );
>   sqlite3_exec ( db, "BEGIN", NULL, NULL, NULL );
>   for (i=0;i<num;i++)
>   {
>   sql = sqlite3_mprintf ( qf, i, "javer", "chengdu", 32*i );
>   ret = sqlite3_exec(db, sql, NULL, NULL, NULL);
>   sqlite3_free (sql);
>   }
>   

Re: [sqlite] Creating procedures in sqlite

2009-04-23 Thread John Stanton
Sqlite does not have stored procedures, but does have functions.

We implemented Javascript as a stored procedure language in Sqlite and 
are currently looking at doing the same with Python for another 
application.  You could also use Lua.  These are all Open Source projects.
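
For the function route, sqlite3_create_function() registers C code that SQL
can then call; a small sketch (the function name "half" and its logic are
only an example, not anything built in):

#include <sqlite3.h>

/* SQL-callable function: half(X) returns X / 2.0 */
static void half_func(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    (void)argc;
    sqlite3_result_double(ctx, sqlite3_value_double(argv[0]) / 2.0);
}

/* Register it on an open connection; afterwards
   "SELECT half(price) FROM items;" works in SQL. */
int register_half(sqlite3 *db)
{
    return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, NULL,
                                   half_func, NULL, NULL);
}

Anything heavier than that is where the scripting-language extensions come in.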

Jyoti Seth wrote:
> Hi All,
>
>  
>
> SQLite lacks built-in support for stored procedures. We need to create it in
> our application. Please guide me any alternative method to achieve this
> functionality.
>
>  
>
> Thanks,
>
> Jyoti
>
>  
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction commit performance

2009-04-23 Thread John Stanton
COMMIT does the heavy lifting, which is why it consumes resources.  It 
actually performs the safe write to the disk.

Alessandro Merolli wrote:
> Hi all,
>
>I'm working on a project that makes massively usage of SQL  
> statements with many inserts/selects. It's a multi-threaded process  
> that shares a certain amount of SQLite "connections" to be used by  
> other process through an IPC layer. The connections are created at  
> startup and 5 other database files are attached to them before being  
> shared by this process. Most of the time, the statements requested are  
> grouped into a database transaction like this:
>
>  BEGIN TRANSACTION;
>  INSERT INTO TBTEST (X,Y,Z) SELECT X,Y,Z FROM TBDATA JOIN ... ;
>  DELETE FROM TBTEST WHERE X NOT IN (SELECT );
>  UPDATE TBTEST SET Y=y WHERE X=x;
>  COMMIT TRANSACTION;
>
>I'm already considering all the issues that this kind of design  
> has: database concurrency, database locks, etc... but, this is not the  
> thing.
>I'm experiencing a behavior that seams strange to me and I'd like  
> to check if anybody can give me some tips to minimize this: The COMMIT  
> statement consumes 50% or more of the time of the whole transaction;  
> for example: if the whole transaction costs 2000 ms, only the COMMIT  
> operation took 1500 ms. I understood all the operations described in  
> the SQLite documentation about transaction commit/rollback feature,  
> but this scenario seems to me that some adjustments can be made to  
> minimize this.
>
>I'm using:
>- The server is an Intel 2GHz, 2GB of RAM and a 80GB hard-disk SATA  
> II (w/ 8MB of cache) 7200 RPM;
>- Windows XP SP3 professional using NTFS;
>- SQLite 3.6.13 (amalgamation version);
>- I'm using shared-cache with the wait-notify feature implemented  
> in the previous 3.6.12;
>- Other compiler options being used are:
>SQLITE_THREADSAFE=1;
>TEMP_STORE=3;
>SQLITE_DEFAULT_CACHE_SIZE=65568;
>SQLITE_DEFAULT_TEMP_CACHE_SIZE=65568;
>SQLITE_MAX_ATTACHED=30;
>SQLITE_ENABLE_COLUMN_METADATA;
>SQLITE_ENABLE_UNLOCK_NOTIFY.
>- The database size is around 200MB.
>
>What I'm looking for is:
>- If someone has a similar environment, what can be done to  
> optimize Windows XP disk writes?
>- What PRAGMA statements or compiler options can help me with this,  
> without the risk of getting the database corrupted?
>
>Any suggestion is helpful, thanks for the attention.
>Best regards,
>Alessandro Merolli.
>   
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sporadic freezes of sqlite

2009-04-23 Thread John Stanton
Running in an unsafe mode for speed does not make the writes go away; 
ultimately they still have to reach the disk.

Perhaps you could devise a system where you post writes to a queue and 
have another thread or process perform the writes asynchronously.  You 
would then use otherwise idle machine time for writing and avoid the 
"freezing".

Doug Currie wrote:
> On Apr 22, 2009, at 4:38 PM, Pavel Ivanov wrote:
>
>   
>> I've tried to set pragma synchronous = on (it's off by default for
>> me), but it makes application to work 5 times slower which is not
>> acceptable for me. I would be happy if there was some solution in
>> between that, i.e. for example just a bit slower operation on every
>> pwrite but without 8 seconds-peaks.
>> 
>
> Perhaps you can occasionally wrap a transaction with:
> PRAGMA synchronous = NORMAL;
> < do the transaction >
> PRAGMA synchronous = OFF;
>
> which would flush OS allocated database cache buffers; do this one out  
> of N transactions. If you have threads performing transactions that  
> are not otherwise communicating, you base this decision on a random  
> number [rand() % N == 1]. Tune N to achieve the maximum pwrite time  
> you need.
>
> Caveat: I have never tried this; I don't use PRAGMA synchronous = OFF.
>
> e
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread John Stanton
A TRANSACTION only has a meaning when the database is being altered.  
There is no journalling and no commit on a read.  You get the results of 
the SELECT as soon as the database read occurs.  You cannot be faster 
than that.

Where you can get improved SELECT performance is by using caching.  
Explore shared caches.
JS.
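
For example, the two cache knobs look like this through the C API (the file
name and the 20000-page figure are arbitrary):

#include <sqlite3.h>

int open_with_cache(sqlite3 **db)
{
    int rc;

    /* Let connections in this process share one page cache. */
    sqlite3_enable_shared_cache(1);

    rc = sqlite3_open("app.db", db);
    if (rc != SQLITE_OK) return rc;

    /* Enlarge the page cache for this connection (value is in pages). */
    return sqlite3_exec(*db, "PRAGMA cache_size = 20000", NULL, NULL, NULL);
}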

Rosemary Alles wrote:
> Hullo Puneet,
>
> Many thanks for your response.
>
> My understanding of a sqlite3 "transaction" is probably poor. From  
> your response
> (if you are correct) I see that only UPDATES and INSERTS can be  
> speeded up
> via bundling many numbers of them in a Being/Commit block? Leading me  
> to ask:
> Is there no difference in behavior between a SINGLE select and several
> of them within the context of transaction?
>
> And yes, each of the many SELECTS have a different WHERE clause.
>
> -rosemary.
>
> On Apr 7, 2009, at 12:38 PM, P Kishor wrote:
>
>   
>> On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles  
>>  wrote:
>> 
>>> Hullo Igor,
>>>
>>> Many thanks for your response: I believe I didn't phrase my question
>>> correctly:
>>>
>>> 1) If I were to bundle several thousand SELECT statements in a single
>>> transaction - why would it not run faster?
>>>   
>> as far as I understand, transactions matter only in the context of
>> UPDATEs and INSERTs for the purpose of speed (keep in mind, I am not
>> addressing data integrity -- the ACID part here, but only speed). A
>> transaction speeds this multiple UPDATEs and INSERTs by decreasing the
>> number of times your program interacts with slowest part of your
>> computer, the hard disk.
>>
>> Multiple SELECTs in a transaction might help with the integrity, but
>> ensuring that you don't end up getting data changed in mid-stream, but
>> won't speed up the query.
>>
>> Are all your thousands of SELECTs based on different WHERE criterion?
>> If not, they would really be just one SELECT.
>>
>> 
>>> 2) This is precisely the problem though - each of those statements
>>> will yield rows of results to be parsed with
>>> sqlite3_column - in the context of the user's (my) program. If many
>>> SELECT statements are issued within the context
>>> of a single transaction (repeatedly), how does one deal with the
>>> results without a callback (if using sql_step)? Yes,
>>> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
>>> However, is does (also - additionally) offer the
>>> option of a user supplied calleback routine which sql_prepare etc. do
>>> not.
>>>
>>> Essentially, my question is about context. if many many SELECTS are
>>> bundled in a single transaction using prepare,
>>> bind and step. In what context does one parse the results? Do we not
>>> have synchronizing issue here?
>>>
>>> Thanks again,
>>> rosemary
>>>
>>> On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
>>>
>>>   
 "Rosemary Alles"  wrote
 in message news:20A6B796-613B-4F5D- 
 bfca-359d6b9fa...@ipac.caltech.edu
 
> I want to speed up my app. Can I run SELECT statements within the
> context of a transaction.
>   
 Yes, but it's unlikely to make it run any faster.

 
> If so, how does one handle the query
> results?
>   
 The same way as when running it without an explicit transaction.

 
> I would assume this cannot be done with sql_prepare,
> sql_bind, sql_step?
>   
 Of course it can. See sqlite3_column_*

 
> Would I *have* to use sql_exec
>   
 No. And if you look at the implementation of sqlite3_exec, it uses
 sqlite3_prepare and sqlite3_step internally anyway. It's maintained
 mostly for backward compatibility.

 
> What am I giving up
> by using sql_exec vs sql_prepare, sql_bind and sql_step?
>   
 Off the top of my head: 1) strong typing (you get all data as  
 strings,
 so that, say, an integer is converted to string and then you'll have
 to
 convert it back); 2) streaming (with sqlite3_exec, the whole  
 resultset
 must be present in memory at the same time; with sqlite3_step, only
 one
 row's worth of data needs to be present in memory; makes a huge
 difference for very large resultsets).

 Igor Tandetnik



 
>> -- 
>> Puneet Kishor http://www.punkish.org/
>> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
>> Carbon Model http://carbonmodel.org/
>> Open Source Geospatial Foundation http://www.osgeo.org/
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   


Re: [sqlite] httpd server ???

2009-03-19 Thread John Stanton
It is not an open source project but you are welcome to access the 
source code and notes.

The intent of the project is to provide a highly efficient application 
server for AJAX support of browser-based IT networks.  To that end the 
design criteria were:
  o  Minimize process and thread creation and destruction.
  o  Avoid inter-process communication.
  o  Minimize buffer shadowing on network traffic.
  o  Provide a highly efficient generator of HTML pages from data using SQL.
  o  Provide generation of charts from data using SQL.
  o  Maximize traffic by using compression where it can help.
  o  Implement stored procedure capability for the database.
  o  Provide a secure RPC mechanism for XMLHttpRequest traffic from browsers.

A simple server page generation language which uses a compiled byte code 
format with resolved addresses efficiently creates HTML pages from 
templates and SQL.  Embedded Sqlite provides efficient database access 
from within the server process and the use of the sendfile/TransmitFile 
network API minimizes buffer shadowing.  The end result is the ability 
to process an AJAX-style transaction involving database access on a 
slow server and client machine in 20-30 ms.  That elapsed time includes 
network latency and the time spent processing Javascript in the client 
browser.

The HTTP primitives are all custom coded from first principles to speed 
parsing and general processing.

As a simple HTTP server this program serves HTML pages and CGI processes 
with minimum overheads.

Should you have an interest in examining or using all or part of this 
software, contact me at jo...@viacognis.com and I can make a URL available.
JS

dcharno wrote:
> John Stanton wrote:
>   
>> I have an HTTP server wj\hich embeds Sqlite as well as a custom page 
>> generation language, and compiler and a remote procedure call interface 
>> for AJAX functionality and Javascript as an embedded scripting 
>> language.  It runs on Unix/Linux and conditionally compiles for 
>> Windows.  It uses a sinngle process with a pool of threads to handle 
>> multiple users.
>>
>> The RPC mechanism protects it from SQL injection attack.  It also 
>> implements GI and SSL plus data compression..
>>
>> Embedding the database code in the application server by using Sqlite 
>> gives an efficient application environment where there is not a 
>> proliferation of processes and process/thread generation and 
>> destruction.  Performance is good..
>>
>> Source is all in simple C.
>> 
>
> Sounds interesting.  Is this an open source project hosted somewhere?
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] httpd server ???

2009-03-18 Thread John Stanton
I have an HTTP server which embeds Sqlite as well as a custom page 
generation language and compiler, and a remote procedure call interface 
for AJAX functionality and Javascript as an embedded scripting 
language.  It runs on Unix/Linux and conditionally compiles for 
Windows.  It uses a single process with a pool of threads to handle 
multiple users.

The RPC mechanism protects it from SQL injection attack.  It also 
implements CGI and SSL plus data compression.

Embedding the database code in the application server by using Sqlite 
gives an efficient application environment where there is not a 
proliferation of processes and process/thread generation and 
destruction.  Performance is good.

Source is all in simple C.

Ken wrote:
> This is off topic, but since sqlite list has such savvy folks I thought I'd 
> try here.
>
> Does anyone know of an embedded http server that can serve and/or create 
> pages from a sqlite database?
>
>
> Thanks,
> Ken
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime Objects

2009-02-28 Thread John Stanton
Look at the Sqlite source code in the date function area and all is 
revealed.

jonwood wrote:
> John Stanton-3 wrote:
>   
>> Use the Sqlite date storage format and support.  With that approach 
>> which is astronomivally correct you can deliver any date format or 
>> manipulwtion,  You may need some custom written functions.  to get week 
>> number according to national rules etc, but the method is sound.  It is 
>> also compatible with different date systems.
>>
>> 
>
> Thanks, but I'm not sure what this means. "SQLite date storage format and
> support" doesn't appear to be a specific term (at least, it didn't turn up
> anything specific on Google). Is there a link that would describe what
> you're referring to and how it'd help my situation?
>
> Thanks.
>
> Jonathan
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime Objects

2009-02-28 Thread John Stanton
Use the Sqlite date storage format and support.  With that approach, 
which is astronomically correct, you can deliver any date format or 
manipulation.  You may need some custom written functions to get week 
number according to national rules etc, but the method is sound.  It is 
also compatible with different date systems.
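
A small illustration of that approach from C, assuming a hypothetical
invoices table whose due column stores julianday() values:

#include <stdio.h>
#include <sqlite3.h>

/* Store the astronomically correct julian day; render any format on the
   way out with the built-in date functions. */
void show_dates(sqlite3 *db)
{
    const char *sql =
        "SELECT strftime('%m/%d/%Y', due), "   /* US style  */
        "       strftime('%d.%m.%Y', due), "   /* EU style  */
        "       strftime('%W', due) "          /* week no.  */
        "FROM invoices";
    sqlite3_stmt *stmt;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s  %s  week %s\n",
               (const char *)sqlite3_column_text(stmt, 0),
               (const char *)sqlite3_column_text(stmt, 1),
               (const char *)sqlite3_column_text(stmt, 2));
    sqlite3_finalize(stmt);
}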

jonwood wrote:
>>> Exactly. No 2-digit year format, no AM/PM format, and no way to eliminate
>>> leading zeros, etc. Just as I pointed out in my original post.
>>>   
>> Well, I would say you (or your users) live in the past. 
>> The rest of the world uses ISO-8601 ;)
>> http://www.cl.cam.ac.uk/~mgk25/iso-time.html
>> 
>
> Heh, well, I suppose I could tell the client that I can't match his current
> reports, or make the columns narrow enough to all fit on a single page and
> see if he still wants to pay me. But, with the economy what it is, I was
> toying with the idea of just giving him what he asked for.
>
>   
>> Pun aside, you can always deliver epoch (or something else
>> you find more convenient) to your application and let the
>> application do the formatting. 
>> SQL isn't meant for presentation anyway, it's for relational
>> storage.
>>
>> Example:
>> Compute the time since the unix epoch in seconds (like
>> strftime('%s','now') except this includes the fractional
>> part):
>>
>> SELECT (julianday('now') - 2440587.5)*86400.0; 
>> 
>
> Right. I think I'm getting the picture of my options. I already have a fair
> amount of code working that relies on the a DATETIME column and was hoping
> it would support a date format supported by C/C++ (like the way I read
> DateTime values with MS SQL and C# in .NET).
>
> I can work something out if these are my options though.
>
> Thanks.
>
> Jonathan
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Maximum number of concurrent users

2009-02-26 Thread John Stanton
It is an embedded system, so the number of users depends on how it is 
used.  Be aware that it is not a database server like Oracle or DB2, but 
is a library of routines to link into your application.

Eversogood wrote:
> Hi,
>
> What is the maximum number of concurrent users for SQLite?
>
> Thanks,
> Ev
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double entry bookkeeping

2009-02-20 Thread John Stanton
BareFeet wrote:
> Hi John,
> 
>> You still miss the point of the cross reference ID.  It is NOT the
>> reference ID od the document, such as an invoice or check number,  
>> but it
>> more like the row ID used by Sqlire as a unique key for a DB row.
> 
> I thought, from your explanation, that was what you were using the  
> "Reference ID" for, as you said:
> 
>>> reference ID to tag the particular complete transaction
> 
> Anyway, it doesn't matter since I think we're talking about the same  
> thing, but with different column names. If you want to clarify, it  
> would help if you posted a schema of the three tables and the  
> relationships between them.
> 
>> You could normalize out the date and have stored as associated with  
>> the cross reference.  The cost of that would be an extra row lookup  
>> each time you want to get the date of an accounting transaction.  A  
>> performance and code complexity decision.
> 
> It's also an SQL design issue which impacts performance and the  
> ability to extract different reports. De-normalized databases are  
> fundamentally a bad thing. You wouldn't, for instance, design a  
> meeting database where every meeting event explicitly store the name  
> and contact details of the person you were meeting. You'd store it in  
> a People table and refer each event to it. Linking the data through  
> "an extra row lookup" is trivial and optimized within the SQL engine.
Actually an extra row lookup is not trivial, and can be a mighty burden. 
  Normalization is a trade off between function and performance, and one 
where individual circumstances drive the compromises.
> 
> Thanks,
> Tom
> BareFeet
> 
>   --
> Comparison of SQLite GUI tools:
> http://www.tandb.com.au/sqlite/compare/?ml
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double entry bookkeeping

2009-02-19 Thread John Stanton
We are talking about different things.  The reference ID is an ID 
assigned by the originator of the transaction, say an invoice number 
from a vendor or a check number from a bank, whereas the cross reference 
is assigned by the application software and represents a key in an index 
on the table.

The points made by another poster are significant - not only is the 
design of an accounting database important from a technical perspective, 
it must also not violate the Doctrines of Accounting (consistency, 
disclosure etc).  Since transactions are a matter of legal record they 
cannot be changed or deleted, only added, and it should be possible to 
prove that they cannot be changed.  The locally assigned cross reference 
number tagging the transaction set can also be used as an audit tool to 
prove the absence of deletions.


BareFeet wrote:
> Hi John,
> 
>> You still miss the point of the cross reference ID.  It is NOT the
>> reference ID od the document, such as an invoice or check number,  
>> but it
>> more like the row ID used by Sqlire as a unique key for a DB row.
> 
> I thought, from your explanation, that was what you were using the  
> "Reference ID" for, as you said:
> 
>>> reference ID to tag the particular complete transaction
> 
> Anyway, it doesn't matter since I think we're talking about the same  
> thing, but with different column names. If you want to clarify, it  
> would help if you posted a schema of the three tables and the  
> relationships between them.
> 
>> You could normalize out the date and have stored as associated with  
>> the cross reference.  The cost of that would be an extra row lookup  
>> each time you want to get the date of an accounting transaction.  A  
>> performance and code complexity decision.
> 
> It's also an SQL design issue which impacts performance and the  
> ability to extract different reports. De-normalized databases are  
> fundamentally a bad thing. You wouldn't, for instance, design a  
> meeting database where every meeting event explicitly store the name  
> and contact details of the person you were meeting. You'd store it in  
> a People table and refer each event to it. Linking the data through  
> "an extra row lookup" is trivial and optimized within the SQL engine.
> 
> Thanks,
> Tom
> BareFeet
> 
>   --
> Comparison of SQLite GUI tools:
> http://www.tandb.com.au/sqlite/compare/?ml
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any concept of row number in SQLite?

2009-02-19 Thread John Stanton
Use the Sqlite row id.
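
For example (with an invented table name, and noting that a rowid is only a
stable index while rows are not deleted or the table VACUUMed; declare an
INTEGER PRIMARY KEY if the value must persist):

#include <sqlite3.h>

/* Fetch a single record by its rowid instead of reading the whole table.
   The caller finalizes the statement when done with the row. */
int fetch_row(sqlite3 *db, sqlite3_int64 rowid, sqlite3_stmt **out)
{
    int rc = sqlite3_prepare_v2(db,
        "SELECT * FROM mytable WHERE rowid = ?", -1, out, NULL);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_int64(*out, 1, rowid);
    return sqlite3_step(*out);        /* SQLITE_ROW if found */
}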

His Nerdship wrote:
> Hi,
> I am converting a program from Paradox (stop laughing, please) to SQLite. 
> Paradox has a useful feature where you can specify the actual index of a row
> in the table.  This is handy when the table is displayed in a grid and you
> want the record corresponding to a row in that grid - you can just specify
> the index, say 28, of that grid row and it will get the record no 28 from
> the table.  It spares the need for a SELECT statement, and is a lot more
> efficient.
> As a SQLite newbie, the only way I can see to do this is to read the whole
> table with sqlite3_get_table() and then get the required row from the
> returned array.  This seems overkill when I just want a single record.
> Is there a more compact way of doing this?
> Thanks in advance etc.
> Sholto

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double entry bookkeeping

2009-02-19 Thread John Stanton
You still miss the point of the cross reference ID.  It is NOT the 
reference ID of the document, such as an invoice or check number, but is 
more like the row ID used by Sqlite as a unique key for a DB row.

You could normalize out the date and have it stored as associated with the 
cross reference.  The cost of that would be an extra row lookup each 
time you want to get the date of an accounting transaction.  A 
performance and code complexity decision.  As they say "you pays your 
money and you takes your choice".

BareFeet wrote:
> Hi John,
> 
> Thanks again for your discussion.
> 
>> "Double Entry" book keeping is actually a misnomer.  A transaction  
>> is very likely to have more than two entries.
> 
> Yes, I realize that, though most Transactions generally have just two.  
> In any case, if there are n entries in a Transaction, we only need n-1  
> of the Entry amounts in order to deduce the remaining amount. SQL (or  
> more generally, set theory) normalization suggests that explicitly  
> storing a redundant record (in this case, the nth amount) is not  
> ideal. I understand that double entry bookkeeping traditionally uses  
> the redundancy as an error check, but that seems more appropriate for  
> manual paper systems, since computer systems can validate entries  
> automatically. So I wondered if there's a normalized approach to  
> double entry bookkeeping.
> 
>> The "transaction ID" I mentioned is a locally generated reference ID  
>> to tag the particular complete transaction, which might include  
>> postings to a bank account, sales, cost of sales, inventory etc.  It  
>> is not conventionally used, but confers great advantages when it is  
>> implemented.  It lets you locate the balancing entries for any  
>> accounting transaction.  In other words given any element of an  
>> accounting transaction you can complete the T Account.
> 
> I think you actually called this the "Cross Reference" and I called it  
> "Transaction ID". But, yes, I understand using it in each Accounting  
> Entry that is part of the same Transaction, in order to group them  
> together. All Accounting Entries in the same Transaction should sum  
> their Amounts to be zero.
> 
>>> You seem to suggest storing the date for each entry within a  
>>> transaction. Would it not be better to store it once only for the  
>>> transaction as a whole, since each entry will share that same date?
> 
>> Dating each accounting transaction aids in creating a unique ID.
> 
> I understand the need to date a Transaction as a whole (therefore  
> storing the Date in the Transaction table), but it seems redundant and  
> denormalized to store the same date in each constituent Accounting  
> Entry. It doesn't make any of the Accounting Entries more unique  
> (since they all by nature have the same date).
> 
>> Normalizing it out is nor necessarily an advantage.
> 
> I can see disadvantages of requiring multiple updating (changing the  
> date in one Accounting Entry requires updating the Date for all other  
> Accounting Entries that are part of the same Transaction), wasted  
> storage etc. I can't see any advantages.
> 
> Or perhaps you actually also meant to imply that Date should be stored  
> once for the Transaction (not for each Entry within it)?
> 
>> It has been my observation that accounting data is generally stored  
>> in poorly thought-out data structures which then require layer upon  
>> layer of logic to overcome the deficiencies.
> 
> Thanks then for your insight into the matter :-) I want to get it  
> right from the outset. In my experience poor normalization  
> necessitates layers of fixes down the road.
> 
> Here is the basic schema from before. I welcome any direct suggestions  
> about the schema:
> 
>>> create table "Accounting Entry"
>>> (
>>>   ID
>>> , "Transaction ID"  --> "Accounting Transaction".ID
>>> , "Account ID"  --> Account.ID
>>> , Amount
>>> , Operator  -- plus or minus
>>> )
>>> ;
>>> create table "Accounting Transaction"
>>> (
>>>   ID
>>> , Date
>>> , Description
>>> )
>>> ;
>>> create table Account
>>> (
>>>   ID
>>> , Description
>>> )
>>> ;
> 
> Thanks,
> Tom
> BareFeet
> 
>   --
> Comparison of SQLite GUI tools:
> http://www.tandb.com.au/sqlite/compare/?ml
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double entry bookkeeping

2009-02-19 Thread John Stanton
"Double Entry" book keeping is actually a misnomer.  A transaction is 
very likely to have more than two entries.  The "transaction ID" I 
mentioned is a locally generated reference ID to tag the particular 
complete transaction, which might include postings to a bank account, 
sales, cost of sales, inventory etc.  It is not conventionally used, but 
confers great advantages when it is implemented.  It lets you locate the 
balancing entries for any accounting transaction.  In other words given 
any element of an accounting transaction you can complete the T Account.

Dating each accounting transaction aids in creating a unique ID. 
Normalizing it out is not necessarily an advantage.

It has been my observation that accounting data is generally stored in 
poorly thought-out data structures which then require layer upon layer 
of logic to overcome the deficiencies.



BareFeet wrote:
> Hi John,
> 
> Thanks for the input.
> 
>> In general you need for one transaction -
>> General Ledger account
>> Date
>> Reference ID
>> Cross Reference
>> Amount
>> Optional narrative
>>
>> The Chart of Accounts defines full set of accounts and the details of
>> each account.
> 
> OK, that seems very similar to the schema I offered before (appended  
> below), with some different names:
> 
> General Ledger account  = Account ID
> Date= Date
> Reference ID
> Cross Reference = Transaction ID
> Amount  = Amount
> Optional narrative  = Description
> 
> I assume that your Reference ID is for references such as cheque  
> number, invoice number etc, but linked outside of this basic schema.
> 
> You seem to suggest storing the date for each entry within a  
> transaction. Would it not be better to store it once only for the  
> transaction as a whole, since each entry will share that same date?
> 
>> If each set of entries is an atomic transaction which balances  
>> debits and credits then the ledger is always balanced.
> 
> Yes, I had concluded the same, since it's kind of the definition of  
> double entry bookkeeping. I had wondered, though, since most  
> transactions involve two entries for the same amount but for two  
> different accounts, whether storing that same amount twice would not  
> only be redundant but actually de-normalized from an SQL point of  
> view. Is it preferable to have a schema where the amount is only  
> stored once in those circumstances?
> 
>> Having a separate index linking balancing transactions makes  
>> displaying
>> accounting activity much clearer.  You could call the cross  
>> reference a
>> "journal ID" or a similar name, since it has some functional  
>> similarity
>> to the traditional bookkeeper's general journal.
> 
> Your cross reference or "Journal ID" seems the same as the  
> "Transaction ID" I included in my earlier schema.
> 
>> Make sure that you insert each set of accounting transactions as one
>> Sqlite transaction which rolls back should it fail to successfully
>> complete, thus maintaining the balance of the ledger.
> 
> Good tip :-)
> 
>>> I found this:
>>> Modeling Business Rules: Data Driven Business Rules
>>> http://www.tdan.com/view-articles/5227
>>> which, seems similar to the route I was taking. If I'm interpreting  
>>> it
>>> correctly, the TDAN article suggests a relationship between  
>>> Accounting
>>> Entry and Accounting Transaction:
>>>
>>> create table "Accounting Entry"
>>> (
>>>   ID
>>> , "Transaction ID"  --> "Accounting Transaction".ID
>>> , "Account ID"  --> Account.ID
>>> , Amount
>>> , Operator  -- plus or minus
>>> )
>>> ;
>>> create table "Accounting Transaction"
>>> (
>>>   ID
>>> , Date
>>> , Description
>>> )
>>> ;
>>> create table Account
>>> (
>>>   ID
>>> , Description
>>> )
>>> ;
>>>
> 
> Thanks,
> Tom
> BareFeet
> 
>   --
> Comparison of SQLite GUI tools:
> http://www.tandb.com.au/sqlite/compare/?ml
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Double entry bookkeeping

2009-02-19 Thread John Stanton
In general you need for one transaction -
General Ledger account
Date
Reference ID
Cross Reference
Amount
Optional narrative

The Chart of Accounts defines full set of accounts and the details of 
each account.

The G/L A/C indicates whether it is an asset or liability account (plus 
or minus) and how it is grouped in reports; the a/c, date and reference 
ID form a unique ID; and the "cross reference" can be a key on an index 
which ties together balancing entries in each "T Account".  If each set 
of entries is an atomic transaction which balances debits and credits 
then the ledger is always balanced.

Having a separate index linking balancing transactions makes displaying 
accounting activity much clearer.  You could call the cross reference a 
"journal ID" or a similar name, since it has some functional similarity 
to the traditional bookkeeper's general journal.

Make sure that you insert each set of accounting transactions as one 
Sqlite transaction which rolls back should it fail to successfully 
complete, thus maintaining the balance of the ledger.
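
A bare sketch of that atomic posting; entry_sql here stands for the INSERT
statements of one balanced set, and the table and column names are left to
the caller:

#include <sqlite3.h>

/* Post one balanced set of ledger entries; roll back if any insert fails. */
int post_entries(sqlite3 *db, const char **entry_sql, int n)
{
    int i, rc = sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    for (i = 0; i < n; i++) {
        rc = sqlite3_exec(db, entry_sql[i], NULL, NULL, NULL);
        if (rc != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
            return rc;              /* ledger left exactly as it was */
        }
    }
    return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}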

BareFeet wrote:
> Hi Rich,
> 
>>> I'm trying to design a schema for double entry book-keeping. Is there
>>> already a schema available that provides this functionality?
>>   Well, if you're going to re-invent the wheel, take a look at the  
>> source
>> code for gnucash or, better yet, SQL-Ledger.
> 
> Thanks for your reply. I can't find the actual schema for either of  
> those packages. But what I have seen of them indicates that they cater  
> for a much broader solution than what I'm after.
> 
> I'm just looking for the general schema or data model for storing  
> double entry accounting transactions.
> 
> I found this:
> Modeling Business Rules: Data Driven Business Rules
> http://www.tdan.com/view-articles/5227
> which, seems similar to the route I was taking. If I'm interpreting it  
> correctly, the TDAN article suggests a relationship between Accounting  
> Entry and Accounting Transaction:
> 
> create table "Accounting Entry"
> (
> ID
>   , "Transaction ID"  --> "Accounting Transaction".ID
>   , "Account ID"  --> Account.ID
>   , Amount
>   , Operator  -- plus or minus
> )
> ;
> create table "Accounting Transaction"
> (
> ID
>   , Date
>   , Description
> )
> ;
> create table Account
> (
> ID
>   , Description
> )
> ;
> 
> Is tat the general model others have used, experienced, seen,  
> implemented?
> 
> Thanks,
> Tom
> BareFeet
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question on missing Entry Point for Sqlite 3

2009-02-09 Thread John Stanton
"Ypu can lead a horse to water but you cannot make it drink"
.
W Allan Edwards wrote:
> Hey... 640k memory ought to be enough for everyone!
>
> Maybe today, but historically
>
> I thought sqlite was originally designed and developed by Dr. Hipp so he 
> could learn database technology? (SQL database stuff)  Then after years and 
> years and 3 versions later the thing has found it's way into all kinds of 
> devices due to a unique design approach?   As the code turns, it is now 
> meeting up with all kinds of interesting complications to deal with, such as 
> compile configurations to meet demands for individual.  
>
> Although I have enjoyed working on C/C++ compilers and other systems software 
> in the past (professionally), today I integrate business systems 
> (professionally) and I prefer to NOT deal with such issues, and if I do, I 
> like to have a nice document in front of me explaining to me what is going 
> on.  Encapsulation, or hiding your inerds is a really nice thing when it can 
> be done.  Of course you guys have been super helpful so thank you!
>
> Oddly enough, you would think for desktop Linux (Ubuntu) with the average pc 
> carrying 2 gigs of ram today they would compile the dang thing with all 
> configurations on.  : - )  Guess I proved that one wrong.  Maybe the config 
> file should have a feature that checks the amount of RAM on a machine then 
> turns all features if it is over say... 4 megabytes? hehe  Almost seems now 
> you need a sqliter..  but 300 million installs later can't be wrong.
>
> Allan
>
>   
>> Date: Mon, 9 Feb 2009 08:53:12 -0600
>> From: jo...@viacognis.com
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] Question on missing Entry Point for Sqlite 3
>>
>> You do not appear to understand the intent of Sqlite.  It is an open 
>> source software library which can be comiled to siuit unique application 
>> requirements.   Decide what features you want to use and set the 
>> conditional compilation flags accordingly and com;pile it to all the 
>> platforms you intend to use.  You will then get exactly what you want.
>>
>> W Allan Edwards wrote:
>> 
>>> I also emailed the mono project guys about this because they utilize the 
>>> ado.net code that I am having issues with on top of sqlite on linux.
>>>
>>> My what I have gathered, you compile your .net app with the managed only 
>>> libs from the ado.net provider for sqlite.  Then on windows that code 
>>> interops with the sqlite library on windows (and this works).  After you 
>>> move your code over to linux, you run the assemblies under the mono .net 
>>> run time and it loads the sqlite lib available to it on linux.   In fact, 
>>> the app actually works against sqlite until that meta call is called...  So 
>>> I know that the sqlite native lib must be getting loaded by the mono 
>>> runtime on linux successfully.  But of course, it is the one wihtout that 
>>> method, then the runtime explodes and I get that import message.
>>>
>>> FYI
>>>
>>> http://sqlite.phxsoftware.com/
>>>
>>> From: silicon_pla...@hotmail.com
>>> To: sqlite-users@sqlite.org
>>> Subject: RE: [sqlite] Question on missing Entry Point for Sqlite 3
>>> Date: Sun, 8 Feb 2009 22:47:33 -0600
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>  
>>>   
>>>   
 That doesn't make any sense. (you sound like all my old girlfriends)  Did 
 you mean sqlite3.c?  If you define the
 same thing multiple times in the same compilation then the compiler
 usually complains.
 
 
>>> - I downloaded the sqlite-amalgamation-3.6.10.tar.gz
>>> - exploded the archive on my drive
>>> - .configure
>>> - grepped for that define
>>> - changed the sqlite.c file in the most appropriate spot I could find
>>> - make
>>> - sudo make install
>>> - copied by build into where I believed the lib was getting picked up from 
>>> by the mono ado.net provider
>>> - run my ap... CRAP!
>>>
>>> --
>>>
>>>   
>>>   
 I would fix the provider to catch the entrypoint not found exception and
 gracefully deal with the issue.
 
 
>>> I agree with you on this.  I have emailed those guys.
>>>
>>> ---
>>>
>>>   
>>>   
 If not doing that then I would build a private copy of SQLite using
 configure:

   ./configure CFLAGS=-DSQLITE_ENABLE_COLUMN_METADATA   \
   --prefix=/usr/local/customsqlite --disable-tcl
   make install

 
 
>>> I believe I will go with the special compilation, then figure out how to 
>>> for sure make my compile of sqlite get loaded by the ado.net provide on the 
>>> mono side.  
>>>
>>> <<
>>> <<
>>>
>>>
>>>   
>>>   
 Date: Sun, 8 Feb 2009 20:18:31 -0800
 From: rog...@rogerbinns.com
 To: 

Re: [sqlite] Question on missing Entry Point for Sqlite 3

2009-02-09 Thread John Stanton
You do not appear to understand the intent of Sqlite.  It is an open 
source software library which can be compiled to suit unique application 
requirements.  Decide what features you want to use and set the 
conditional compilation flags accordingly and compile it for all the 
platforms you intend to use.  You will then get exactly what you want.

W Allan Edwards wrote:
> I also emailed the mono project guys about this because they utilize the 
> ado.net code that I am having issues with on top of sqlite on linux.
>
> My what I have gathered, you compile your .net app with the managed only libs 
> from the ado.net provider for sqlite.  Then on windows that code interops 
> with the sqlite library on windows (and this works).  After you move your 
> code over to linux, you run the assemblies under the mono .net run time and 
> it loads the sqlite lib available to it on linux.   In fact, the app actually 
> works against sqlite until that meta call is called...  So I know that the 
> sqlite native lib must be getting loaded by the mono runtime on linux 
> successfully.  But of course, it is the one wihtout that method, then the 
> runtime explodes and I get that import message.
>
> FYI
>
> http://sqlite.phxsoftware.com/
>
> From: silicon_pla...@hotmail.com
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Question on missing Entry Point for Sqlite 3
> Date: Sun, 8 Feb 2009 22:47:33 -0600
>
>
>
>
>
>
>
>
>  
>   
>> That doesn't make any sense. (you sound like all my old girlfriends)  Did 
>> you mean sqlite3.c?  If you define the
>> same thing multiple times in the same compilation then the compiler
>> usually complains.
>> 
>
> - I downloaded the sqlite-amalgamation-3.6.10.tar.gz
> - exploded the archive on my drive
> - .configure
> - grepped for that define
> - changed the sqlite.c file in the most appropriate spot I could find
> - make
> - sudo make install
> - copied by build into where I believed the lib was getting picked up from by 
> the mono ado.net provider
> - run my ap... CRAP!
>
> --
>
>   
>> I would fix the provider to catch the entrypoint not found exception and
>> gracefully deal with the issue.
>> 
>
> I agree with you on this.  I have emailed those guys.
>
> ---
>
>   
>> If not doing that then I would build a private copy of SQLite using
>> configure:
>>
>>   ./configure CFLAGS=-DSQLITE_ENABLE_COLUMN_METADATA   \
>>   --prefix=/usr/local/customsqlite --disable-tcl
>>   make install
>>
>> 
>
> I believe I will go with the special compilation, then figure out how to for 
> sure make my compile of sqlite get loaded by the ado.net provide on the mono 
> side.  
>
> <<
> <<
>
>
>   
>> Date: Sun, 8 Feb 2009 20:18:31 -0800
>> From: rog...@rogerbinns.com
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] Question on missing Entry Point for Sqlite 3
>>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> W Allan Edwards wrote:
>> 
>>> By preprocessor do you mean #define?  
>>>   
>> It is a #define if in C source and -D if a compiler argument.
>>
>> 
>>> I did a search in my sqlite.c file for 
>>>
>>> SQLITE_ENABLE_COLUMN_METADATA.. then I #defined above them ALL!
>>>   
>> That doesn't make any sense.  Did you mean sqlite3.c?  If you define the
>> same thing multiple times in the same compilation then the compiler
>> usually complains.
>>
>> 
>>> Instead of telling me this was incorrect... 
>>>   
>> The SQLite documentation and my messages gave the precise terminology
>> and accurate answer.  We have no idea how you are deciding to install
>> SQLite as there are several ways to make use of SQLite.  Mind reading is
>> not available on this list.  Generally you should ask if after
>> researching and trying you don't know.  You'll find this helpful:
>>
>>  http://www.catb.org/~esr/faqs/smart-questions.html
>>
>> 
>>> why not tell me what your approach might be?
>>>   
>> I would fix the provider to catch the entrypoint not found exception and
>> gracefully deal with the issue.
>>
>> If not doing that, then I would make the provider use a static compile
>> of SQLite (ie directly including the amalgamation with whatever flags
>> needed which would then not use the system SQLite and just work.
>>
>> If not doing that then I would find the Linux distribution package and
>> rebuild the package to include the functionality.
>>
>> If not doing that then I would build a private copy of SQLite using
>> configure:
>>
>>   ./configure CFLAGS=-DSQLITE_ENABLE_COLUMN_METADATA   \
>>   --prefix=/usr/local/customsqlite --disable-tcl
>>   make install
>>
>> And if that won't work then I'd dump the provider for a piece of decent
>> software.
>>
>> Roger
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG 

Re: [sqlite] Group by week

2009-02-05 Thread John Stanton
You need a function which gives the week number.  Note that this is 
calculated differently in the USA and Europe, so you need to use the 
correct rules to write the function.
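
For the simple case the built-in %W substitution can do the grouping; a
sketch with invented table and column names, with the caveat that %W follows
its own fixed convention, so US or ISO national rules still need a custom
function:

#include <stdio.h>
#include <sqlite3.h>

/* Group rows by year-and-week using the built-in strftime(). */
void totals_by_week(sqlite3 *db)
{
    const char *sql =
        "SELECT strftime('%Y-%W', sold_on) AS yw, SUM(amount) "
        "FROM sales GROUP BY yw ORDER BY yw";
    sqlite3_stmt *stmt;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s  %.2f\n",
               (const char *)sqlite3_column_text(stmt, 0),
               sqlite3_column_double(stmt, 1));
    sqlite3_finalize(stmt);
}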

Moshe Sharon wrote:
> Hi
>
> How can I select group by week
>
> moshe
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding Custom Column Data Types

2009-01-27 Thread John Stanton
We do this by using the Sqlite "Declared Type" (which can be anything you 
choose) and intercepting it in a layer of software between Sqlite and 
the application language manager.  For example we have a decimal number 
type, a date type etc.  The data is stored as Sqlite TEXT or FLOAT but 
the interface handles transformation into the decimal or date format 
expected by the host language processor.
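
A trimmed-down sketch of that interception layer; the declared-type names
DECIMAL and DATE, and the mapping chosen for them, are just examples:

#include <ctype.h>
#include <sqlite3.h>

/* Case-insensitive prefix test for declared type names. */
static int decl_is(const char *decl, const char *name)
{
    if (!decl) return 0;
    while (*name) {
        if (tolower((unsigned char)*decl++) != tolower((unsigned char)*name++))
            return 0;
    }
    return 1;
}

typedef enum { AS_TEXT, AS_DECIMAL, AS_DATE } app_type;

/* Decide how the application layer should present column i, based on the
   declared type rather than the storage class Sqlite actually used. */
app_type column_app_type(sqlite3_stmt *stmt, int i)
{
    const char *decl = sqlite3_column_decltype(stmt, i);

    if (decl_is(decl, "DECIMAL")) return AS_DECIMAL;  /* TEXT on disk */
    if (decl_is(decl, "DATE"))    return AS_DATE;     /* FLOAT julian day */
    return AS_TEXT;
}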

MikeW wrote:
> Mike McGonagle  writes:
>
>   
>> Hello all,
>>
>> I am in the process of integrating SQLite into a multimedia
>> environment/programming language. One thing that I would like to be
>> able to do is support all the various datatypes in this language, and
>> one in particular is a list of other primitive datatypes. Is there a
>> way to add a "user-type" so that when 'sqlite3_column_type' is called,
>> it would return a unique ID for this "user-type"?
>>
>> I am hoping that I can use this to do some further processing to the
>> stored data to convert it back into the native format the environment
>> uses.
>>
>> Or is there a better way?
>>
>> Thanks,
>>
>> Mike
>>
>> 
>
> Another separate field for the data type ?
>
> MikeW
>
> == gmane padding ===
> == gmane padding ===
> == gmane padding ===
> == gmane padding ===
> == gmane padding ===
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What is the advantage of using native c API over ODBC

2009-01-23 Thread John Stanton
You avoid an unnecessary layer of software and have better control over 
the database.

goldy wrote:
> Hi All,
>
> What are the basic advantage of using SQLite with C API over ODBC.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Date datatype

2009-01-21 Thread John Stanton
Sqlite has no date type.  Use a floating point number and the Sqlite 
date functions.  Add your own to get extra functionality.
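
For instance, storing the julian day as a REAL makes the comparison numeric
and avoids the text-ordering trap in the quoted schema (the 'YYYY/M/D'
strings only compare correctly when zero-padded); the table name here is
invented:

#include <sqlite3.h>

/* Keep the date as a julian day REAL; comparisons are then numeric. */
int setup_and_query(sqlite3 *db)
{
    int rc = sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS envelope2(received REAL)",
        NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(db,
        "INSERT INTO envelope2 VALUES (julianday('2009-01-07'))",
        NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    /* Rows received after 20 Jan 2009, rendered back to text on the way out. */
    return sqlite3_exec(db,
        "SELECT date(received) FROM envelope2 "
        "WHERE received > julianday('2009-01-20')",
        NULL, NULL, NULL);
}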

Paolo Pisati wrote:
> It seems i'm having an hard time with dates in sqlite:
>
> sqlite> .schema
> CREATE TABLE `envelope` (`smtp_id` int(10) NOT NULL, `date` date NOT 
> NULL, `time` time NOT NULL, `mailq_sndr` int(10) NOT NULL, 
> `delivery_sndr` int(10) NOT NULL, `customer_sndr` int(10) NOT NULL, 
> `rpath` varchar(250) NOT NULL, `domain_rcvr` varchar(200) NOT NULL, 
> `user_rcvr` varchar(250) NOT NULL, `size` int(10) NOT NULL, `res` 
> int(10) NOT NULL, `msg` varchar(250) NOT NULL, `ip` int(10) NOT NULL, 
> `vsmtp` varchar(250) NOT NULL, `retries` int(10) NOT NULL);
>
> sqlite> select date from envelope where date > '2009/01/20' limit 3;
> 2009/1/7
> 2009/1/7
> 2009/1/7
>
> why?
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Compressed dump SQLite3 database

2009-01-20 Thread John Stanton
A compression program like gzip is not a "library", it is a free 
standing, open source program.  It has no secrets.

vlema...@ausy.org wrote:
> Hello, thank you and others for your answers.
>
> We are not allowed to use external libraries, because of industrial
> certification constraints. We can use SQLite because we can not do without
> database, but it's a big stuff to validate it according to those
> constraints, so we have to reduce as much as possible the use of such
> third-parties libraries.
>
> The idea from Eric Minbiole to drop index in the copy file and to
> investigate with sqlite_analyzer is for the moment the better one !
>
> For now we are just wondering how to use SQlite facilities, and if it's
> not sufficient, maybe we would think of the opportunity to developp a tiny
> compression algorithm by ourselves, or not... There is no requirement of
> on-the-fly compression / decompression because it's for archive only
> (fortunately !).
>
> ++
>
> Vincent
>
>   
>> Hello!
>>
>> In a message of Monday 19 January 2009 20:22:33 vlema...@ausy.org
>> wrote:
>> 
>>> It is a requirement that the size of those copies being as small as
>>> possible, without having to perform an external compression.
>>>   
>> You can using internal data compression. By compressing a few big fields
>> you can extremly reduce
>> size of your database. zlib on-the-fly compression is good.
>>
>> Best regards, Alexey.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread John Stanton
Just use something like gzip to make a compressed version of the 
database for storage.  You would most likely save up to 80% of the 
space.  The .gz files are an industry standard for compression.

vlema...@ausy.org wrote:
> Hello,
>
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?
>
> Vincent
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is this legal SQL?

2009-01-12 Thread John Stanton
You can only bind data elements.  If you want to change table names 
you have to recompile the SQL (think about it).
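
A sketch of that recompile step; the table name is pasted into the SQL text
rather than bound, so the caller is assumed to pass only a name it already
trusts:

#include <stdio.h>
#include <sqlite3.h>

/* Build the CREATE TABLE text for each table name, then prepare it;
   identifiers cannot be bound, so the name must be trusted. */
int create_xy_table(sqlite3 *db, const char *trusted_name)
{
    char sql[256];
    sqlite3_stmt *stmt;
    int rc;

    snprintf(sql, sizeof sql,
             "CREATE TABLE \"%s\" (x double, y double)", trusted_name);

    rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    rc = sqlite3_step(stmt);              /* runs the DDL */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}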

Mike McGonagle wrote:
> Hello all,
>
> I am working on connecting SQLite up to another programming language,
> and had a question about how SQLite (or SQL in general) would handle
> this...
>
> I want to be able to create some tables dynamically (same structure,
> different name), and I thought this might work...
>
> CREATE TABLE ?1 (x double, y double);
>
> And then I would assign '?1' to a string...
>
> BUT, it would appear that SQLite does not like this, because when I
> try to 'prepare' the SQL, it complains about "error near '?1': syntax
> error"...
>
> Is there something that I can do, short of generating the SQL
> dynamically (ie allow the use of placeholders for the table name)?
>
> Thanks,
>
> Mike
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using Eclipse on Windows XP

2009-01-11 Thread John Stanton
Alternatively you can install the Dev-CPP IDE, which automatically activates 
MinGW, and compile Sqlite.

schachtobi wrote:
> Hi,
> 
> what you need are first Eclipse, the C Development Tools (CDT) and a
> compiler for Example the MinGW
> 
> Here are the links for the tools I used for my project.
> 
> http://getamosaic.sourceforge.net/links.php
> 
> regards,
> Tobias
> 
> Alex Krzos schrieb:
>> Is there an easy way to develop on sqlite using eclipse for C/C++ on a
>> Windows XP machine? I have eclipse open on a source directory but can not
>> compile.  Is there a possible faq or wiki on this?  Thanks.
>>
>> Alex
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Impossible to declare field type BIGINT PRIMARY KEY

2008-12-16 Thread John Stanton
Where did you get your pre-occupation with BIGINT?  Sqlite handles 
INTEGERS and makes them up to 64 bits as necessary.  An INTEGER primary 
key will autoincrement.

Sqlite lets you introduce a type BIGINT as a declared type, but makes 
its own decision as to the underlying type.
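
A small illustrative fragment in C (table and column names invented
here): an INTEGER PRIMARY KEY is the 64-bit rowid, so values far beyond
32 bits round-trip with no BIGINT declaration, and omitting the key lets
Sqlite assign the next rowid itself.

#include <stdio.h>
#include <sqlite3.h>

static void rowid_demo(sqlite3 *db)
{
  sqlite3_stmt *stmt = NULL;

  sqlite3_exec(db, "CREATE TABLE t(id INTEGER PRIMARY KEY, txt TEXT);",
               NULL, NULL, NULL);

  /* Explicit id larger than 32 bits ... */
  sqlite3_exec(db, "INSERT INTO t(id, txt) VALUES(5000000000, 'big');",
               NULL, NULL, NULL);

  /* ... and no id at all: Sqlite assigns the next rowid. */
  sqlite3_exec(db, "INSERT INTO t(txt) VALUES('auto');", NULL, NULL, NULL);
  printf("assigned rowid: %lld\n",
         (long long)sqlite3_last_insert_rowid(db));

  /* Read the ids back as 64-bit integers. */
  if( sqlite3_prepare_v2(db, "SELECT id FROM t;", -1, &stmt, NULL)==SQLITE_OK ){
    while( sqlite3_step(stmt)==SQLITE_ROW ){
      printf("id = %lld\n", (long long)sqlite3_column_int64(stmt, 0));
    }
  }
  sqlite3_finalize(stmt);
}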

Nathan Catlow wrote:
> Please understand that the problem boils down to this:
>
> Why can't I CREATE TABLE t(i BIGINT PRIMARY KEY) and have it autoincrement?
>
> What internal mechanisms am I breaking by defining it like this?
>
> This would make the "reverse trip" easier surely? Well it would for me  
> anyway (It allows me to easily import into mysql without writing an  
> "edit program", mysql understands TEXT too).
>
> My patch works for me, so i can go with that, I'm just unsure about  
> the reasonings behind the autoincrement restriction. Which is just a  
> syntax parsing issue in the end.
>
> Or am I demanding too much?
>
> regards,
>
> nat
>
> Quoting "John Stanton" <jo...@viacognis.com>:
>
>   
>> You are still missing something.  Apply some deeper thought to the
>> concepts behind Sqlite and the elegance will become clear.  At run time
>> the Sqlite programmer has access to the declared type and the actual
>> storage type of the data.  An API layer between the Sqlite API and the
>> application can resolve any data format issues.
>>
>> Note that Sqlite maps to commonly used scripting systems seamlessly.
>> Transferring data from somethiong like Mysql/DB2/Oracle/PostreSQL etc is
>> very simple.  Making the reverse trip may not be so simple and in
>> general would require some form of edit program to transform TEXT
>> columns into the typed columns of say Mysql.  If such a transfer is
>> important to you an application software layer can enforce the decclared
>> types of Sqlite.  Otherwise use Mysql or preferably PostreSQL.
>>
>> Finally, Sqlite is simple and easy to use, hence the "lite".  Let it
>> make life easy for you and don't fight it.  You will be rewarded handsomely.
>>
>> Nathan Catlow wrote:
>> 
>>> Quoting "John Stanton" <jo...@viacognis.com>:
>>>
>>>
>>>   
>>>> You have not grasped  the fundamental concept of typing used by Sqlite.
>>>> It implements manifest typeing in the manner of scripting systems like
>>>> Javascript etc.  It has a clever feature which permits you to declare a
>>>> type as anything you like and parses that name to decide on the
>>>> underlying type as basically either text or numeric.  It  decides at run
>>>> time how to store the  data.
>>>>
>>>> 
>>> Yes, I understand this, sqlite's lovelyness.
>>>
>>>
>>>   
>>>> The declared type in Sqlite is  in essence an assist for the programmer
>>>> and is useful at application level to determine how to handle a column.
>>>> For example a floating point number declared as DATE would be processed
>>>> differently from one declared as ISOTOPE_COUNT.
>>>>
>>>> 
>>> You've hit the nail on the head, I am trying to do exactly that!
>>> Providing an assist for an application level by explicitly declaring
>>> PRIMARY KEY BIGINT. This can then be transferred safely to another
>>> database (mysql in this example) even by doing the following;
>>>
>>> $ sqlite3 sqlitedb .d > out.sql
>>> shell> mysql mysqldb < out.sql
>>>
>>> All the autoincrement values can now be safely carried across because
>>> I could provide the assistance with an explicit BIGINT (This is
>>> already possible but *without* the autoincrement feature)
>>>
>>> I fail to understand the limiting of autoincrement to just INTEGER
>>> where there is no difference internally to sqlite between INTEGER and
>>> BIGINT etc. It is just an unnecessary restriction.
>>>
>>> Consider this 2 line patch which works against sqlite-3.6.2, it could
>>> be extended to all the other (external) integer types with no adverse
>>> effect. Am I missing something here?
>>>
>>>--- build.c.old 2008-12-14 20:53:19.0 +
>>>+++ build.c 2008-12-14 16:29:03.0 +
>>>@@ -1165,7 +1165,7 @@
>>>   if( iCol>=0 && iCol<pTab->nCol ){
>>> zType = pTab->aCol[iCol].zType;
>>>   }
>>>-  if( zType && sqlite3StrICmp(zType, "INTEGER")==0
>>>+  if( zType && (sqlite3StrICmp(zTy

Re: [sqlite] Impossible to declare field type BIGINT PRIMARY KEY

2008-12-15 Thread John Stanton
You are still missing something.  Apply some deeper thought to the 
concepts behind Sqlite and the elegance will become clear.  At run time 
the Sqlite programmer has access to the declared type and the actual 
storage type of the data.  An API layer between the Sqlite API and the 
application can resolve any data format issues.

Note that Sqlite maps to commonly used scripting systems seamlessly.  
Transferring data from something like Mysql/DB2/Oracle/PostgreSQL etc is 
very simple.  Making the reverse trip may not be so simple and in 
general would require some form of edit program to transform TEXT 
columns into the typed columns of, say, Mysql.  If such a transfer is 
important to you, an application software layer can enforce the declared 
types of Sqlite.  Otherwise use Mysql or preferably PostgreSQL.

Finally, Sqlite is simple and easy to use, hence the "lite".  Let it 
make life easy for you and don't fight it.  You will be rewarded handsomely.

Nathan Catlow wrote:
> Quoting "John Stanton" <jo...@viacognis.com>:
>
>   
>> You have not grasped  the fundamental concept of typing used by Sqlite.
>> It implements manifest typeing in the manner of scripting systems like
>> Javascript etc.  It has a clever feature which permits you to declare a
>> type as anything you like and parses that name to decide on the
>> underlying type as basically either text or numeric.  It  decides at run
>> time how to store the  data.
>> 
>
> Yes, I understand this, sqlite's lovelyness.
>
>   
>> The declared type in Sqlite is  in essence an assist for the programmer
>> and is useful at application level to determine how to handle a column.
>> For example a floating point number declared as DATE would be processed
>> differently from one declared as ISOTOPE_COUNT.
>> 
>
> You've hit the nail on the head, I am trying to do exactly that!  
> Providing an assist for an application level by explicitly declaring  
> PRIMARY KEY BIGINT. This can then be transferred safely to another  
> database (mysql in this example) even by doing the following;
>
> $ sqlite3 sqlitedb .d > out.sql
> shell> mysql mysqldb < out.sql
>
> All the autoincrement values can now be safely carried across because  
> I could provide the assistance with an explicit BIGINT (This is  
> already possible but *without* the autoincrement feature)
>
> I fail to understand the limiting of autoincrement to just INTEGER  
> where there is no difference internally to sqlite between INTEGER and  
> BIGINT etc. It is just an unnecessary restriction.
>
> Consider this 2 line patch which works against sqlite-3.6.2, it could  
> be extended to all the other (external) integer types with no adverse  
> effect. Am I missing something here?
>
>--- build.c.old 2008-12-14 20:53:19.0 +
>+++ build.c 2008-12-14 16:29:03.0 +
>@@ -1165,7 +1165,7 @@
>   if( iCol>=0 && iCol<pTab->nCol ){
> zType = pTab->aCol[iCol].zType;
>   }
>-  if( zType && sqlite3StrICmp(zType, "INTEGER")==0
>+  if( zType && (sqlite3StrICmp(zType, "INTEGER")==0 ||
> sqlite3StrICmp(zType, "BIGINT")==0)
> && sortOrder==SQLITE_SO_ASC ){
> pTab->iPKey = iCol;
> pTab->keyConf = onError;
>@@ -1174,7 +1174,7 @@
>   }else if( autoInc ){
> #ifndef SQLITE_OMIT_AUTOINCREMENT
> sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an "
>-   "INTEGER PRIMARY KEY");
>+   "INTEGER or BIGINT PRIMARY KEY");
> #endif
>   }else{
> sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, 0,  
> sortOrder, 0);
>
> Thank you for all your patience.
>
> regards,
>
> Nathan
>
>   
>> Ypu can rely on Sqlite storing  large integer value as 64 bits and a
>> small one using less space.  The actual storage form for a particular
>> column may vary from row to row according to decisions made by Sqlite at
>> run time after it analyzes the data value.
>> JS
>>
>> Hi,
>> 
>>> I am perfectly aware of the size of INTEGERS on differing platforms
>>> and of sqlite's typeless nature.
>>>
>>> Can you tell me why you can even specify BIGINT to sqlite then? Or
>>> SMALLINT or any other datatype? What is the difference between INTEGER
>>> and any other type? none of course!
>>>
>>> sqlite> CREATE TABLE t(i SMALLINT, t TEXT);
>>> sqlite> INSERT INTO t(i,t) VALUES(, 'test');
>>> sqlite> SELECT * FROM t;
>>> |test
>>>
>>&

Re: [sqlite] Impossible to declare field type BIGINT PRIMARY KEY

2008-12-15 Thread John Stanton
You have not grasped the fundamental concept of typing used by Sqlite.  
It implements manifest typing in the manner of scripting systems like 
Javascript etc.  It has a clever feature which permits you to declare a 
type as anything you like and parses that name to decide on the 
underlying type as basically either text or numeric.  It decides at run 
time how to store the data.

The declared type in Sqlite is  in essence an assist for the programmer 
and is useful at application level to determine how to handle a column.  
For example a floating point number declared as DATE would be processed 
differently from one declared as ISOTOPE_COUNT.

You can rely on Sqlite storing a large integer value as 64 bits and a 
small one using less space.  The actual storage form for a particular 
column may vary from row to row according to decisions made by Sqlite at 
run time after it analyzes the data value.
JS
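
A minimal C sketch of that run-time view (the query and function are
only illustrative): the declared type comes from the schema, the storage
class is whatever Sqlite chose for the value in that particular row.

#include <stdio.h>
#include <sqlite3.h>

/* For each row of a query, print column 0's declared type next to the
** storage class Sqlite actually used for that value. */
static void show_types(sqlite3 *db, const char *sql)
{
  static const char *klass[] =
      { "?", "INTEGER", "FLOAT", "TEXT", "BLOB", "NULL" };
  sqlite3_stmt *stmt = NULL;

  if( sqlite3_prepare_v2(db, sql, -1, &stmt, NULL)!=SQLITE_OK ) return;

  while( sqlite3_step(stmt)==SQLITE_ROW ){
    const char *decl = sqlite3_column_decltype(stmt, 0);  /* may be NULL */
    int storage = sqlite3_column_type(stmt, 0); /* SQLITE_INTEGER..SQLITE_NULL */
    printf("declared '%s', stored as %s\n",
           decl ? decl : "(none)", klass[storage]);
  }
  sqlite3_finalize(stmt);
}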
 
Hi,
> I am perfectly aware of the size of INTEGERS on differing platforms  
> and of sqlite's typeless nature.
>
> Can you tell me why you can even specify BIGINT to sqlite then? Or  
> SMALLINT or any other datatype? What is the difference between INTEGER  
> and any other type? none of course!
>
> sqlite> CREATE TABLE t(i SMALLINT, t TEXT);
> sqlite> INSERT INTO t(i,t) VALUES(, 'test');
> sqlite> SELECT * FROM t;
> |test
>
> But there is a difference, autoincrement ONLY works with "INTEGER",  
> why? sqlite quite rightly allows the code above due to all types being  
> treated the same, but all of a sudden starts getting all fussy when I  
> want to autoincrement a BIGINT. If ROWID == INTEGER then it must match  
> the others too ROWID == INTEGER == BIGINT == SMALLINT sqlite just  
> shouldn't care.
>
> The point about "cross platform SQL" and using a library like libdbi,  
> is that it tries to ensure that a particular length of data can fit in  
> all makes of SQL.
>
> My code drives the databases not the other way around, so if *I*  
> decide an integer is only 32bits, then I don't give a damn if  
> sqlite/oracle or whatever wastes space be putting it in a 64bit space.  
> The ultimate goal is running the same code on all DB engines. The goal  
> is not to take an arbitrary database and expect libdbi to read it  
> efficiently or even correctly.
>
> The only thing I have no control over is when using the autoincrement  
> feature, as this is driven by sqlite, and will always attempt to use  
> the full 64bit space. I need to know this to ensure the correct memory  
> is allocated.
>
> I completely accept your point about assumption, but there has to be a  
> compromise on allocating 64bits everywhere, which is inefficient on  
> small systems. I want to raise this point with the libdbi developers.  
> Their code is broken, I know that, but an acceptable compromise is  
> nearly there. I just need to determine that a field is a  
> autoincrementing PRIMARY KEY or be able to explicitly state BIGINT  
> PRIMARY KEY.
>
> I have one question, lets forget about the argument about types, it is  
> a red herring.
>
> Is there a way through sqlite API or PRAGMA to determine a field is an  
> autoincrementing INTEGER PRIMARY KEY? The only way i've found is to  
> parse the create table statement from sqlite_master which is cludgy.
>
> PRAGMA table_info(t);
> PRAGMA index_list(t);
>
> Both those give me no love.
>
> I suppose if i wrote my own autoincrement all this would go away as I  
> could ensure its length.
>
> regards,
>
> Nathan
>
>
>
> 
> This message was sent using IMP, the Internet Messaging Program.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding data with periods

2008-12-14 Thread John Stanton
Single quotes are SQL, as chosen by the designers.  It is good practice 
to stick to the standard rather than rely on extensions which vary from 
implementation to implementation.

Mohd Radzi Ibrahim wrote:
> It seems to works either way.
>
> I'm just wondering is there any hidden reason that single quote is 
> preferred? Portability?
> Or is double-qoute has some kind of special meaning that we should use it 
> for that special purpose?
>
>
> -radzi-
>
>
> - Original Message - 
> From: "P Kishor" 
> To: "General Discussion of SQLite Database" 
> Sent: Monday, December 15, 2008 1:32 PM
> Subject: Re: [sqlite] Adding data with periods
>
>
>   
>> On 12/14/08, aditya siram  wrote:
>> 
>>> Thanks a lot. The issue has been fixed with:
>>>  INSERT INTO TEST_TABLE(CONTENTS) VALUES ("Hello. World!");
>>>
>>>   
>> Use single quotes to delimit text, not double quotes.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding data with periods

2008-12-14 Thread John Stanton
Note that literal delimiters in SQL are single quotes, e.g. 'This is an 
SQL literal'.

It is good practice with Sqlite to use bound variables.  You avoid 
possible SQL injection attacks and limit sensitivity to data content.
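
A minimal sketch in C, using the table from the example below; the text
goes in as a parameter, so no escaping is needed at all:

#include <sqlite3.h>

/* Insert one text value through a parameter instead of pasting it into
** the SQL.  Periods, quotes and anything else in 'contents' pass
** through untouched, and SQL injection is not possible. */
static int insert_contents(sqlite3 *db, const char *contents)
{
  sqlite3_stmt *stmt = NULL;
  int rc = sqlite3_prepare_v2(db,
      "INSERT INTO test_table(Contents) VALUES(?1);", -1, &stmt, NULL);
  if( rc!=SQLITE_OK ) return rc;

  sqlite3_bind_text(stmt, 1, contents, -1, SQLITE_TRANSIENT);
  rc = sqlite3_step(stmt);               /* SQLITE_DONE on success */
  sqlite3_finalize(stmt);
  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}

insert_contents(db, "hello . world") then stores the string exactly as
given.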

aditya siram wrote:
> Hi all,
> I'm having trouble adding data with period characters in it. I tries to
> escape the period with a `'` but that didn' t seem to work. Here is an
> example interaction:
>
> sqlite> create table test_table ("Contents" varchar);
> sqlite> insert into test_table "hello . world";
> SQL error: near ""hello . world"": syntax error
> sqlite> insert into test_table "hello '. world";
> SQL error: near ""hello '. world"": syntax error
>
> Thanks ...
> deech
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite replication

2008-12-11 Thread John Stanton
It would be a relatively minor job to make Sqlite replicate itself.  The 
partitioning in its design separates the i/o level.

Wade Williams wrote:
> I'm looking for an honest assessment from someone that may have made  
> this decision in the past.
>
> I'm considering using an embedded database for an upcoming  
> application.  Operation rate is high 20,000-60,000 per day. (Those  
> will mostly be selects, but some smaller percentage will be inserts).
>
> Our choices appear to be SQLite or Berkley DB.  An RDBMS isn't really  
> an option due to the administrative cost.
>
> My first inclination was to use SQLite.  From what I've seen of the  
> performance numbers, it should be able to support that rate without  
> much trouble.
>
> However, a key feature is disaster recovery.  If the primary machine  
> goes down we've got to quickly switch to another machine (quickly  
> meaning within minutes if not seconds).
>
> In my research it appears SQLite may not be a good option, since the  
> only replication appears to be "lock the database and copy the file to  
> the new machine."  Berkeley DB seems to have the advantage of having  
> replication built-in.  However, I have no idea how useful the  
> replication is and of course the API is much more inscrutable.  I've  
> also certainly heard all the Berkley DB corruption horror stories.
>
> I'm OK with stepping off the deep end into Berkeley DB, but I'd prefer  
> SQLite.  However, I'm certainly not looking to shoot myself in the foot.
>
> I'd appreciate input from anyone on this subject, especially tales  
> from replication projects.
>
> Thanks,
>
> Wade
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite database to postgresql

2008-12-09 Thread John Stanton
Dump the database as SQL, then execute the SQL with PostgreSQL.

Manisha De Silva wrote:
> Now that I have a textfile which has a sqlite database can this be added to a
> postgresql database? If so how?
> This message (and any associated files) is intended only for the use of the 
> individual to which it is addressed and may contain information that is 
> confidential, subject to copyright protection. If you are not the intended 
> recipient you are hereby notified that any dissemination, copying or 
> distribution of this message, and any associated files, is strictly 
> prohibited. If you have received this message in error, please notify us 
> immediately by replying to the message and deleting it from your computer. 
> Messages sent to and from us may be monitored.
> We do not accept responsibility for any errors or omissions that are present 
> in this message, or any attachment, that have arisen as a result of e-mail 
> transmission. Any views or opinions presented are solely those of the author 
> and do not necessarily represent those of the company.
> ePulse Limited
> Registered office: 4 Crown Place, London, EC2A 4BT 
> Registered in England No 3534157
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] data dump in sqlite

2008-12-09 Thread John Stanton
Open up a command prompt window.  Then key

sqlite3 <database file name>

That will bring up command-line access to your Sqlite database.

If you want a GUI interface there is a Firefox plugin which interfaces 
to Sqlite.  Try that or one of the very many Sqlite GUI interfaces like 
Sqlitespy.

Manisha De Silva wrote:
> Hi,
> 
> This is what I was trying to do :
> 
> I double clicked on sqlite3.exe
> Then it opens a sqlite prompt 
> 
> Now I need to a do a data dump of a file which resides in c:\spool\jsm.db
> where jsm.db is the sqlite database.
> 
> How can this be done?
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Griggs, Donald
> Sent: 09 December 2008 15:49
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] data dump in sqlite
> 
> I'm not certain I understand your question, but here goes:
> 
> Sqlite3.exe is a command-line interface, so you don't really "browse" at
> all.
> 
> However, you can specify the path and name of a database on the command
> line when you invoke sqlite3, e.g. (In windows):
> 
> Sqlite3 "c:\NiceDirectory_Without_spaces\MyDatabase.db" 
> 
> The quotes are usually optional, but not when your path contains spaces
> 
> ALSO, you may want to see the ATTACH command in the documentation.
> 
> Once inside sqlite3, the .databases command should produce SOME output,
> even if you haven't opened an existing database.   
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Manisha De Silva
> Sent: Tuesday, December 09, 2008 10:36 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] data dump in sqlite
> 
> I downloaded sqlite3.exe
> 
> I typed in .databases and it doesn't show any database. Actually I have
> sqlite db in C:\Program Files\Jabber Inc\Jabber XCP\var\spool and I want
> this to dump the data to a text file. 
> 
> I typed in sqlite> .databases
> 
> How can I browse to the specific folder name which contains the jsm.db
> sqlite db file
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> This message (and any associated files) is intended only for the use of the 
> individual to which it is addressed and may contain information that is 
> confidential, subject to copyright protection. If you are not the intended 
> recipient you are hereby notified that any dissemination, copying or 
> distribution of this message, and any associated files, is strictly 
> prohibited. If you have received this message in error, please notify us 
> immediately by replying to the message and deleting it from your computer. 
> Messages sent to and from us may be monitored.
> We do not accept responsibility for any errors or omissions that are present 
> in this message, or any attachment, that have arisen as a result of e-mail 
> transmission. Any views or opinions presented are solely those of the author 
> and do not necessarily represent those of the company.
> ePulse Limited
> Registered office: 4 Crown Place, London, EC2A 4BT 
> Registered in England No 3534157
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deactivation of Manifest Typing

2008-12-08 Thread John Stanton
In one of our Sqlite applications the gratuitous type changes made by 
Sqlite were incompatible with our application, and it took only a very 
few patches to the Sqlite source to disable the type detection and 
conversion code.

Clay Dowling wrote:
> Simon de Hartog wrote:
> 
>>SQLite has a feature called Manifest typing. As with many features of
>>software I run, I wonder whether this feature can be disabled. I prefer
>>to use static typing in my databases to prevent stored values not being
>>what my code (C++) expects them to be. So in short: is it possible to
>>use static typing instead of manifest typing in SQLite?
> 
> If you access the database strictly through your software, this is 
> trivially easy if you are using a strongly typed language like Pascal or 
> C.  Simply force the data to be of the appropriate type in your program 
> and it will be stored as the appropriate type in your database.
> 
> If you are allowing direct access to the database via SQL, you can 
> probably assume that any user smart enough to write their own 
> insert/update queries can probably also work out the correct data 
> types.  If not, you may wish to reconsider allowing direct SQL access.
> 
> Clay
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite performance woe

2008-12-02 Thread John Stanton
To efficiently execute the SQL "SELECT * FROM mytab WHERE myid = '1234'" 
you must have an index on the "myid" column.  Each table has a built-in 
index which uses the rowid as a key, and that is how individual rows are 
accessed.

A "primary key" is a column which is indexed and which has a unique 
value; duplicates are not allowed.

You may have many indices on a table, and an index may have as its key 
values from more than one column.
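
A short sketch in C, using the names from the example above: create the
index, then let EXPLAIN QUERY PLAN confirm that the lookup will use it.

#include <stdio.h>
#include <sqlite3.h>

static void add_index_and_check(sqlite3 *db)
{
  sqlite3_stmt *stmt = NULL;

  sqlite3_exec(db,
      "CREATE INDEX IF NOT EXISTS idx_mytab_myid ON mytab(myid);",
      NULL, NULL, NULL);

  if( sqlite3_prepare_v2(db,
        "EXPLAIN QUERY PLAN SELECT * FROM mytab WHERE myid = '1234';",
        -1, &stmt, NULL)==SQLITE_OK ){
    while( sqlite3_step(stmt)==SQLITE_ROW ){
      /* The last column of each row describes the chosen strategy. */
      int last = sqlite3_column_count(stmt) - 1;
      printf("%s\n", (const char*)sqlite3_column_text(stmt, last));
    }
  }
  sqlite3_finalize(stmt);
}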

Brown, Daniel wrote:
> I maybe confused but indices sound similar to what I understand primary
> keys do, I already have primary keys on each table.  Unless I'm mistaken
> as to what primary keys are?  From your explanation I guess I'm slightly
> confused about the difference in primary keys and indices and that I
> need to implement indices to speed up my queries. Are there any general
> guidelines for creating indices?  Is it as simple as creating an indice
> per primary key in a table?  
> 
> How do I interpret the output from EXPLAIN QUERY PLAN?
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of John Stanton
> Sent: Tuesday, December 02, 2008 2:20 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] SQLite performance woe
> 
> Databases work by using indices.  A search for a row in a table of 1 
> million rows goes from having to do as many as a million row reads to a 
> handful of index node accesses, from minutes to milliseconds.  Note that
> 
> Sqlite is "lite" and only uses one index at a time so thoughtful schema 
> design and query layout is necessary for optimal results.
> 
> Brown, Daniel wrote:
> 
>>Hello Donald & Others,
>>
>>I have primary keys set for each of the table but no indicies (that I
> 
> am
> 
>>aware of) as I simply converted the data from our existing database
>>system which does not support indicies.  As my current system only
>>implements primary keys I have no real experience dealing with
> 
> indicies,
> 
>>are they like some sort of extra key column?  Are there any guides to
>>optimising SQLite performance with indicies?  
>>
>>I tried EXPLAIN QUERY PLAN for the following:
>>"SELECT * FROM test_item INNER JOIN test_container ON
>>test_item.container_code = test_container.container_code"
>>
>>The output was:
>>0|0|TABLE test_item
>>1|1|TABLE test_container
>>
>>Is there a guide I can check for understanding this output?
>>
>>Daniel
>>
>>-Original Message-
>>From: [EMAIL PROTECTED]
>>[mailto:[EMAIL PROTECTED] On Behalf Of Griggs, Donald
>>Sent: Tuesday, December 02, 2008 9:52 AM
>>To: General Discussion of SQLite Database
>>Subject: Re: [sqlite] SQLite performance woe
>>
>>Hi Daniel,
>>
>>Regarding:
>>   "What I'd like to know is  if there is anything we can do with
>>our queries, SQLite set-up or library configuration to improve the
>>speed? " 
>>
>>Unless indicies would be inappropriate, did you mention whether you've
>>defined any indicies and does EXPLAIN QUERY PLAN show that the proper
>>index is being used?
>>
>>___
>>sqlite-users mailing list
>>sqlite-users@sqlite.org
>>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>___
>>sqlite-users mailing list
>>sqlite-users@sqlite.org
>>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite performance woe

2008-12-02 Thread John Stanton
Databases work by using indices.  A search for a row in a table of 1 
million rows goes from having to do as many as a million row reads to a 
handful of index node accesses, from minutes to milliseconds.  Note that 
Sqlite is "lite" and only uses one index at a time so thoughtful schema 
design and query layout is necessary for optimal results.

Brown, Daniel wrote:
> Hello Donald & Others,
> 
> I have primary keys set for each of the table but no indicies (that I am
> aware of) as I simply converted the data from our existing database
> system which does not support indicies.  As my current system only
> implements primary keys I have no real experience dealing with indicies,
> are they like some sort of extra key column?  Are there any guides to
> optimising SQLite performance with indicies?  
> 
> I tried EXPLAIN QUERY PLAN for the following:
> "SELECT * FROM test_item INNER JOIN test_container ON
> test_item.container_code = test_container.container_code"
> 
> The output was:
> 0|0|TABLE test_item
> 1|1|TABLE test_container
> 
> Is there a guide I can check for understanding this output?
> 
> Daniel
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Griggs, Donald
> Sent: Tuesday, December 02, 2008 9:52 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] SQLite performance woe
> 
> Hi Daniel,
> 
> Regarding:
>"What I'd like to know is  if there is anything we can do with
> our queries, SQLite set-up or library configuration to improve the
> speed? " 
> 
> Unless indicies would be inappropriate, did you mention whether you've
> defined any indicies and does EXPLAIN QUERY PLAN show that the proper
> index is being used?
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite 3.6.5 slow on windows

2008-11-25 Thread John Stanton
Use sqlite3_prepare/bind/step instead of sqlite3_exec and group your 
activity into BEGIN ... COMMIT transactions.
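
Roughly, that pattern looks like this in C (using the t1 table from the
code below): one prepared statement, re-used with bind/step/reset,
inside a single transaction.

#include <sqlite3.h>

/* Insert 'count' rows with one prepared statement inside one
** transaction.  A single commit at the end replaces one disk sync per
** row, which is where most of the time in the timing test goes. */
static int bulk_insert(sqlite3 *db, int count)
{
  sqlite3_stmt *stmt = NULL;
  int i, rc;

  sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
  rc = sqlite3_prepare_v2(db, "INSERT INTO t1(f1) VALUES(?1);",
                          -1, &stmt, NULL);
  if( rc!=SQLITE_OK ){
    sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
    return rc;
  }
  for(i=0; i<count; i++){
    sqlite3_bind_text(stmt, 1, "a", -1, SQLITE_STATIC);
    sqlite3_step(stmt);           /* SQLITE_DONE for an INSERT */
    sqlite3_reset(stmt);          /* make the statement runnable again */
  }
  sqlite3_finalize(stmt);
  return sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
}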

Breeze Meadow wrote:
> Hi,
>  
> I found sqlite is kind of slow on windows (2003 server and XP professional). 
> Here is what i did:
> 1) downloaded and installed sqlite3.dll (version 3.6.5).
> 2) created a MSVC 6.0 C++ projects with the following code, which basically 
> runs a list of SQL statments on a opened database:
>  
> #include <stdio.h>
> #include "pctimer.h"
> #include "sqlite3.h"
> int main(int argc, char** argv)
> {
>  int ret;
>  double start, duration;
>  char* error = NULL;
>  char* database_name = "test.dat";
>  sqlite3* database = NULL;
>  ret = sqlite3_open(database_name, &database);
>  char* sql_list[] = {
>   "create table if not exists t1(f1)",
>   "insert into t1 values('a')",
>   "select * from t1",
>   "drop table if exists t1"
>  };
>  for (int i = 0; i < 4; i++)
>  {
>   TIME(ret = sqlite3_exec(database, sql_list[i], NULL, NULL, &error), 1000);
>   printf("sql_list[%d] = %d, %f msec\n", i, ret, duration);
>  }
>  sqlite3_close(database);
>  return 0;
> }
>  
> in "pctimer.h"
> #define TIME(Statement, Unit) start = pctimer(), Statement, duration = 
> (pctimer() - start) * Unit
> #include <windows.h>
> 
> __inline pctimer_t pctimer()
> {
> static LARGE_INTEGER pcount, pcfreq;
> static int initflag;
> if (!initflag)
> {
> QueryPerformanceFrequency(&pcfreq);
> initflag++;
> }
> QueryPerformanceCounter(&pcount);
> return (double)pcount.QuadPart / (double)pcfreq.QuadPart;
> }
> 
> Here is a typical outputs from the program running in debug mode:
>  
> sql_list[0] = 0, 115.522582 msec
> sql_list[1] = 0, 146.979062 msec
> sql_list[2] = 0, 0.152468 msec
> sql_list[3] = 0, 127.423370 msec
>  
> The "create table", "insert into" and "drop table" are slower than I expected 
> whereas the "select" is ok (compared to published sqlite performance). I 
> tried turning off antivirus but got the same results. Notice my constraint is 
> these SQL statements must run one by one, not within a transaction.
>  
> My questions are:
> 1) are these expected performance on windows?
> 2) if not, what can be done to improve it?
>  
> Many thanks in advance.
>  
> FLX
>  
> 
> 
>   
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Two questions

2008-11-19 Thread John Stanton
Christophe Leske wrote:
>>> Can I spare some bytes in my DB by defining the ID field of the  
>>> standard
>>> table as being a foreign key of the rtree table? In other words, when
>>> defining a foreign key, is the coloumn referencing the ID field of the
>>> foreign table and thus NOT replicating them (using a smaller memory
>>> footprint in the file)?
>>> Or does the table which has a foreign key still have its own ID  
>>> coloumn?
>>> 
>> The ID column is replicated in each table.
>>   
> Is there a construct in sqlite which would allow a coulumn to be shared 
> amongst tables in such a way that the data is there only once, thus creating 
> a smaller file?
> 
> In the example provided, the iDs are exactly the same, yet they are there 
> twice...
> 
"Third Normal Form" data design.
> 
> 
> Christophe Leske
> 
> www.multimedial.de - [EMAIL PROTECTED]
> http://www.linkedin.com/in/multimedial
> Lessingstr. 5 - 40227 Duesseldorf - Germany
> 0211 261 32 12 - 0177 249 70 31
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Terrible performance for one of our tables

2008-11-19 Thread John Stanton
Jens Miltner wrote:
> Am 19.11.2008 um 13:05 schrieb D. Richard Hipp:
> 
>> On Nov 19, 2008, at 3:08 AM, Jens Miltner wrote:
>>
>>> Hi,
>>>
>>> we're seeing terrible performance problems when fetching data from  
>>> one
>>> of our tables:
>>> The table contains roughly 1.2 Million rows and a plain "SELECT
>>> COUNT(*) FROM t" query takes 8 minutes to finish.
>> That's because "SELECT count(*) FROM t" has to read and count every
>> row in the table - all 1.2 million rows.  And this involves reading
>> all 1.2 million rows from disk.  If each row is 100 bytes in size,
>> that involves reading 120MB.
>>
>> An SQLite database does *not* maintain the number of rows in a table
>> as separate metadata.  Many client/server database engines do maintain
>> such metadata.  When I was designing the SQLite file format, I
>> deliberately choose to omit such metadata because including it would
>> slow down inserts and deletes.
> 
> Yes, I know about this and it's usually not a problem.
> It only turned into a problem for this specific table.
> 
> As I mentioned in my original e-mail, after vacuuming the database,  
> the time to run the COUNT(*) query went down to about 5 seconds, so it  
> looks like somehow the database became fragmented enough to seriously  
> hit the performance of the serial row access...
> 
> BTW: we don't actually run the COUNT(*) query, but we see major  
> performance hits after a while with this table, so I figured I'd run  
> the most simple query first and found that the time needed for  
> COUNT(*) goes through the ceiling (from something like 5 seconds to  
> almost 9 minutes -  roughly a 100x time increase) after having deleted  
> and re-inserted rows in that table for a while.
> 
> Any ideas why there would be such a _huge_ performance hit after  
> deleting and re-inserting records for a while?
> Anything we can do to avoid this kind of performance-penalty- 
> fragmentation (other than regularly vacuuming)?
> 
> Thanks,
> 
>
Fragmentation is inherent when there are deletions and insertions, and 
involves orders of magnitude increases in access times.  Just look at 
the example of before and after when a badly fragmented disk is 
defragged to see the effect.

Many years ago we learned from practical experience to avoid such 
situations when designing applications.  High availability applications 
cannot tolerate an increase in chaos during operation.

> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Recovery program

2008-11-17 Thread John Stanton
Since an Sqlite database is a single file, taking a snapshot backup is 
very simple and rapid.  You could maintain an almost current image at 
all times that way.  Complete recovery could be possible by modifying 
Sqlite or your application to keep a journal of SQL statements processed 
between snapshot backups.
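
One way to take such a snapshot from C, assuming the default
rollback-journal mode and a copy_file() routine supplied elsewhere: hold
a write-blocking transaction while the file is copied.

#include <sqlite3.h>

/* copy_file() is assumed to exist; it copies the database file byte for
** byte to the snapshot destination. */
extern int copy_file(const char *src, const char *dst);

/* BEGIN IMMEDIATE takes a reserved lock, so no other connection can
** change the file while the copy runs; ROLLBACK releases the lock
** (nothing was written by this connection). */
static int snapshot(sqlite3 *db, const char *db_path, const char *backup_path)
{
  int rc = sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL);
  if( rc!=SQLITE_OK ) return rc;    /* e.g. SQLITE_BUSY: try again later */

  rc = copy_file(db_path, backup_path);

  sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
  return rc;
}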

A better strategy would be to ensure that you used a reliable server. 
We have never had a corrupted Sqlite database on our Linux or AIX servers.
JS

raf wrote:
> Roger Binns wrote:
> 
> 
>>raf wrote:
>>
>>>i also needed the ability to restore an sqlite3 database
>>>after a corruption.
>>
>>What exactly are you doing that corrupts the database in the first
>>place?  If the machine you are using is randomly corrupting data then
>>having your own journal won't help that much!
>>
>>If you are doing things like turning synchronous off then it is easier
>>to have two database files (use ATTACH).  Have one be the defaults (ie
>>correctness, no corruption) at the expense of speed and the second be
>>turned for speed at the expense of possible corruption.  You can copy
>>the data back and forth as necessary.
>>
>>Roger
> 
> 
> i've never turned off synchronous. i set up my recovery
> mechanism on the grounds that corruption is possible and i
> wanted the website to heal itself should it ever happen.
> 
> i've only ever had one real corruption (as reported by
> pragma integrity_check). i have no idea what caused it but
> it was nothing mentioned in the how to avoid corruption
> document. however, because the .dump command doesn't report
> errors, i had been interpreting its many failures to
> function correctly as a sign of corruption. this was a
> mistake on my part but one that was easy to make since i was
> also receiving "malformed disk image" error messages from
> the website at the same time. it turned out that these
> messages always coincided with database recovery (i.e. it
> didn't preceed them) and so were presumably signs of the
> database being rebuilt after a false corruption 'detection'.
> 
> the .dump failures were just due to the database being busy.
> i now retry .dumps when its output is abnormally short
> and it's very much better.
> 
> however, i will keep the recovery system in place because,
> as i said, there has been one real corruption, and it might
> happen again and the recovery system works very well.
> 
> cheers,
> raf
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Escaped values throws error

2008-11-17 Thread John Stanton
It is particularly valuable to use bound values when you have WWW access 
so that you block the possibility of SQL injection attacks.

Eric Minbiole wrote:
>>This is my first foree into PDO-SQLite and I saw samples using binding,
>>I am using prepare() though without binding.
>>
>>So you think that had I used binding my escape issue would not be an issue ?
> 
> 
> Correct.  Though I've not used PDO-SQLite (I use the C interface), I 
> would think that using bound parameters would eliminate your string 
> escaping issues.  I use bound parameters almost exclusively:  You never 
> have to worry about escaping or sanitizing your strings-- just let the 
> Db engine do the work for you.
> 
> A related classic: http://xkcd.com/327/
> 
> ~Eric
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Escaped values throws error

2008-11-16 Thread John Stanton
An SQL literal uses single quotes.

Ben Marchbanks wrote:
> Abridged sample of the insert statement that produces an error when
> trying to insert escaped text.
> 
> 
> INSERT INTO pageWords (word) VALUES (" \"nitch\" ");
> 
> SQLSTATE[HY000]: General error: 1 near "nitch": syntax error'
> 
> 
> Using
> 
> PHP Version 5.2.4  
> PDO SQLite - version 1.0.1 $Id: pdo_sqlite.c,v 1.10.2.6.2.2 
> 2007/03/23 14:30:00 wez Exp $
> SQLite 3.4.2
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] (no subject)

2008-11-14 Thread John Stanton
Everything you want is on www.sqlite.org.  If you cannot handle that you 
probably do not have a use for Sqlite.


Satish wrote:
> Hi!
>Can I get some samples to work on sqlite that is how to create a database
> and inserting ... don' t think again I am joking plz help me.it was bit
> confusing when iam using sqlite command prompt
> 
> Regards,
> Satish
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of P Kishor
> Sent: Friday, November 14, 2008 11:35 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] (no subject)
> 
> On 11/13/08, Satish <[EMAIL PROTECTED]> wrote:
> 
>>Hi!
>>
>>Really I am Serious about this I am very new to database and I
> 
> want
> 
>> to use sqllite.so I want to know.THE Reason why asked this question is I
>> have seen a video of google tech bytes in that the presenter said that
>> sqllite doesn't support foreign key and alter table.is this True.i want
> 
> to
> 
>> know based on this I can proceed further.
> 
> 
> If you are serious, start with reading the documentation on
> sqlite.org. Check out the list of FAQs, the features, datatypes,
> syntax, etc. Do some background work more than just watching a video
> of "google tech bytes" whatever that is.
> 
> Then post a well formed question on the list, with a clear subject
> line, and you will be promptly helped.
> 
> Until then you will likely get a response equivalent to "Wtf, is this
> a joke? :/"
> 
> 
>>
>> -Original Message-
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Cory Nelson
>> Sent: Friday, November 14, 2008 11:22 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] (no subject)
>>
>> On Thu, Nov 13, 2008 at 9:31 PM, Satish <[EMAIL PROTECTED]> wrote:
>> > Hi!
>> >
>> >Will sqllite support Foreign Key and ALTER Table Comands .I want to
>> know
>> > is there any documentation that helps me to know what are the classes
> 
> that
> 
>> I
>> > can use and program.I use vc++ to connect to database.tell me if there
> 
> any
> 
>> > wrapper classes which I can use and also provide me documentation about
>> > those clases and functions present in it and how to use them.
>> >
>>
>> Wtf, is this a joke? :/
>>
>> --
>> Cory Nelson
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


  1   2   3   4   5   6   7   8   9   10   >