Re: [sqlite] [sqlite-dev] Poor SQLite indexing performance

2010-03-16 Thread Tim Romano
I have not done this, but if you have enough RAM available, you might 
try putting your primary keys in a table in an in-memory database, and 
test for existence there.  That would allow you to enforce uniqueness 
while postponing creation of the PK index on the disk table until after 
the initial population has completed.
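The approach can be sketched with Python's sqlite3 module; the table and column names below (data, mem.seen, pk) are illustrative only, not from any real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the on-disk database
con.execute("CREATE TABLE data (pk INTEGER, payload TEXT)")  # no PK index yet
con.execute("ATTACH DATABASE ':memory:' AS mem")
con.execute("CREATE TABLE mem.seen (pk INTEGER PRIMARY KEY)")

def insert_unique(con, pk, payload):
    """Insert only if pk was not seen before; uniqueness is checked
    against the in-memory table, not a disk-side index."""
    try:
        con.execute("INSERT INTO mem.seen VALUES (?)", (pk,))
    except sqlite3.IntegrityError:
        return False                      # duplicate key, skip
    con.execute("INSERT INTO data VALUES (?, ?)", (pk, payload))
    return True

assert insert_unique(con, 1, "a")
assert not insert_unique(con, 1, "b")     # duplicate rejected
# after the bulk load, build the real index in one pass:
con.execute("CREATE UNIQUE INDEX data_pk ON data(pk)")
```

Once the bulk load finishes, the single CREATE UNIQUE INDEX pass builds the disk-side index far more cheaply than maintaining it row by row during population.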

The only other way to enforce uniqueness without an index is a hashed 
table (a feature not available in SQLite). On a table with very many 
rows, finding the key using a hash can be much quicker than scanning a 
b-tree for it, and inserts moreover do not slow down as no unique index 
is being created/reorganized during batch population of the table.

Regards
Tim Romano

On 3/15/2010 10:31 AM, Pavel Ivanov wrote:
> 
>
>> Is there any way to have a UNIQUE
>> field but disable indexing till the end?
>>  
> How do you expect your uniqueness to be enforced? SQLite does that by
> looking into index - if value is there then it is repeated, if value
> is not there then it's unique and should be inserted into index for
> further check.
>
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unicode command line bug in Windows version of sqlite3 with patch

2010-03-16 Thread Kunc Filip
Hello SQLite Team,

 

We currently use sqlite 3.6.23.  We have a big problem with accented or
other special characters in the path to the database file.  For example,
on Czech Windows XP the "Application Data" folder is translated to "Data
aplikací", so if the accented 'í' is in the path, sqlite3.exe reports
that it is unable to open the file at this path.

 

To work around this problem we tried to find the source of this error.
It lies in MultiByteToWideChar and WideCharToMultiByte with CP_UTF8 as
the encoding argument.  If CP_ACP (the ANSI code page) is used instead,
there is no problem.

 

Attached you will find a diff patch which works for us.  But because
the SQLite3 source code is really big, we can't be sure that this fix
is correct and doesn't introduce any side effects.

 

Thanks for your hard work and help

Filip Kunc

 

 



[sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
Folks,

I read the documentation suggesting the use of another SQL engine for a
multi-threaded client/server environment, and I have reached a point
where I see why.  But I was thinking it could be used because currently
we have an ISAM/BTREE database under our full reader/writer and
exclusive locking controls.  So my attempt was to basically plug and
play SQLITE3.

What I need is NO LOCKING so we can use our own thread-context
contention controls.

I read the technical notes but I'm still getting either error 5
(SQLITE_BUSY) or 8 (SQLITE_READONLY).  I'm not entirely sure what steps
need to be done.  I tried:

      sqlite3_config(SQLITE_CONFIG_MULTITHREAD);

Overall, the design is that the RPC server opens the database file
exclusively, and each thread context gets a query cursor.  Since the
client API has no concept of a "close", the cursor is saved in the
server thread context and reused for the NEXT/PREV client functions, so
on the client side, example code:

 TFileRecord rec = {0};
 DWORD tid = 0;
     if (SearchFileRec(FileNameAreaKey, rec, tid)) do {
         ... do something with rec ...
     } while (GetNextFilerec(FileNameAreaKey, rec, tid));

Here, in the current system, the tid holds a pointer to the btree page
allowing for the traversal.   But for the SQLITE3 implementation,  the
search cursor is saved in the context and recalled in the Next call.

So, as you can imagine, here is what happens now if there is any record
updating:

     if (SearchFileRec(FileNameAreaKey, rec, tid)) do {
         rec. = change some fields
         UpdateFileRec(rec);
     } while (GetNextFilerec(FileNameAreaKey, rec, tid));

I am now getting a sharing issue with the UpdateFileRec().

I think before we punt on SQLITE3, we can use it if we have full
control of the I/O access to the database.  No other process will be
allowed access to the database because I am going to open it in
non-sharing mode.  All access will be through the client API.

So how do I make it so there is no locking?

--
hls


Re: [sqlite] Unicode command line bug in Windows version of sqlite3 with patch

2010-03-16 Thread Pavel Ivanov
sqlite3_open[_v2] accepts all filenames in UTF-8 (although it doesn't
check for a valid UTF-8 string), so CP_UTF8 cannot be changed anywhere.

OTOH, maybe the command-line utility should have some logic for
re-encoding command-line parameters from the terminal encoding to
UTF-8.  But I'm not sure about that.  Could you try running sqlite3
from a batch file that is written in UTF-8 encoding (with your path
properly encoded)?  I believe it will work this way...
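As a sanity check of that contract, a high-level wrapper that already hands SQLite UTF-8 paths has no trouble with the accented name. A minimal sketch (not from Pavel's reply) using Python's sqlite3 module, which passes the filesystem-encoded path straight through to sqlite3_open:

```python
import os
import sqlite3
import tempfile

# A filename containing the accented character from the report
# ("Data aplikací"); on a UTF-8 filesystem it reaches SQLite as UTF-8.
d = tempfile.mkdtemp()
path = os.path.join(d, "Data aplikací.db")

con = sqlite3.connect(path)        # succeeds: the path is handed over as UTF-8
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

assert os.path.exists(path)        # the file with the accented name was created
```

The failure on Windows comes from mixing the console's legacy code page with the UTF-8 the library expects, not from the library itself.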


Pavel

2010/3/16 Kunc Filip :


Re: [sqlite] Porting Sqlite to MQX OS: Question 2

2010-03-16 Thread GeoffW

Hello Dan


>Do the tasks share a heap? And do you have multiple connections
>to the database (multiple calls to sqlite3_open() or sqlite3_open_v2())?

I would like to keep the implementation as general-purpose as possible
so we can use sqlite in the future in different applications.
Therefore I will say I need to have multiple connections to the
database.  Each of the tasks in the application has its own task stack,
but mallocs(), for example, will all take blocks from a common memory
pool.  I'm not too sure how this affects my locking strategy
implementation.

To be honest, despite having read the locking bits of os_unix.c
numerous times, I am still pretty much lost.
Regards Geoff

Dan Kennedy-4 wrote:
> 
> 
> On Mar 16, 2010, at 5:22 AM, GeoffW wrote:
> 
>>
>> Hello Dan
>>
>> Thanks for your useful input. To answer your questions.
>>
>>> Do you have any file-locking primitives provided by the OS?
>> There are no file locking OS Primitives at all that I can use.
>>
>>> Do you have clients connecting to the database from multiple
>>> processes? Or only multiple threads within the same process?
>>
>> My Application doesnt have a concept of processes and threads as you  
>> might
>> find in Windows.
>> Think of it as a  small data collection embedded Application. The  
>> platform
>> has essentially 1 fixed Application running, made up of say 10  
>> different
>> concurrent tasks.
> 
> Do the tasks share a heap? And do you have multiple connections
> to the database (multiple calls to sqlite3_open() or sqlite3_open_v2())?
> 
> If you only have one connection, then you don't need any locking. Have
> xCheckReservedLock() set its result variable to 0 for all calls.
> 
> If all tasks share a heap, maybe you can implement locking in-memory
> using global variables in the VFS layer. Or, if you strictly use
> shared-cache mode, you will not need any locking.
> 
> Dan.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Porting-Sqlite-to-MQX-OS%3A-Question-2-tp27874124p27918689.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Unicode command line bug in Windows version of sqlite3 with patch

2010-03-16 Thread Jean-Christophe Deschamps

>We currently use sqlite 3.6.23. We have a big problem with characters with
>accents or other special characters in path to database file, for 
>example in
>Czech Windows XP the "Application Data" folder is translated to "Data
>aplikací" so if the accented 'í' is in path the sqlite3.exe writes that it
>is unable to open file in this path.

A much better solution is to use an MSYS terminal (installed by MinGW),
so you have a UTF-8 command line and data entry/display without
conversion.  No need to "patch" anything.

The culprit here isn't the command-line utility, but Windows' use of a
legacy code page on the command line.

>In attachment you will find diff patch which works for us. But because
>SQLite3 source code is really big we can't be sure that this fix is 
>correct
>and doesn't introduce any side effects.

If you change the input encoding and use your code page, then it's
likely you're going to do the same with data, which is plain wrong:
SQLite needs UTF-8 (or UTF-16) data, not ANSI.






Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Pavel Ivanov
I didn't understand how you make everything work, but I can answer the
following question:

> So how do I make it so there is no locking?

I guess you already know that there's the concept of a VFS in SQLite
(http://www.sqlite.org/c3ref/vfs.html).  A VFS contains the structure
sqlite3_io_methods (http://www.sqlite.org/c3ref/io_methods.html), which
in turn contains the functions xLock and xUnlock.  So if you implement
your own VFS and make these functions no-ops, you'll get SQLite without
any kind of locking (you should implement xCheckReservedLock as a no-op
too).  And if your application is sure that nobody else accesses the
same database, and you synchronize the threads somehow (or use shared
cache), then it will work just fine (I've implemented this kind of
trick in my application to boost its performance).


Pavel

On Tue, Mar 16, 2010 at 10:05 AM, HLS  wrote:


Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Max Vlasov
> This means that to VACUUM a SQLite database of size X, you need at
> least 2X of _additional_ free disk space available.  That seems rather
> wasteful, just looking at it as a SQLite user.  Although
> programmatically there may be reasons for it that I'm not aware of.
>
>

Hmm, I did some research, and I think that VACUUM's requirement for
free disk space is too big.

When I read the comments it was obvious that the algorithm uses a very
simple approach: attach a blank database, copy all data, detach,
rename.  Sure, I might be wrong in the details, but generally it looks
like this.

With these actions the journal file(s) for the new database should not
contain much data.  So my guess is that one only needs at most the size
of the actual data from the source base plus a very tiny journal file.
But in fact (as everyone sees) it uses much more.

I just tried to perform VACUUM on a test base and to emulate its
actions with a manual attach/copy on the same base.

We have:
- truecrypt volume, size: 10M in order to see all the "disk full" errors.
- new connection, testdb.db3
- CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT,[Value]
Text)
- 100,000 times
INSERT INTO TestTable (Value) VALUES
("12345678901234567890123456789012345678901234567890")
Final size: 6M
- After we have completely filled the db, let's remove half of the records.
DELETE FROM TestTable WHERE Id < 5

Now we have our 6M db (ready to be decreased to ~3) and about 4M of free
space.

So two scenarios:

 1. Simple vacuum

Trying to perform VACUUM:
Result: "database or disk is full" error.

 2. Emulation of Vacuum actions.

testdb_new.db3 is a new connection (tiny file 3k in size)

ATTACH 'Q:\testdb_new.db3' AS newdb

BEGIN TRANSACTION
 ; the following operations create two db-journal files so rollback is
possible.

CREATE TABLE newdb.TestTable ([Id] INTEGER PRIMARY KEY AUTOINCREMENT,[Value]
Text)
 ; actually sqlite3RunVacuum does some string formatting with results from
sqlite_master and performing corresponding queries. so this create table is
probably the same query.

INSERT INTO newdb.TestTable SELECT * FROM main.TestTable
  ; the operation is ok, Windows reports 3M free, but this is probably due
to cached writes.

END TRANSACTION
  ; Ok, testdb_new is 3m and 1M is free

=
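The experiment above is easy to reproduce without the TrueCrypt volume. Below is a sketch of the same emulation using Python's sqlite3 module; the DELETE threshold of 50000 is an assumption (the post's value is truncated), chosen to remove half of the 100,000 rows:

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "testdb.db3")
dst = os.path.join(d, "testdb_new.db3")

con = sqlite3.connect(src, isolation_level=None)   # autocommit; explicit BEGIN below
con.execute("CREATE TABLE TestTable (Id INTEGER PRIMARY KEY AUTOINCREMENT, Value TEXT)")
con.execute("BEGIN")
con.executemany("INSERT INTO TestTable (Value) VALUES (?)",
                [("12345678901234567890123456789012345678901234567890",)] * 100000)
con.execute("COMMIT")
con.execute("DELETE FROM TestTable WHERE Id <= 50000")  # remove half of the records
big = os.path.getsize(src)         # freed pages stay in the file: size unchanged

# The emulation: attach a blank database and copy the surviving rows.
con.execute(f"ATTACH '{dst}' AS newdb")
con.execute("BEGIN")
con.execute("CREATE TABLE newdb.TestTable (Id INTEGER PRIMARY KEY AUTOINCREMENT, Value TEXT)")
con.execute("INSERT INTO newdb.TestTable SELECT * FROM main.TestTable")
con.execute("COMMIT")
con.close()

assert os.path.getsize(dst) < big  # the copy is roughly half the size
```

The new file carries only the live rows plus a tiny journal while it is being written, which is the point of the emulation.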

So the question is what is so special about sqlite3RunVacuum that it needs
more space than a simple emulation of its actions?

Thanks

Max


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
Pavel, thanks for your response.  I will read up again and pay more
attention to the technical VFS details (and try it out).  To answer
your question, maybe I am beating a dead horse. :)  This is an
exploratory project.

Right now, the server ISAM/BTREE is opened in exclusive mode (no
process sharing) and all records and btree pages are managed with
software-level reader/writer/exclusive locks.  SQLITE3 was suggested as
a low-footprint SQL engine to improve the FTS and offer more ad-hoc
searching capabilities.  So I was thinking that if I removed its
locking logic, I could manage the sharing with incoming threads.

One goal is to maintain the client-side API traversal functions and
record-level I/O functions:

GetFileRecByRef()
SearchFileRec()
GetFirstFileRec()
GetNextFileRec()
GetPrevFileRec()
GetLastFileRec()
UpdateFileRec()
AddFileRec()
DeleteFileRec()


What I am seeing, basically: if a thread is currently in what I presume
is *read-only* mode with a SELECT, any attempt to update a record
before the cursor is released returns a sharing violation.

I can get it to work if the select request is completed first (retrieve
the records), then update them.  But this requires a change to existing
client applets using the current client-side RPC API.

I'm new to SQLITE3, so obviously I have a lot to learn about its
technical details.  But I was thinking that if the cursor
(sqlite3_stmt *), if I am correct in viewing it as a *cursor*, is past
its fetch, then it should be released from any update restriction.  But
SQLITE3 locks the whole request.  It would seem to me that it could be
an improvement to release locks at the record/row level (once the fetch
was done).  But then again, that is probably what the VFS is there for.

Thanks


On Tue, Mar 16, 2010 at 10:56 AM, Pavel Ivanov  wrote:

Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Pavel Ivanov
> So the question is what is so special about sqlite3RunVacuum that it needs
> more space than a simple emulation of its actions?

I believe renaming of the file cannot be atomic, so in the case of an
OS crash you could be left without a database at all - no old one and
no new one.  Also, deleting the old file can be problematic when other
processes have open handles to it.  Even if a Unix system allows that,
all those processes won't know that they should re-open their handles
to work with the new file.


Pavel

On Tue, Mar 16, 2010 at 11:18 AM, Max Vlasov  wrote:


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Olaf Schmidt

"HLS"  schrieb im
Newsbeitrag
news:9cf5931e1003160705v38335b5g47a3d91193c28...@mail.gmail.com...

> ...we have a ISAM/BTREE database under our full
> reader/writer and exclusive locking controls...

From your posted code-snippet I don't see any
"heavy SQL-Query-usage" - so, first the question,
why would you want to change your current system?

If you need only something like a "fast entry" into
some of your DataFile-Chunks (per FileNameAreaKey)
and then enumerate (record-)content in these
Data-Chunks, respecting some "sort-order" - why choose
a system which is based on SQL-query-processing?

There are other engines which fit such scenarios better -
berkeley-db comes to mind, which in its latest incarnations
should be usable thread-safe too (though I'm not sure about
concurrency and locking in that engine).
Or just "write your own thing", if the goal is only,
to achieve fast "ordered enumeration" of more or less
simple records-structs, hosted in files.

If SQL-based querying is (becoming) something which
would be "nice to have", then maybe consider other engines,
which work "better over sockets" (since this mode is
built-in) and have not that much "locking-restrictions"
as SQLite in concurrent scenarios (and also support
record-level-locking directly).

If it has to be SQLite, because it is nice, small, fast and
easy to deploy - then you should consider a completely
different approach with regards to your current client-
side locking-handling.

We also use an RPC-server, with the SQLite-engine
as the serverside backend - but we "completely isolate"
the serverside-cursors from the clientside - meaning, we
perform our SQL-query at the serverside - and serialize
(copy) all the records, according to the "set" the SQL-string
describes, into a "transferrable container-object" (a Recordset,
or ResultSet) first.  This container object's content is then
compressed and transferred over sockets to the clientside
(after all the sqlite-handles for the query in question were
freed).

This works fast and robust, no sqlite-handles are kept open
between requests - in fact this works faster than "serializing
and transferring each record separately over the sockets,
whilst keeping a bunch of serverside SQLite-cursors alive,
which can only be moved forward".

At the clientside, the transferred container (the Recordset)
is freely navigatable (back and forth) - supports its own
Find-, Sort- methods - contains all the Field-Type-
descriptions and so on...

If we need Record-Level-locking at the clientside, we
just update a (serverside) LockedRecords-Table with
the currently locked RecordIDs (+UserID - and some
extra-fields for some "timeout-handling") - and based on
queries against these "locking-tables", we can "colorize"
the currently locked Records appropriately in our GUI.

I understand that you "don't want to rewrite what you
currently have" - but (at least for an *SQLite*-engine in the
backend) you will have to, IMO.

Nonetheless, such a "disconnected approach" is by now something
like common practice - given all the browser-based clients which
work against RPC-servers, application-servers - or let's call
them *WebServers* ;-) - over sockets (e.g. receiving JSON-
serialized Resultsets over http), no matter what SQL-engine is
working in the backend of such a WebSite.  "Clientside-controlled
server cursors" are a thing of the past IMO, since they introduce
more problems than they solve.

Olaf Schmidt





Re: [sqlite] VACUUM & journal size

2010-03-16 Thread Jay A. Kreibich
On Tue, Mar 16, 2010 at 06:18:13PM +0300, Max Vlasov scratched on the wall:

> When I read the comments it was obvious that the algorithm uses very simple
> approach:
> Attach blank database, copy all data,  detach,  rename. Sure I might be
> wrong in details, but generally it looks like this.

  With one exception.  The temporary database is not renamed back to
  the original database name.  Rather, the data is copied back, page by
  page, from the temp database file back into the original database
  file.  The original database file is then truncated to the correct size.

> With this actions journal file(s) for the new database should not contain
> much data.

  That is correct.  The comments in the code support this.

> So my quess is one only need at most the size of the actual data
> from the source base plus very tiny journal file. But in fact (as everyone
> sees) it uses much more.

  Almost.  In addition to the original file, you need enough free space
  for the newly vacuumed database, as well as the journal files for
  both databases.

  As mentioned, the journal file for the temporary database is quite
  small.  However, the "copy back" operation creates a journal file on
  the original database.  This file is going to be similar in size to
  the original database, since every page in that file will be touched
  as part of the vacuum process.  The journal file is required, as any
  failure or error (including a process or power failure) will require
  the journal file to recreate the original pre-vacuum database file.
  
> So the question is what is so special about sqlite3RunVacuum that it needs
> more space than a simple emulation of its actions?

  The copy-back operation.

  Yes, it would be faster and easier to just copy the temp database
  back over to the original database name, but there are issues with
  that.
  
  First, I'm sure it is nearly impossible to do this as a
  guaranteed, atomic operation on most OSes and filesystems.  That's
  bad for data security.  
  
  Second, if this is meant to look like a cleanup operation on the
  original file, the original file (including any filesystem meta-data)
  should be kept intact.  Replacing the file by copying something on
  top of it won't do this.
  
  Last, file manipulations of this sort aren't supported by the current
  VFS interface, and (IMHO) with good reason.  The current vacuum
  process works regardless of the operating environment, including
  in-memory databases and any custom VFS modules.  Reaching outside
  of that context to things like filesystem and directory manipulations
  complicates this.  This makes modification of the current vacuum
  process unlikely.



  However, it might be nice to have a "VACUUM TO <file>" version of the
  command.  This would allow the user/application to vacuum a database
  to another non-temporary file and skip the copy-back operation.  This
  would save considerable space and some amount of time.  You could
  then do as you want with the new file.
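As it happens, later SQLite releases added exactly this feature as VACUUM INTO (since 3.27.0): the vacuumed copy is written to a named file and the copy-back step is skipped. A minimal sketch, assuming a Python whose bundled SQLite is 3.27 or newer:

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "big.db")
dst = os.path.join(d, "compact.db")

con = sqlite3.connect(src, isolation_level=None)  # autocommit: VACUUM cannot run in a transaction
con.execute("CREATE TABLE t (x TEXT)")
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [("x" * 100,)] * 10000)
con.execute("COMMIT")
con.execute("DELETE FROM t WHERE rowid % 2 = 0")  # leave free pages behind

# Write the compacted copy to dst; no journal-backed copy-back on src.
con.execute(f"VACUUM INTO '{dst}'")               # naive quoting; fine for this temp path
con.close()

assert os.path.getsize(dst) < os.path.getsize(src)
```

The original file is untouched, so the extra space needed is only the size of the compacted copy, not the copy plus a full-size rollback journal.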

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Pavel Ivanov
> But
> SQLITE3 locks the whole request.  It would be seem to me that this
> could be an improvement to release locks at the record/row level (once
> the fetch was done).  But then again, that is probably what VFS is
> there for.

SQLite locks the whole database while a reading cursor is open because
it was developed as a light-weight, file-based, multi-process database
engine.  There are no row-level locks in SQLite because it's impossible
(or almost impossible) to implement them using only OS file-locking
mechanisms.  And you won't find a row-level locking API in the VFS.
BTW, if you turn on shared cache on your server and set
read_uncommitted to 1, then you'll make writing possible while readers
are active right away.  Beware though that if you have an open reading
cursor and try to write over the same SQLite connection, your
transaction won't be committed until the cursor is closed.
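The shared-cache plus read_uncommitted combination can be tried from Python's sqlite3 module. A minimal sketch (mine, not from the thread), assuming an SQLite build with shared-cache support enabled, which is the default:

```python
import sqlite3

# Two connections to the same shared-cache in-memory database.
uri = "file:demo?mode=memory&cache=shared"
reader = sqlite3.connect(uri, uri=True, isolation_level=None)
writer = sqlite3.connect(uri, uri=True, isolation_level=None)

writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN")
writer.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
writer.execute("COMMIT")

# With read_uncommitted the reader takes no table read-locks ...
reader.execute("PRAGMA read_uncommitted = 1")
cur = reader.execute("SELECT x FROM t")
first = cur.fetchone()                    # cursor is now open mid-scan

# ... so a write on another connection is not blocked by the open cursor.
writer.execute("INSERT INTO t VALUES (100)")

rest = cur.fetchall()
assert first is not None and len(rest) >= 99
```

Without the PRAGMA, the writer's INSERT would fail with "database table is locked" while the reader's cursor is still open on the same shared cache.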


Pavel
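A minimal sketch of that shared-cache/read-uncommitted combination, using
Python's stdlib sqlite3 bindings and a named shared-cache in-memory
database; illustrative only, since shared-cache mode is build-dependent:

```python
import sqlite3

# Two connections sharing one cache via a named URI in-memory database.
uri = "file:demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)

writer.execute("CREATE TABLE t(x)")

# With read_uncommitted the reader skips table read-locks, so it can
# see rows from the writer's still-open transaction.
reader.execute("PRAGMA read_uncommitted = 1")

writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (42)")   # not yet committed

dirty = reader.execute("SELECT count(*) FROM t").fetchone()[0]
print(dirty)  # expected 1 where shared-cache mode is available

writer.execute("COMMIT")
```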

On Tue, Mar 16, 2010 at 11:36 AM, HLS  wrote:
> Pavel,  thanks for your response.  I will read up again and pay more
> attention to the technical VFS details (try it out).    To answer your
> question, maybe I am beating a dead horse. :)   This is an exploratory
> project.
>
> Right now, the server ISAM/BTREE is open in exclusive mode (no process
> sharing) and all records and btree pages are managed with software
> level reader/writer/exclusive locks.   SQLITE3 was suggested as a low
> footprint SQL engine to improve the FTS and offer more ad-hoc
> searching capabilities.  So I was thinking that if I removed its locking
> logic, I could manage the sharing with incoming threads.
>
> One goal is to maintain the client-side API traversal functions and
> record-level I/O functions:
>
>    GetFileRecByRef()
>    SearchFileRec()
>    GetFirstFileRec()
>    GetNextFileRec()
>    GetPrevFileRec()
>    GetLastFileRec()
>    UpdateFileRec()
>    AddFileRec()
>    DeleteFileRec()
>
>
> What I'm seeing, basically, is that if a thread is currently in what I
> presume is *read-only* mode with a SELECT, any attempt to update a
> record before the cursor is released returns a sharing violation.
>
> I can get it to work if the select request is completed first (retrieve
> the records, then update them).  But this requires a change to
> existing client applets using the current client-side RPC API.
>
> I'm new to SQLITE3 so obviously I have a lot to learn about its
> technical details.  But I was thinking that if the cursor (sqlite3_stmt *,
> if I am correct in viewing it as a *cursor*) is past its fetch, the
> fetched row should be released from any update restriction.  But
> SQLITE3 locks the whole request.  It would seem to me that releasing
> locks at the record/row level (once the fetch is done) could be an
> improvement.  But then again, that is probably what VFS is
> there for.
>
> Thanks
>
>
> On Tue, Mar 16, 2010 at 10:56 AM, Pavel Ivanov  wrote:
>> I didn't understand how you make everything to work, but can answer at
>> the following question:
>>
>>> So how do I make it so there is no locking?
>>
>> I guess you already know that there's concept of VFS in SQLite
>> (http://www.sqlite.org/c3ref/vfs.html). In VFS there's structure
>> sqlite3_io_methods (http://www.sqlite.org/c3ref/io_methods.html) which
>> contains functions xLock and xUnlock. So if you implement your own VFS
>> and make these functions as no-op then you'll get SQLite without any
>> kind of locking (you should implement xCheckReservedLock as no-op
>> too). And if your application is sure that nobody else accesses the
>> same database and you synchronize threads somehow (or use shared
>> cache) then it will work just fine (I've implemented this kind of
>> trick in my application to boost its performance).
>>
>>
>> Pavel
>>
>> On Tue, Mar 16, 2010 at 10:05 AM, HLS  wrote:
>>> Folks,
>>>
>>> I read the documentation suggesting to use another SQL engine for a
>>> multi-threaded client/server environment and I reached a point where I
>>> see why.   But I was thinking it could be used because currently, we
>>> have an ISAM/BTREE database under our full reader/writer and exclusive
>>> locking controls.  So my attempt was to basically plug and play
>>> SQLITE3.
>>>
>>> What I need is NO LOCKING so we can use our own thread context
>>> contention controls.
>>>
>>> I read the technical notes but I'm still getting either an error 5 or
>>> 8.  I'm not entirely sure what steps need to be done.   I tried:
>>>
>>>       sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
>>>
>>> Overall, the design is that the RPC server opens the database file
>>> exclusively, and each thread context gets a query cursor.   Since the client
>>> API has no concept of a "close",  the cursor is saved in the server
>>> thread context and reused for NEXT/PREV client function,  so on the
>>> client side, a example code:
>>>
>>>     TFileRecord rec = {0};
>>>     DWORD tid = 0;
>>>      if (SearchFileRec(FileNameAreaKey, rec, tid)) do {
>>>         ... do something with rec ...
>>>      

Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
On Tue, Mar 16, 2010 at 11:54 AM, Olaf Schmidt  wrote:
>
> "HLS"  schrieb im
> Newsbeitrag
> news:9cf5931e1003160705v38335b5g47a3d91193c28...@mail.gmail.com...
>
>> ...we have a ISAM/BTREE database under our full
>> reader/writer and exclusive locking controls...
>
> From your posted code-snippet I don't see any
> "heavy SQL-Query-usage" - so, first the question,
> why would you want to change your current system?

Thank you!  Trust me, we are trying to figure that out.  I guess the
justification is all of what you stated.   Overall:

1) Improve the search system
2) Allow for 3rd party growth (add 3rd party fields)
3) "Social Networking" ideas.
4) Low footprint installation
5) API compatibility with existing installed base of client applets.
6) Extend the API to be more "SQL" ready

> If you need only something like a "fast entry" into
> some of your DataFile-Chunks (per FileNameAreaKey)
> and then enumerate (record-)content in these
> Data-Chunks, respecting some "sort-order" - why choose
> a system which is based on SQL-query-processing?

Trying to see if we can make a transparent replacement. Right now, the API ...

http://www.winserver.com/public/wcsdk

has over 250 functions. For the databases, they are structured the
same. For example, for files, we have 5 keys:

//!
//! Files Database Function Keys
//!

const int FileNameAreaKey = 0;
const int FileAreaNameKey = 1;
const int FileAreaDateKey = 2;
const int FileUploaderKey = 3;
const int FileDateAreaKey = 4;

and a set of traversal and I/O functions.  For example, for the
client function:

BOOL  APIENTRY GetFirstFileRec(DWORD keynum, TFileRecord &rec, DWORD &tid);

On the backend, the server stub is:

/* [fault_status][comm_status] */ error_status_t WcsGetFirstFileRec(
   /* [in] */ TWildcatContextHandle wch,
   /* [in] */ unsigned long keynum,
   /* [ref][out] */ TFileRecord __RPC_FAR *f,
   /* [ref][out] */ unsigned long __RPC_FAR *tid)
{
   ZeroMemory(f, sizeof(TFileRecord));
   TClientContext *cc = GetClientContext(wch);
   if (!cc->LoggedIn()) return WC_USER_NOT_LOGGED_IN;
  TUserContext *uc = cc->GetUserContext();

  TReaderGrabber grab(FileAreas);

  // release/create new SQL cursor
  if (cc->qFiles)  delete cc->qFiles;
  cc->qFiles = new CSqliteQuery(*sqlFilesDb);

  CString sql = "select * from files";
  switch (keynum) {
     case FileNameAreaKey: sql += " order by Name, Area"; break;
     case FileAreaNameKey: sql += " order by Area, Name"; break;
     case FileAreaDateKey: sql += " order by Area, FileTime"; break;
     case FileUploaderKey: sql += " order by UploaderId"; break;
     case FileDateAreaKey: sql += " order by FileTime, Area"; break;
  }
  sqlite3_stmt *stm = cc->qFiles->get_result(sql.GetString());
  if (!stm) {
     delete cc->qFiles;
     cc->qFiles = NULL;
      return WC_RECORD_NOT_FOUND;
  }

   // save tid pointer; not used, but checked in Next/Prev
   *tid = (unsigned long)&cc->qFiles;

  for (;;)
  {
     if (!cc->qFiles->fetch_row()) {
        cc->qFiles->free_result();
        delete cc->qFiles;
        cc->qFiles = NULL;
        return WC_RECORD_NOT_FOUND;
     }
     DWORD area = cc->qFiles->getuval("Area");
     DWORD luds = OBJECTFLAGS_FILEAREA_LIST |
                             OBJECTFLAGS_FILEAREA_DOWNLOAD;
     BOOL bCheckFileAccess = FALSE;
     if (area < FileAreas.Count() &&
         CheckFileAreaAccess(cc,area,luds,bCheckFileAccess)) {
        if (uc && bCheckFileAccess) {
           // Private Files Logic
           DWORD pvtid = cc->qFiles->getuval("PrivateUserId");
           DWORD upid  = cc->qFiles->getuval("UploaderId");
           if (!CheckFileOwnership(pvtid,upid,uc->User.Info.Id)) {
              continue;
           }
        }
        // perform full SQL conversion and return record
        TFullFileRecord ff = {0};
         SqliteToFileRecord(cc->qFiles, &ff);
        GetOfflineInfo(ff);
        *f = ff.Info;
        return 0;
     }
  }
   return 0;
}

And for the Next (and Prev) functions, the logic is the same except
the cc->qFiles context cursor is expected.

So yes, the methodology is not a good fit for SQL, which requires a
cursor-release concept. It is expected that a new level of API functions
will do this, e.g. add a GetFileRecClose() function.

But as you said, the direction is to move more of the work to the
client, so there is a lot of truth in saying we are battling an
older-design I/O framework.

I think SQLITE3 is just a way to explore this, because it may not
matter what SQL engine we use.  But we want to use "generic" SQL so
that different engines can be used.

I explored Berkeley DB back in 1998 or so, and it appeared that we
could do something, but it never happened; the goals above were not as
strong.

> We also use an RPC-server, with the SQLite-engine
> as the serverside backend - but we "completely isolate"
> the serverside-cursors from the clientside - meaning, we
> perform our SQL-query at the serverside - and serialize
> (copy) all the records, according to the "set 

Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
Thanks Pavel, it adds a lot of weight to the reality of where SQLITE3 fits.

On Tue, Mar 16, 2010 at 12:04 PM, Pavel Ivanov  wrote:
>> But
>> SQLITE3 locks the whole request.  It would seem to me that this
>> could be an improvement to release locks at the record/row level (once
>> the fetch was done).  But then again, that is probably what VFS is
>> there for.
>
> SQLite locks the whole database while a reading cursor is open because it
> was developed as a lightweight, file-based, multi-process database
> engine. There are no row-level locks in SQLite because it's impossible
> (or nearly impossible) to implement them using only OS file-locking
> mechanisms, and you won't find a row-level locking API in VFS either.
> BTW, if you turn on shared cache on your server and set
> read_uncommitted to 1, then writing becomes possible while readers
> are active. Beware, though, that if you have an open reading
> cursor and try to write over the same SQLite connection, your
> transaction won't be committed until the cursor is closed.
>
>
> Pavel
>
> On Tue, Mar 16, 2010 at 11:36 AM, HLS  wrote:
>> Pavel,  thanks for your response.  I will read up again and pay more
>> attention to the technical VFS details (try it out).    To answer your
>> question, maybe I am beating a dead horse. :)   This is an exploratory
>> project.
>>
>> Right now, the server ISAM/BTREE is open in exclusive mode (no process
>> sharing) and all records and btree pages are managed with software
>> level reader/writer/exclusive locks.   SQLITE3 was suggested as a low
>> footprint SQL engine to improve the FTS and offer more ad-hoc
>> searching capabilities.  So I was thinking that if I removed its locking
>> logic, I could manage the sharing with incoming threads.
>>
>> One goal is to maintain the client-side API traversal functions and
>> record-level I/O functions:
>>
>>    GetFileRecByRef()
>>    SearchFileRec()
>>    GetFirstFileRec()
>>    GetNextFileRec()
>>    GetPrevFileRec()
>>    GetLastFileRec()
>>    UpdateFileRec()
>>    AddFileRec()
>>    DeleteFileRec()
>>
>>
>> What I'm seeing, basically, is that if a thread is currently in what I
>> presume is *read-only* mode with a SELECT, any attempt to update a
>> record before the cursor is released returns a sharing violation.
>>
>> I can get it to work if the select request is completed first (retrieve
>> the records, then update them).  But this requires a change to
>> existing client applets using the current client-side RPC API.
>>
>> I'm new to SQLITE3 so obviously I have a lot to learn about its
>> technical details.  But I was thinking that if the cursor (sqlite3_stmt *,
>> if I am correct in viewing it as a *cursor*) is past its fetch, the
>> fetched row should be released from any update restriction.  But
>> SQLITE3 locks the whole request.  It would seem to me that releasing
>> locks at the record/row level (once the fetch is done) could be an
>> improvement.  But then again, that is probably what VFS is
>> there for.
>>
>> Thanks
>>
>>
>> On Tue, Mar 16, 2010 at 10:56 AM, Pavel Ivanov  wrote:
>>> I didn't understand how you make everything to work, but can answer at
>>> the following question:
>>>
>>>> So how do I make it so there is no locking?
>>>
>>> I guess you already know that there's concept of VFS in SQLite
>>> (http://www.sqlite.org/c3ref/vfs.html). In VFS there's structure
>>> sqlite3_io_methods (http://www.sqlite.org/c3ref/io_methods.html) which
>>> contains functions xLock and xUnlock. So if you implement your own VFS
>>> and make these functions as no-op then you'll get SQLite without any
>>> kind of locking (you should implement xCheckReservedLock as no-op
>>> too). And if your application is sure that nobody else accesses the
>>> same database and you synchronize threads somehow (or use shared
>>> cache) then it will work just fine (I've implemented this kind of
>>> trick in my application to boost its performance).
>>>
>>>
>>> Pavel
>>>
>>> On Tue, Mar 16, 2010 at 10:05 AM, HLS  wrote:
>>>> Folks,
>>>>
>>>> I read the documentation suggesting to use another SQL engine for a
>>>> multi-threaded client/server environment and I reached a point where I
>>>> see why.   But I was thinking it could be used because currently, we
>>>> have an ISAM/BTREE database under our full reader/writer and exclusive
>>>> locking controls.  So my attempt was to basically plug and play
>>>> SQLITE3.
>>>>
>>>> What I need is NO LOCKING so we can use our own thread context
>>>> contention controls.
>>>>
>>>> I read the technical notes but I'm still getting either an error 5 or
>>>> 8.  I'm not entirely sure what steps need to be done.   I tried:
>>>>
>>>>       sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
>>>>
>>>> Overall, the design is that the RPC server opens the database file
>>>> exclusively, and each thread context gets a query cursor.   Since the client
>>>> API has no concept of a "close",  the cursor is saved in the server

Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
On Tue, Mar 16, 2010 at 1:01 PM, HLS  wrote:
>
>   dim f as TFileRecord
>   dim tid as integer
>   dim n as integer = 0
>   LoginSystem()
>   if GetFirstFileRec(FileNameAreaKey, f, tid) then
>      do
>       inc(n)
>      ... do something ...
>      loop while GetNextFileRec(FileNameAreaKey, f, tid)
>   end if
>
> Our customers have many scripts like this for their needs.  So this is
> what I am trying to accomplish here - how to keep a persistent
> SQLITE3 cursor per thread context for the current API implementation.

One approach is to queue any updates/deletes while the database is
locked by a select request.  Then when a fetch ends (like in the
GetNext function), it will check for any pending updates
and process them.

Does that sound like a viable approach with SQLITE3?

Hm, this would not address possible client code that can break
from a loop before reaching the end of the select query.
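A sketch of that queue-the-writes idea with Python's stdlib sqlite3:
collect updates while the read cursor is open and flush them once it is
exhausted (table and column names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files(name TEXT, downloads INTEGER)")
con.executemany("INSERT INTO files VALUES (?, 0)",
                [("a.txt",), ("b.txt",), ("c.txt",)])
con.commit()

pending = []  # queued (sql, params) writes while the SELECT is open

cur = con.execute("SELECT rowid, name FROM files ORDER BY name")
for rowid, name in cur:
    # Instead of updating mid-iteration (undefined behavior on the
    # same connection), queue the write for later.
    pending.append(("UPDATE files SET downloads = downloads + 1 "
                    "WHERE rowid = ?", (rowid,)))

# Cursor exhausted: now it is safe to flush the queued writes.
for sql, params in pending:
    con.execute(sql, params)
con.commit()

print(con.execute("SELECT sum(downloads) FROM files").fetchone()[0])  # 3
```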

--
hls


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Simon Slavin

On 16 Mar 2010, at 5:17pm, HLS wrote:

> One approach is to queue any updates/deletes while the database is
> locked by a select request.  Then when a fetch ends (like in the
> GetNext function), it will check for any pending updates
> and process them.
> 
> Does that sound like a viable approach with SQLITE3?
> 
> Hm, this would not address possible client code that can break
> from a loop before reaching the end of the select query.

For the approach that involves queueing write commands, you do definitely need 
to know when a SELECT has finished with the database.  But SQLite depends on 
that anyway, and there's no reason why it shouldn't depend on correct usage of 
the API including sqlite3_finalize() .

I have one installation where it's okay for SELECT commands to return results 
which are slightly out of date, but new data becomes available at sparse and 
irregular intervals from many sources.  Consequently, this system delays write 
commands until there has been no access to the database for one minute (checked 
by touching a file whenever sqlite3_prepare() is done).  In this particular 
installation this makes perfect sense, because the patterns of reads and writes 
are well understood.  However, for some random installation for a random use of 
SQLite it would be disastrous.  You may be in a situation where you can develop 
a protocol which fits your particular use of SQL very well even though the 
solution would be useless for a more general use of SQLite.

Simon.
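The delayed-write protocol described above can be sketched as follows
(Python's stdlib sqlite3; the idle interval is shortened for the demo and
all helper names are hypothetical):

```python
import sqlite3
import time

IDLE_SECONDS = 0.2            # the installation above uses one minute
last_read = time.monotonic()  # stands in for the touched file's mtime
write_queue = []

def do_select(con, sql, params=()):
    """Every read refreshes the activity marker, like touching a file
    on each sqlite3_prepare()."""
    global last_read
    last_read = time.monotonic()
    return con.execute(sql, params).fetchall()

def maybe_flush(con):
    """Apply queued writes only after a quiet period with no reads."""
    if write_queue and time.monotonic() - last_read >= IDLE_SECONDS:
        for sql, params in write_queue:
            con.execute(sql, params)
        con.commit()
        write_queue.clear()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
write_queue.append(("INSERT INTO t VALUES (1)", ()))

maybe_flush(con)                 # too soon: nothing happens
time.sleep(IDLE_SECONDS)
maybe_flush(con)                 # quiet period elapsed: write applied
print(do_select(con, "SELECT count(*) FROM t")[0][0])  # 1
```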


[sqlite] question regarding performance tuning of HTML5 Db (Sqlite)

2010-03-16 Thread p r
Folks,

Couple of questions we were wrestling with:

1. How do we tune the performance of an application that uses HTML5 DB (SQLite 
under the covers)? Any general tips or suggestions, e.g. is it possible to use 
an in-memory SQLite db, or to attach multiple dbs (some in-memory dbs, some 
regular on-disk dbs)?

2. How do we reduce latency/round trips on updates? We need some kind of case/ 
if..then capability, and as we do not seem to have stored procedures, we are 
using triggers on insert into a driver table, e.g. Actions (action, parameters), 
to drive inserts/updates/deletes on other tables. Any other tuning suggestions?
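On question 1, attaching an in-memory database alongside an on-disk one is
supported at the SQLite level (though the HTML5/Web SQL JavaScript API
itself does not expose ATTACH); a sketch with Python's stdlib sqlite3,
with all table names hypothetical:

```python
import os
import sqlite3
import tempfile

disk_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(disk_path)              # regular on-disk database
con.execute("CREATE TABLE items(id INTEGER, name TEXT)")
con.execute("INSERT INTO items VALUES (1, 'widget')")

# Attach a scratch in-memory database under the schema name "mem".
con.execute("ATTACH DATABASE ':memory:' AS mem")
con.execute("CREATE TABLE mem.hot(id INTEGER, hits INTEGER)")
con.execute("INSERT INTO mem.hot VALUES (1, 99)")

# Queries can join across the on-disk and in-memory schemas.
row = con.execute(
    "SELECT i.name, h.hits FROM items i JOIN mem.hot h ON h.id = i.id"
).fetchone()
print(row)  # ('widget', 99)
```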

Thanks in advance,
Pete


  


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
Thanks Simon.  It just seems like it should be so simple for SQLITE3 to allow:

Open Cursor
for each fetch
    Issue Update based on ROWID
endfor
Close Cursor

The fetched row is already complete, and its rowid in the table is no
longer "sensitive" to anything but an update, whether the cursor was open or
not. The "current" cursor is at the next record ready to be fetched,
not the one that has just been read.  So it would seem possible
to wrap a lock around the rowid update.

I'll have to see if this is possible with the VFS notes Pavel pointed me
to, or perhaps even look at the feasibility of changing the code
(which is the last thing I want to do).

On Tue, Mar 16, 2010 at 1:31 PM, Simon Slavin  wrote:
>
> On 16 Mar 2010, at 5:17pm, HLS wrote:
>
>> One approach is to queue any updates/deletes while the database is
>> locked by a select request.  Then when a fetch ends (like in the
>> GetNext function), it will check for any pending updates
>> and process them.
>>
>> Does that sound like a viable approach with SQLITE3?
>>
>> Hm, this would not address possible client code that can break
>> from a loop before reaching the end of the select query.
>
> For the approach that involves queueing write commands, you do definitely 
> need to know when a SELECT has finished with the database.  But SQLite 
> depends on that anyway, and there's no reason why it shouldn't depend on 
> correct usage of the API including sqlite3_finalize() .
>
> I have one installation where it's okay for SELECT commands to return results 
> which are slightly out of date, but new data becomes available at sparse and 
> irregular intervals from many sources.  Consequently, this system delays 
> write commands until there has been no access to the database for one minute 
> (checked by touching a file whenever sqlite3_prepare() is done).  In this 
> particular installation this makes perfect sense, because the patterns of 
> reads and writes are well understood.  However, for some random installation 
> for a random use of SQLite it would be disastrous.  You may be in a situation 
> where you can develop a protocol which fits your particular use of SQL very 
> well even though the solution would be useless for a more general use of 
> SQLite.
>
> Simon.
>



--
hls


Re: [sqlite] SQLite file format

2010-03-16 Thread Magnus Torfason
On 3/11/2010 10:46 AM, Diana Chan wrote:
> I have some questions about SQLite.  I would like to know if it's possible
> to use SQLite as storage for huge genomic datasets. These datasets are
> currently in netcdf format. I'm wondering if it's possible to convert them
> to the SQLite file saved format.

To answer the three questions you state:

1. SQLite for huge data sets? Yes, certainly

2. SQLite for genomic data sets? Hard to see why not

3. Conversion/import to SQLite? Yes, this should be trivial (on a scale 
of "trivial" to "complex" appropriate for handling "huge" data sets).

However, the more important question seems to be if having your data in 
SQL form (SQLite or other) would be useful. Do you know what kinds of 
SQL queries you would want to run?

I've used SQLite for huge social networks data sets and found it very 
flexible for working with and manipulating my data.

Best,
Magnus


[sqlite] Backuping SQLite database in VC system

2010-03-16 Thread Alberto Simões
Hello

I was pondering backing up an SQLite database using a version control
system (probably SVN). Of course, if I add the binary file, SVN
will diff the entire file. That is not a good option.

I thought of dumping the full database to SQL and putting that file into svn.

I would like to ask:
 - any comment on this? is it just stupid?
 - if it is not stupid, how can I dump a full sqlite database to a
text file that could be used later to restore the database?

Cheers
ambs
-- 
Alberto Simões


Re: [sqlite] Backuping SQLite database in VC system

2010-03-16 Thread Alberto Simões
Hello

2010/3/16 Alberto Simões :
> Hello
>
> I was pondering backing up an SQLite database using a version control
> system (probably SVN). Of course, if I add the binary file, SVN
> will diff the entire file. That is not a good option.
>
> I thought of dumping the full database to SQL and putting that file into svn.
>
> I would like to ask:
>  - any comment on this? is it just stupid?
>  - if it is not stupid, how can I dump a full sqlite database to a
> text file that could be used later to restore the database?

Well, .dump might help. My bad :)
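For doing the same thing from code rather than the shell, Python's stdlib
sqlite3 exposes the equivalent of the shell's .dump as
Connection.iterdump(); a sketch of a dump/restore round trip:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x TEXT)")
src.execute("INSERT INTO t VALUES ('hello')")
src.commit()

# Dump the whole database as restorable SQL text (like the shell's .dump).
sql_text = "\n".join(src.iterdump())

# Restoring is just executing the script against a fresh database.
dst = sqlite3.connect(":memory:")
dst.executescript(sql_text)
print(dst.execute("SELECT x FROM t").fetchone()[0])  # hello
```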

> Cheers
> ambs
> --
> Alberto Simões
>



-- 
Alberto Simões


Re: [sqlite] Backuping SQLite database in VC system

2010-03-16 Thread Darren Duncan
Alberto Simões wrote:
> I was pondering backing up an SQLite database using a version control
> system (probably SVN). Of course, if I add the binary file, SVN
> will diff the entire file. That is not a good option.
> 
> I thought of dumping the full database to SQL and putting that file into svn.
> 
> I would like to ask:
>  - any comment on this? is it just stupid?
>  - if it is not stupid, how can I dump a full sqlite database to a
> text file that could be used later to restore the database?

If your database file isn't really small, or you want the VC system to show you 
what changed between versions, then you would be best off dumping it to a 
plain-text format, which the VC would work best with.

A caveat here is that you'd want the dumper to follow some deterministic 
algorithm such as sorting all the rows when they are dumped so that different 
versions of the database show unchanged parts in the same order, so that VC's 
diff can be the most meaningful and efficient.

Since a SQL table's rows aren't naturally sorted, by definition you can't count 
on a plain "select * from foo" giving rows in the same order.

Technically a table's columns aren't supposed to be ordered either, though SQL 
makes them so.

And of course, dump the schema and tables in a mutually consistent order.
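A sketch of such a deterministic dumper with Python's stdlib sqlite3:
tables sorted by name, rows sorted by every declared column, so two
logically equal databases dump identically (the repr() quoting is a
simplification, not complete SQL escaping):

```python
import sqlite3

def stable_dump(con):
    """Dump schema and data in a deterministic order for clean VC diffs."""
    out = []
    schema = con.execute(
        "SELECT name, sql FROM sqlite_master "
        "WHERE type = 'table' AND sql IS NOT NULL ORDER BY name").fetchall()
    for name, sql in schema:
        out.append(sql + ";")
        # Sort rows by every declared column for a stable order.
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({name})")]
        order = ", ".join(f'"{c}"' for c in cols)
        for row in con.execute(
                f'SELECT {order} FROM "{name}" ORDER BY {order}'):
            vals = ", ".join(repr(v) for v in row)
            out.append(f'INSERT INTO "{name}" VALUES ({vals});')
    return "\n".join(out)

# Same rows inserted in different orders dump identically.
a, b = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for con, rows in ((a, [(1,), (2,)]), (b, [(2,), (1,)])):
    con.execute("CREATE TABLE t(x)")
    con.executemany("INSERT INTO t VALUES (?)", rows)
print(stable_dump(a) == stable_dump(b))  # True
```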

-- Darren Duncan


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Pavel Ivanov
Please be advised that updating/inserting/deleting from a table
while you're executing a select on it has undefined behavior in SQLite
(if you do that on the same connection) and is considered dangerous.
Doing that from different connections is possible only if you have shared
cache turned on and read_uncommitted set to 1. If you don't use shared
cache and have turned off locking at the VFS level, then you can easily get
database corruption.

Pavel
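The safe variant HLS mentioned earlier, completing the SELECT before
issuing updates, looks like this in Python's stdlib sqlite3 (the same
pattern applies to the C API: finalize the statement, then run the
UPDATEs; table and column names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files(name TEXT, flagged INTEGER DEFAULT 0)")
con.executemany("INSERT INTO files(name) VALUES (?)",
                [("a",), ("b",), ("c",)])
con.commit()

# Step 1: fully materialize the result set so no read cursor stays open.
rowids = [r[0] for r in
          con.execute("SELECT rowid FROM files WHERE name != 'b'").fetchall()]

# Step 2: with the SELECT finished, the updates are unambiguous and safe.
con.executemany("UPDATE files SET flagged = 1 WHERE rowid = ?",
                [(rid,) for rid in rowids])
con.commit()

print(con.execute("SELECT count(*) FROM files WHERE flagged = 1")
         .fetchone()[0])  # 2
```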

On Tue, Mar 16, 2010 at 2:12 PM, HLS  wrote:
> Thanks Simon.  It just seems like it should be so simple for SQLITE3 to allow:
>
> Open Cursor
> for each fetch
>     Issue Update based on ROWID
> endfor
> Close Cursor
>
> The fetched row is already complete, and its rowid in the table is no
> longer "sensitive" to anything but an update, whether the cursor was open or
> not. The "current" cursor is at the next record ready to be fetched,
> not the one that has just been read.  So it would seem possible
> to wrap a lock around the rowid update.
>
> I'll have to see if this is possible with the VFS notes Pavel pointed me
> to, or perhaps even look at the feasibility of changing the code
> (which is the last thing I want to do).
>
> On Tue, Mar 16, 2010 at 1:31 PM, Simon Slavin  wrote:
>>
>> On 16 Mar 2010, at 5:17pm, HLS wrote:
>>
>>> One approach is to queue any updates/deletes while the database is
>>> locked by a select request.  Then when a fetch ends (like in the
>>> GetNext function), it will check for any pending updates
>>> and process them.
>>>
>>> Does that sound like a viable approach with SQLITE3?
>>>
>>> Hm, this would not address possible client code that can break
>>> from a loop before reaching the end of the select query.
>>
>> For the approach that involves queueing write commands, you do definitely 
>> need to know when a SELECT has finished with the database.  But SQLite 
>> depends on that anyway, and there's no reason why it shouldn't depend on 
>> correct usage of the API including sqlite3_finalize() .
>>
>> I have one installation where it's okay for SELECT commands to return 
>> results which are slightly out of date, but new data becomes available at 
>> sparse and irregular intervals from many sources.  Consequently, this system 
>> delays write commands until there has been no access to the database for one 
>> minute (checked by touching a file whenever sqlite3_prepare() is done).  In 
>> this particular installation this makes perfect sense, because the patterns 
>> of reads and writes are well understood.  However, for some random 
>> installation for a random use of SQLite it would be disastrous.  You may be 
>> in a situation where you can develop a protocol which fits your particular 
>> use of SQL very well even though the solution would be useless for a more 
>> general use of SQLite.
>>
>> Simon.
>>
>
>
>
> --
> hls
>


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread Simon Slavin

On 16 Mar 2010, at 6:12pm, HLS wrote:

> It just seem so simplistic that SQLITE3 does not allow for
> 
> Open Cursor
> for each fetch
> Issue Update based on ROWID
> endfor
> Close Cursor

One reason you cannot do this is that changing the value of a field may change 
how you step from row to row.  You might change something which needs an index 
changed, for example.  When you start working out how locking by rows works you 
find that you need to lock not just that row but the rows near it, in case one 
of them suddenly moves away.  It can get quite complicated.  One of the things 
that makes SQLite very simple is that it doesn't try to do this, it has either 
everything or nothing locked.

Simon.


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
Do you know if TLS is used?  How does sqlite3_open_v2() work out per
thread?  Can the handle be used globally?   In other words, is an open
per thread considered a "different connection"?

On Tue, Mar 16, 2010 at 3:57 PM, Pavel Ivanov  wrote:
> Please be advised that updating/inserting/deleting from the table
> while you're executing select on it has undefined behavior in SQLite
> (if you do that on the same connection) and is considered dangerous.
> Doing that from different connections is possible only if you have shared
> cache turned on and read_uncommitted set to 1. If you don't use shared
> cache and turned off locking on VFS level then you can easily get
> database corruption.
>
> Pavel
>
> On Tue, Mar 16, 2010 at 2:12 PM, HLS  wrote:
>> Thanks Simon.  It just seems like it should be so simple for SQLITE3 to allow:
>>
>> Open Cursor
>> for each fetch
>>     Issue Update based on ROWID
>> endfor
>> Close Cursor
>>
>> The fetched row is already complete, and its rowid in the table is no
>> longer "sensitive" to anything but an update, whether the cursor was open or
>> not. The "current" cursor is at the next record ready to be fetched,
>> not the one that has just been read.  So it would seem possible
>> to wrap a lock around the rowid update.
>>
>> I'll have to see if this is possible with the VFS notes Pavel pointed me
>> to, or perhaps even look at the feasibility of changing the code
>> (which is the last thing I want to do).
>>
>> On Tue, Mar 16, 2010 at 1:31 PM, Simon Slavin  wrote:
>>>
>>> On 16 Mar 2010, at 5:17pm, HLS wrote:
>>>
>>>> One approach is to queue any updates/deletes while the database is
>>>> locked by a select request.  Then when a fetch ends (like in the
>>>> GetNext function), it will check for any pending updates
>>>> and process them.
>>>>
>>>> Does that sound like a viable approach with SQLITE3?
>>>>
>>>> Hm, this would not address possible client code that can break
>>>> from a loop before reaching the end of the select query.
>>>
>>> For the approach that involves queueing write commands, you do definitely 
>>> need to know when a SELECT has finished with the database.  But SQLite 
>>> depends on that anyway, and there's no reason why it shouldn't depend on 
>>> correct usage of the API including sqlite3_finalize() .
>>>
>>> I have one installation where it's okay for SELECT commands to return 
>>> results which are slightly out of date, but new data becomes available at 
>>> sparse and irregular intervals from many sources.  Consequently, this 
>>> system delays write commands until there has been no access to the database 
>>> for one minute (checked by touching a file whenever sqlite3_prepare() is 
>>> done).  In this particular installation this makes perfect sense, because 
>>> the patterns of reads and writes is well understood.  However, for some 
>>> random installation for a random use of SQLite it would be disastrous.  You 
>>> may be in a situation where you can develop a protocol which fits your 
>>> particular use of SQL very well even though the solution would be useless 
>>> for a more general use of SQLite.
>>>
>>> Simon.
>>>
>>
>>
>>
>> --
>> hls
>>



-- 
hls


Re: [sqlite] Unicode command line bug in Windows version of sqlite3 with patch

2010-03-16 Thread Roger Binns
Jean-Christophe Deschamps wrote:
> A much better solution is to use a MSYS terminal (installed by MinGW), 
> so you have UTF-8 command-line and data entry/display without 
> conversion.  No need to "patch" anything.

No need for msys.  You can make a regular command prompt use UTF8 by
switching to code page 65001.  Either of these commands will do that.

  chcp 65001
  mode con cp select=65001

You can use "chcp" or "mode con" to see the current code page.

Roger


Re: [sqlite] SQLITE3 in multi-thread server

2010-03-16 Thread HLS
On Tue, Mar 16, 2010 at 4:44 PM, Simon Slavin  wrote:

> One of the things that makes SQLite very simple is that it doesn't try to do
> this: it locks either everything or nothing.

Yes, that is coming to realization now.

-- 
hls


Re: [sqlite] Attach a database dynamically if not already available within a trigger

2010-03-16 Thread Kees Nuyt
On Mon, 15 Mar 2010 18:52:26 -0400, andy
 wrote:

>Hello
>I am hoping someone can help with the following problem.
>
>- I want to create a log of all changes to tables in a database.
>I would like to keep the log table in a separate database from the
>main data files
>- I plan to use triggers to capture changes and insert them into the  
>log table.
>
>Question:
>How can I embed the 'attach" command within the trigger statement so  
>that the log database is attached if it is not already available?

You can't do that.

Even if you make sure the log database is attached at all
times, it is not possible to define triggers (or views, for
that matter) that span both databases. 
The reason for this design is that the schema would be
invalid if one of the database files were missing.

Your best bet is to create the log table(s) in the same
database, so it would be in the same ACID domain, consistent
with the contents of the database, and periodically offload
rows from the log table to an attached database, and delete
them from the log table in the main database.

Pseudo code (untested):
ATTACH DATABASE 'log.db' as logdb;
BEGIN EXCLUSIVE;
INSERT INTO logdb.log (col1,col2,..) 
  SELECT col1,col2,.. FROM log
  WHERE something;
DELETE FROM log WHERE something;
COMMIT;
DETACH DATABASE logdb;
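That pattern can be exercised end to end with Python's sqlite3 module. A minimal sketch, with invented table and column names in place of the elided ones above:

```python
import sqlite3

# Log-in-the-same-database pattern: the trigger and the table it writes
# live in the SAME database, so the schema stays valid even when no
# external log database is attached.
main = sqlite3.connect(":memory:")
main.executescript("""
    CREATE TABLE data(id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE log(id INTEGER PRIMARY KEY, val TEXT);
    CREATE TRIGGER insert_data AFTER INSERT ON data
    BEGIN
        INSERT INTO log(val) VALUES (NEW.val);
    END;
""")
main.execute("INSERT INTO data(val) VALUES ('hello')")
main.commit()  # SQLite refuses ATTACH while a transaction is active

# Periodic offload: attach the external database, move rows, detach.
main.execute("ATTACH DATABASE ':memory:' AS logdb")  # a real file path in practice
main.execute("CREATE TABLE IF NOT EXISTS logdb.log(id INTEGER PRIMARY KEY, val TEXT)")
main.execute("INSERT INTO logdb.log(val) SELECT val FROM log")
main.execute("DELETE FROM log")
main.commit()
offloaded = main.execute("SELECT val FROM logdb.log").fetchall()
remaining = main.execute("SELECT COUNT(*) FROM log").fetchone()[0]
main.execute("DETACH DATABASE logdb")
```

For durability the offload should use BEGIN EXCLUSIVE as in the pseudo code above; the autocommit handling here is only enough for a demonstration.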


>i.e something like
>
>create trigger insert_data after insert on data
>begin
>   if database logdb does not exist
>   attach logdb.db as logdb;
>
>   insert into logdb.log values(.)
>end;
>
>
>I am also a little concerned about performance so I am not sure if  
>testing for the presence of the database on every change will add too
>much overhead. Thoughts?

Don't be concerned about performance until you really have a
performance problem. A healthy schema usually performs well.

Design, Build, Benchmark, Optimize, Benchmark.

>Thanks
-- 
  (  Kees Nuyt
  )
c[_]


[sqlite] Datatypes wiki note

2010-03-16 Thread Dennis Volodomanov
Hello all,

A small note - on this page: http://sqlite.org/datatype3.html it says CAST(expr 
TO type), but it should be CAST(expr AS type), right?

Best regards,

   Dennis
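The corrected form is easy to verify, for example with Python's sqlite3 module:

```python
import sqlite3

# CAST(expr AS type) is the accepted form; CAST(expr TO type) is a syntax error.
conn = sqlite3.connect(":memory:")
value = conn.execute("SELECT CAST('42' AS INTEGER)").fetchone()[0]
print(value)  # 42, as a Python int rather than the string '42'
```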



[sqlite] SQLite on 64-bit Windows O/S

2010-03-16 Thread Rashed Iqbal
Has anyone compiled and used SQLite on 64-bit Windows? Are there any
code changes that would be needed before compiling? Would the same DB
file work on 32-bit and 64-bit Windows platforms?

Thank you for your help.

Rashed



[sqlite] state of wrappers

2010-03-16 Thread Jay A. Kreibich

  I'm trying to put together some documentation and need some help
  from SQLite users utilizing non-C languages.  The wiki page on SQLite
  wrappers is a bit of a mess, so I'm hoping some people that actually
  use these languages can offer some opinions.

  In specific, I'm trying to understand the current state of drivers
  and wrappers for:

  - Java JDBC.  Is there a "the" driver for JDBC?  These seem to
be popular, with some common code paths:

http://www.ch-werner.de/javasqlite/
http://www.zentus.com/sqlitejdbc/index.html
http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC


  - .NET   The wiki lists a dozen or so wrappers, but most look like
they were weekend projects that never really went anywhere. Is
there a supported open-source .NET/ADO.NET driver that has established
itself?  Or at least one that is well supported and likely to have
a future?  The two big ones seem to be:

http://sqlite.phxsoftware.com/
http://www.mono-project.com/SQLite

And they seem to be related (at least the newer SQLite v3 stuff).
I must admit I'm a bit lost here.  I know nothing of .NET.



  If you're the developer of one of these packages or want to express a
  strong personal opinion, please feel free to contact me off-list.

  Thanks!

   -j


-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] SQLite on 64-bit Windows O/S

2010-03-16 Thread Roger Binns
Rashed Iqbal wrote:
> Has anyone compiled and used SQLite on 64-bit Windows? Are there any
> code changes that would be needed before compiling? Would the same DB
> file work on 32-bit and 64-bit Windows platforms?

The file format is fixed and independent of the host: 32- or 64-bit,
endianness, etc.

I don't believe there are any Win64 issues, but I don't have it myself.
 I do run SQLite on 64 bit Linux hosts and have done for years.

Roger


Re: [sqlite] SQLite on 64-bit Windows O/S

2010-03-16 Thread Random Coder
On Tue, Mar 16, 2010 at 6:21 PM, Rashed Iqbal  wrote:
> Has anyone compiled and used SQLite on 64-bit Windows? Are there any
> code changes that would be needed before compiling? Would the same DB
> file work on 32-bit and 64-bit Windows platforms?

I routinely run SQLite on 64 bit and 32 bit windows, moving the
database back and forth without any issues.  There were no issues
building it.