Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db

2006-12-01 Thread Isaac Raway

On 12/1/06, Isaac Raway <[EMAIL PROTECTED]> wrote:

> Because our project needs to be ported to windows - the /dev/shm is not an
> option - because win2000 does not support any temporary memory based file
> system.

> Not so.
>
> "FILE: Ramdisk.sys sample driver for Windows 2000"
> http://support.microsoft.com/kb/257405
>
> Even includes C code, along with a binary. A Google search reveals
> several other ones, some commercial, for 2K and XP.



Try http://www.winsoft.sk/ramdisk.htm

Just installed it and it works perfectly. Looks like it's $35 through SWREG.




Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db

2006-12-01 Thread Isaac Raway

> Because our project needs to be ported to windows - the /dev/shm is not an
> option - because win2000 does not support any temporary memory based file
> system.


Not so.

"FILE: Ramdisk.sys sample driver for Windows 2000"
http://support.microsoft.com/kb/257405

Even includes C code, along with a binary. A Google search reveals
several other ones, some commercial, for 2K and XP.




Re: [sqlite] Accommodating 'Insert' and 'Update'

2006-11-28 Thread Isaac Raway

On 11/28/06, Nicolas Williams <[EMAIL PROTECTED]> wrote:

On Tue, Nov 28, 2006 at 03:03:58PM -0600, Isaac Raway wrote:
> Use an index on the table with your key values and call "INSERT OR
> UPDATE INTO t(...) VALUES(...)" for all creation and update
> operations. Unless you're dealing with a tremendous amount of data per
> record this will be perfectly efficient.

Surely you mean INSERT OR REPLACE ...
                          ^^^^^^^

Actually... yes. Not sure what version of SQL I was thinking of.
Probably the one in my head.
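For the record, the corrected form would look like this (table and values
invented purely for illustration):

  CREATE TABLE t(key TEXT PRIMARY KEY, value TEXT);
  INSERT OR REPLACE INTO t(key, value) VALUES('alpha', 'first');
  INSERT OR REPLACE INTO t(key, value) VALUES('alpha', 'second');
  -- the second statement replaces the first row, so SELECT * FROM t
  -- now returns alpha|second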




Re: [sqlite] Accommodating 'Insert' and 'Update'

2006-11-28 Thread Isaac Raway

Use an index on the table with your key values and call "INSERT OR
UPDATE INTO t(...) VALUES(...)" for all creation and update
operations. Unless you're dealing with a tremendous amount of data per
record this will be perfectly efficient.

On 11/28/06, Rich Shepard <[EMAIL PROTECTED]> wrote:

   This is a generic approach question, not necessarily tied to a specific
database and certainly not to a particular language. For context, however,
my application is written in python and C and uses sqlite3. Here's the
situation:

   A database is created or opened, and the user enters data via fields on a
form. These data are then saved by inserting into the appropriate table and
fields. Now, if the user goes back to some of the fields and changes the
values in them, the table row should be updated, not re-inserted.

   What is the most efficient approach to test whether a record has a value
in a field that's different from the one displayed? The only approach that
occurs to me is to retrieve each field from the table and compare values
with those in the form when the "Save" button is clicked. If the field is
blank, then use an 'insert' statement; if the field has an existing value,
use an 'update' statement. Seems kludgy to me.

   Interestingly, I've not encountered this before with any database
application I've written, because they were all done in discrete steps. That
is, a menu was used to select whether new data were to be entered, or if
existing data were to be modified. With an event-driven GUI engine, and the
ability to modify the values in the widgets, it's a brand new game for me.

Thanks,

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
<http://www.appl-ecosys.com> Voice: 503-667-4517  Fax: 503-667-8863






--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management




Re: [sqlite] Music Files

2006-11-28 Thread Isaac Raway

On 11/27/06, LuYanJun <[EMAIL PROTECTED]> wrote:


Can anybody give a simple example demonstrating this?
I am puzzled by this topic: how is a music file stored in the DB as a BLOB
type?



You can insert /any/ kind of data into a SQLite database (or most any other
sort of DB for that matter). Here's a short Pascal program that would do
about what you want -- but of course getting the binary data out and into an
object that can play it is another matter. Also I imagine this would be very
slow, coming in at around 3 - 5 MB per song that has to be completely loaded
into memory from the DB before playback can begin. I have not tested this
but it gives you the idea: 1) load the data into a stream or data string, 2)
write it as a DB BLOB.

// Fragment only: assumes SysUtils and Classes, plus a Delphi SQLite wrapper
// unit providing TSQLiteDatabase (as used below).
var
  MP3Source: string;
  Data: TFileStream;
  DBFileName: string;
  DB: TSQLiteDatabase;
  MustCreate: boolean;
begin
  MP3Source := 'SomeSong.mp3';
  DBFileName := 'mp3.db3';

  // Create the database file (and its schema) only on the first run.
  MustCreate := not FileExists(DBFileName);
  DB := TSQLiteDatabase.Create(DBFileName);
  try
    if MustCreate then
      DB.ExecSQL('CREATE TABLE mp3(filename TEXT PRIMARY KEY, data BLOB);');

    // Load the song into a stream and hand it to the wrapper as a BLOB parameter.
    Data := TFileStream.Create(MP3Source, fmOpenRead);
    try
      Data.Position := 0;

      DB.UpdateBlob('INSERT OR REPLACE INTO mp3(filename, data) ' +
        'VALUES(' + QuotedStr(MP3Source) + ', ?);', Data);
    finally
      FreeAndNil(Data);
    end;
  finally
    DB.Close;
    FreeAndNil(DB);
  end;
end;


Re: [sqlite] SELECT on empty fields ??

2006-11-27 Thread Isaac Raway

In my experience, NULL has been used by inexperienced developers to the great
detriment of their projects' stability.

Please note my use of the word "likely" and the definition of that word. In a
large portion of cases there is no reason for a "missing" or "unknown" value.
There are cases where it can be useful, but in the vast majority of cases it
causes much more work than needed (constantly checking for a NULL value,
etc.). For instance, it turns all boolean logic three-valued: true, false,
and null.
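A small illustration of what that three-valued logic looks like in SQLite
(generic expressions, nothing application-specific):

  SELECT NULL = NULL;   -- NULL, not 1: comparisons with NULL are neither true nor false
  SELECT NULL AND 1;    -- NULL
  SELECT NULL OR 1;     -- 1
  SELECT NULL IS NULL;  -- 1: IS and IS NOT are the operators that actually test for NULL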

On 11/27/06, Jay Sprenkle <[EMAIL PROTECTED]> wrote:


On 11/27/06, Isaac Raway <[EMAIL PROTECTED]> wrote:
> I'd like to strongly second this. Avoid NULL columns, even at apparent
> cost. Having a valid default value is always better. If a design appears
> to require NULL values, then the design is likely critically flawed.

Using NULLS is NOT a critical design flaw.

NULL means something specific and if you use it correctly it works
perfectly.
NULL indicates when nothing has been entered into a field.
Not entering anything, and entering spaces or a default value, are
different.
If you need that information then it's very useful. If you don't, then
avoid it by assigning default values.







--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] Saving tables

2006-11-27 Thread Isaac Raway

First: Dump VB and use Delphi.

Second (more seriously): try to find an interface that directly exposes
sqlite. Actually, you may be able to get away with this with your current
wrapper: try replacing the sqlite3.dll file with a new one and see what
happens. I use a Delphi unit to interface with the DLL and while I don't get
any new functions, I can now use newer SQL commands through the slightly
older unit.

On 11/27/06, P Kishor <[EMAIL PROTECTED]> wrote:


On 11/27/06, RB Smissaert <[EMAIL PROTECTED]> wrote:
> Just to clear up one thing that is not 100% clear to me.
> When you are using SQLite on your machine is it true that you don't need to
> have anything installed if you are using a VB wrapper dll. This wrapper can
> Create, update, select etc. and nil else is needed.
>
> Now, if I am using this wrapper and there is a new version of SQLite out
> what does that mean to me? I suppose nil, unless the author of the wrapper
> brings a new one out that uses features of the new version of SQLite?

I don't know the first V about VB, but your logic seems immaculate.
Such is the peril of using a wrapper made by someone else. I live on a
similar edge with Perl DBD::SQLite which has SQLite, the library,
bundled in it. If I had my druthers I would compile my own, but I lost
my druthers a while back... so I suffer the consequence of laziness.
It's no big deal -- in my world, DBD::SQLite is reasonably current
(keep in mind, not every x.x.y release of SQLite is a crucial
upgrade).

You could write to the author of your VB dll, buy her a beer or something.


>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of P Kishor
> Sent: 27 November 2006 20:59
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Saving tables
>
> SQLite is the C library that does all the db magic. You have to
> somehow get to that library, which you can do from a program written
> in a variety of different languages, even GUI programs, or, from
> another confusingly similarly named program called sqlite, or the sqlite
> shell. Since the jump from version 2 to version 3 of SQLite, the C
> library, made the data incompatible between the two versions, the
> version 3 of the shell is called sqlite3 usually... it uses the SQLite
> library version 3.x
>
> Here is what I did (my comments in-line)
>
> > sqlite3 foo.sqlite
>
> # called sqlite3 with a db named foo.sqlite
> # since foo.sqlite didn't exist at first, sqlite3 helpfully created it
>
> > CREATE TABLE bar (a, b);
> > INSERT...
>
> # CREATEd a table and inserted a row into it.
>
> > .quit
>
> # got out in a hurry (that was a .dot command, specific to sqlite3, the
> shell)
> # time passed
>
> > sqlite3 foo.sqlite
>
> # this time foo.sqlite existed, so sqlite3 just opened it up
>
> > SELECT * FROM bar;
>
> # the table bar was there, and had my data in it.
>
>
>
> On 11/27/06, sebcity <[EMAIL PROTECTED]> wrote:
> >
> > I typed exactly what you typed there and I get
> > SQL error: no such table: bar
> > my command window doesn't have: "sqlite3 foo.sqlite" like yours
> >
> >
> >
> >
> >
> > P Kishor-2 wrote:
> > >
> > >>sqlite3 foo.sqlite
> > > SQLite version 3.3.7
> > > Enter ".help" for instructions
> > > sqlite> CREATE TABLE bar (a, b);
> > > sqlite> INSERT INTO bar (a, b) VALUES (1, 'my mp3');
> > > sqlite> .quit
> > >
> > > .. time passes..
> > >
> > >>sqlite3 foo.sqlite
> > > SQLite version 3.3.7
> > > Enter ".help" for instructions
> > > sqlite> SELECT * FROM bar;
> > > 1|my mp3
> > > sqlite>
> > >
> > >
> > >
> > > On 11/27/06, sebcity <[EMAIL PROTECTED]> wrote:
> > >>
> > >> I created a table.
> > >> I populated the table.
> > >> I exited SQLite.
> > >> Started SQLIte again.
> > >> Typed "select * from ".
> > >> And it tells me no such table exists??
> > >>
> > >>
> > >>
> > >> Igor Tandetnik wrote:
> > >> >
> > >> > sebcity <[EMAIL PROTECTED]> wrote:
> > >> >> I'm new to SQLite. After you create your tables how do you save
> > >> >> them so they are permanently there? It might be a dumb question but
> > >> >> I can't find the answer anywhere?!
> > >> >
> > >> > They are "permanently there" from the very moment you create them.
> > >> > All changes are written to the database

[sqlite] drop/alter column

2006-11-27 Thread Isaac Raway

How much work is potentially involved in implementing more complete ALTER
TABLE support? Specifically DROP COLUMN and ALTER COLUMN.

--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] SELECT on empty fields ??

2006-11-27 Thread Isaac Raway

I'd like to strongly second this. Avoid NULL columns, even at apparent cost.
Having a valid default value is always better. If a design appears to
require NULL values, then the design is likely critically flawed. For
something this simple, a default '' string would be much better.

On 11/26/06, Darren Duncan <[EMAIL PROTECTED]> wrote:


You can save yourself a lot of grief by declaring all of your fields
to be NOT NULL and defaulting the text fields to the empty string, ''.
Use '' rather than NULL when you don't have a name.  Then you can
simply say "where foo=''". -- Darren Duncan

At 10:52 PM +0100 11/26/06, Daniel Schumann wrote:
>Hello,
>
>I've got a table 't' with two fields, for example:
>
>Lastname   |   Name
>--------------------
>Duck  |  Donald
>   |  Peter
>
>with :
>
>SELECT * FROM t WHERE (Lastname='Duck') AND (Name='Donald')
>
>everything is all right
>with :
>
>SELECT * FROM t WHERE (Lastname='') AND (Name='Peter')
>or
>SELECT * FROM t WHERE (Lastname=NULL) AND (Name='Peter')
>
>nothing is selected
>
>what am I doing wrong?
>thx
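For what it's worth, the second query returns nothing because "Lastname=NULL"
is never true in SQL; testing for NULL requires the IS NULL operator. A small
illustration against the table above (the first query assumes NULL really was
stored, the second assumes the suggested NOT NULL DEFAULT '' schema):

  SELECT * FROM t WHERE Lastname IS NULL AND Name='Peter';
  SELECT * FROM t WHERE Lastname=''      AND Name='Peter';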








--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] Database sync

2006-11-19 Thread Isaac Raway

Well, putting this together I think I may have settled on a solution for the
first version of this project:

1) Only a single user will have access to each DB, therefore taking the most
recent record from any table will always be the right thing to do. This
avoids the complexity of conflict resolution, deltas, etc., at the cost of
some flexibility. I'm going to try to write it in such a way that conflict
resolution will be "easy" to add -- as in, not made more complex by my
design.

2) Rsync is interesting. Very interesting, but I'm afraid it probably won't
serve my purposes. Some records will come down and some will go up, which as
I understand it isn't compatible with rsync (it is meant to keep a mirror
copy in sync with a master). So, I'm going to design a web service that will
allow me to get a list of modified records and then sync their data between
the client and server. A nice side effect of the web service is that objects
will eventually be shareable by their URLs, since the application will
already understand the format returned by the service.

3) To solve the problem of unique IDs, I've come up with a simple scheme:
each record created locally will have a normal numeric ID. Records created
on the server by some user action there will have a prefix or suffix ("r456"
etc). I'm a bit worried about indexing these two types of ID values in the
same table though. Any thoughts on this? I know it will "work" if I put
something with a character in it into an "integer" field in SQLite, but is
it the best idea? Right now I haven't done a ton of indexing, it hasn't been
required for performance, but I'll have to start doing that pretty soon. I
may also be able to use some sort of numeric convention, perhaps all odd
values are local, all even ones remote, etc. Any thoughts on this?
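For illustration only (table and column names invented), one alternative
would be to keep the IDs purely numeric and record the origin in its own
column instead of encoding it into the ID:

  CREATE TABLE item(
    id     INTEGER NOT NULL,                  -- numeric ID assigned by whichever side created the row
    origin TEXT    NOT NULL DEFAULT 'local',  -- 'local' or 'remote'
    data   BLOB,
    PRIMARY KEY(id, origin)
  );
  CREATE INDEX item_origin_idx ON item(origin, id);

That keeps the indexed column a plain INTEGER and sidesteps the mixed
"r456"-style values entirely.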


On 11/19/06, John Stanton <[EMAIL PROTECTED]> wrote:


Isaac Raway wrote:
> I am looking at a design that will require syncing a disconnected SQLite
> DB file on clients' machines to a central server. The version of the DB on
> the server will also be modified periodically, so there is a chance that
> new records will be created in either and also updated. Conflicts therefore
> are an issue. I am looking at generic methods for handling the deltas
> between BLOBs I have in my DB, and I think that's within reach.
>
> What I'm worried about is just the logistics of either 1) importing all
> users' data to a single DB somehow or 2) managing several DB files from
> clients automatically. Has anyone done this kind of syncing? I realize I'm
> somewhat light on details, but I'm not really even sure exactly what this
> system will need to do: it's more of a framework really.
>
> At any rate, anyone have experience syncing SQLite DB files?
>
One method is to use rsync to synchronize the files.  That takes care of
deltas etc.

We keep Sqlite DBs sync'd by maintaining a "stale" counter using
triggers and having the remote DB sync itself when it decides that it is
stale.  We use an XML based transaction transmission to perform the
synchronization.  The theory of our method is that the sync'd DB could
be any RDBMS, not necessarily Sqlite or even an SQL driven DB.

This last method involves some extra overhead.  A DB access includes a
communication with the central DB to get the current status.  The rsync
method is less overhead, but does not maintain real time synchronization.

Our real time method is used to maintain a distributed database with
good local performance and which is tolerant of network disruptions.  A
network failure does not stop production, it merely degrades the quality
temporarily.

A word of caution.  Keeping the DB's sync'd in real time involves a lot
of logic and is not a trivial exercise.
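As a rough sketch of the trigger-maintained "stale" counter idea mentioned
above (purely illustrative -- the item table and trigger names are invented),
each write bumps a per-table change counter that the remote side can compare
against the value it last synced:

  CREATE TABLE sync_state(tbl TEXT PRIMARY KEY, change_count INTEGER NOT NULL DEFAULT 0);
  INSERT INTO sync_state(tbl, change_count) VALUES('item', 0);

  CREATE TRIGGER item_ins AFTER INSERT ON item
  BEGIN
    UPDATE sync_state SET change_count = change_count + 1 WHERE tbl = 'item';
  END;
  -- matching AFTER UPDATE and AFTER DELETE triggers would be needed as well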







--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


[sqlite] Database sync

2006-11-18 Thread Isaac Raway

I am looking at a design that will require syncing a disconnected SQLite DB
file on clients' machines to a central server. The version of the DB on the
server will also be modified periodically, so there is a chance that new
records will be created in either and also updated. Conflicts therefore are
an issue. I am looking at generic methods for handling the deltas between
BLOBs I have in my DB, and I think that's within reach.

What I'm worried about is just the logistics of either 1) importing all
users' data to a single DB somehow or 2) managing several DB files from
clients automatically. Has anyone done this kind of syncing? I realize I'm
somewhat light on details, but I'm not really even sure exactly what this
system will need to do: it's more of a framework really.

At any rate, anyone have experience syncing SQLite DB files?

--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] Using BLOBs in where fields

2006-11-03 Thread Isaac Raway

If you can get this to work it will be very slow. I suggest that, when
writing to this table, you extract the relevant parts of the data blob and
store them in separate fields with an appropriate index on each. This will
avoid a table scan for every query, which is what you'd get if your
example worked.

An RDBMS is designed to let you query data in a flexible way. Trying to
extract pieces of data from a blob is not what it's meant for; you need to
normalize your data before you can query it effectively.
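To make that concrete against the schema quoted below (the new column name
is invented), the idea is to pull the byte you filter on into its own
indexed column when the row is written:

  ALTER TABLE fcdata ADD COLUMN firstbyte INTEGER;
  CREATE INDEX fcdata_firstbyte ON fcdata(firstbyte, timestamp, port);

  -- the application stores the first byte of the blob at insert time; the
  -- filter then becomes an indexed lookup (188 is 0xBC in decimal):
  SELECT rowid, timestamp, port, dataelementtype, iserror, length, data
  FROM fcdata
  WHERE firstbyte = 188
  ORDER BY timestamp, port
  LIMIT 1000 OFFSET 0;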

On 11/3/06, Gabriel Cook <[EMAIL PROTECTED]> wrote:


Hello all,

I'm trying to figure out if there is a way to use portions of a BLOB field
in a select query.

Here is an example of the table:

CREATE TABLE fcdata (

timestamp   INTEGER NOT NULL,
portINTEGER NOT NULL,
dataelementtype INTEGER NOT NULL,
iserror INTEGER DEFAULT 0,
length  INTEGER DEFAULT NULL,
dataBLOBDEFAULT NULL
)
;

Which has an index, as follows:

CREATE INDEX fcdata_timestamp_port
ON fcdata (timestamp, port);

I'm also depending on Sqlite to generate the rowid automatically.

I'd really like to do something like the following (which by the way,
doesn't work :) )

SELECT rowid, timestamp, port, dataelementtype, iserror, length, data,
FROM fcdata
WHERE substr(data, 1, 1) == x'bc'
ORDER BY timestamp, port
LIMIT 1000 OFFSET 0;

Is there any way to filter by a byte position within the BLOB?

Any help is appreciated. Thanks very much for your time.

-Gabe










--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] Re: Regarding sqlite3_exec

2006-10-28 Thread Isaac Raway

You shouldn't even consider loading a thousand records directly.

Get a set of ID numbers, then load each record as it's needed. This is very
basic stuff, and not that hard to implement. I am just saying for the
record that this is not hard to do, so hopefully no one else will be
scared away from the concept.
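A minimal sketch of that two-step pattern (table and column names are
invented):

  -- pass 1: pull back only the IDs; even thousands of integers are cheap to hold
  SELECT id FROM records ORDER BY created_at;

  -- pass 2: fetch an individual record only when it is about to be displayed
  SELECT * FROM records WHERE id = ?;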

As a general rule of coding, do it right the first time. One thread
reading a list of integers while another is on the GUI is not complex.

Isaac

On 10/28/06, Da Martian <[EMAIL PROTECTED]> wrote:

Hi

Thanks for the response. The main reason is my record count could be from a
few thousand to a million. But even at the lowly numbers of around 1
the interface can seem sluggish if you read every record before displaying
anything.

As you mention, and as has been discussed above, doing stuff in the background
is a good way to go, but more complex. As a general rule of coding I put as
few unnecessary threads into a "phase 1" program as I can, because the
complexity goes up hugely, threads can be complex to use, co-ordinate,
interrupt etc... and the chance of bugs goes up dramatically. So I tend to do
that for a "Phase 2" - bells and whistles - phase, and only when there isn't a
simpler way.

This thread has covered just about all approaches I can think of :-) thanks
for the responses.

S


On 10/27/06, Isaac Raway <[EMAIL PROTECTED]> wrote:
>
> Why don't you design the table with a unique row ID, stored in an
> integer field, then fetch a list of those ID numbers?
>
> For 5000 rows, assuming you store them in your application as 4 byte
> longs, that's about 19 k of memory.
>
> Counting that result as you receive it isn't that difficult. If it
> takes a long time (it probably won't) you can do it in another thread
> and update the interface as appropriate.
>
> I'm not seeing a downside here.
>
> Isaac
>
> On 10/26/06, Da Martian <[EMAIL PROTECTED]> wrote:
> > No there isn't, but RDBM systems are a generalised data retrieval
> > mechanism. As such they suffer from that generality.
> >
> > Don't get me wrong, RDBM systems are appropriate for 95% of all data
> > requirements I have had to deal with and I would never dream of trying to
> > write one from scratch, nor can I imagine a world without them.
> >
> > However certain applications (weather data, genome data, large indices
> > (like google)) require using something designed specifically for that
> > purpose. If you customise data retrieval (and in particular your
> > sorting/indices/access path) you can leave RDBMSs in the dust in terms of
> > performance. All I have read about google suggests they do exactly this.
> > Although I must point out, I don't actually know anything about google
> > with any certainty. Just what has "leaked" out over the years on the
> > rumour mill. But designing my own "google"-like indices (on a smaller
> > scale of course) and some specialised weather stuff, it was necessary to
> > throw away the RDBMS and do it yourself. For a google query for instance,
> > they know they will get a list of 1 or more words. They also know they
> > will only ever search through the index of words. They don't have other
> > data types, records or tables. Why go through all the hassles of
> > compiling SQLs, and that generic overhead, when your application will
> > only ever do one thing? You can just make an API like this:
> > "search(wordlist): Resultset."
> >
> > You immediately save yourself complexity and processing time. Then for
> > large indices you will know your data set, so instead of using a std
> > BTree you would use a more appropriate DS, possibly with skip lists etc.
> >
> > As for performing a database search twice, this whole thread has shown
> > that sometimes you have to :-)
> >
> > S
> >
> > On 10/25/06, John Stanton <[EMAIL PROTECTED]> wrote:
> > >
> > > There is no magic in data retrieval.  Google use the same physical
> > > laws as us ordinary mortals.
> > >
> > > I see no reason to ever perform a database search twice.
> > >
> >
> >
>
>
> --
> Isaac Raway
> Entia non sunt multiplicanda praeter necessitatem.
>
> http://blueapples.org - blog
> http://stonenotes.com - personal knowledge management
>





--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management




Re: [sqlite] Re: Regarding sqlite3_exec

2006-10-27 Thread Isaac Raway

Why don't you design the table with a unique row ID, stored in an
integer field, then fetch a list of those ID numbers?

For 5000 rows, assuming you store them in your application as 4 byte
longs, that's about 19 k of memory.

Counting that result as you receive it isn't that difficult. If it
takes a long time (it probably won't) you can do it in another thread
and update the interface as appropriate.

I'm not seeing a downside here.

Isaac

On 10/26/06, Da Martian <[EMAIL PROTECTED]> wrote:

No there isn't, but RDBM systems are a generalised data retrieval mechanism.
As such they suffer from that generality.

Don't get me wrong, RDBM systems are appropriate for 95% of all data
requirements I have had to deal with and I would never dream of trying to
write one from scratch, nor can I imagine a world without them.

However certain applications (weather data, genome data, large indices (like
google)) require using something designed specifically for that purpose. If
you customise data retrieval (and in particular your sorting/indices/access
path) you can leave RDBMSs in the dust in terms of performance. All I have
read about google suggests they do exactly this. Although I must point out,
I don't actually know anything about google with any certainty. Just what has
"leaked" out over the years on the rumour mill. But designing my own
"google"-like indices (on a smaller scale of course) and some specialised
weather stuff, it was necessary to throw away the RDBMS and do it yourself.
For a google query for instance, they know they will get a list of 1 or more
words. They also know they will only ever search through the index of words.
They don't have other data types, records or tables. Why go through all the
hassles of compiling SQLs, and that generic overhead, when your application
will only ever do one thing? You can just make an API like this:
"search(wordlist): Resultset."

You immediately save yourself complexity and processing time. Then for large
indices you will know your data set, so instead of using a std BTree you
would use a more appropriate DS, possibly with skip lists etc.

As for performing a database search twice, this whole thread has shown that
sometimes you have to :-)

S

On 10/25/06, John Stanton <[EMAIL PROTECTED]> wrote:
>
> There is no magic in data retrieval.  Google use the same physical laws
> as us ordinary mortals.
>
> I see no reason to ever perform a database search twice.
>





--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management




Re: [sqlite] Trouble with ALTER TABLE/ADD

2006-10-26 Thread Isaac Raway

On 10/25/06, Christian Smith <[EMAIL PROTECTED]> wrote:

A better solution would be to transfer the contents of the table being
updated to a temporary table, then recreate the original tables sans the
surplus column:

It may not be quick for large tables, but how often are you going to be
updating the table definition? If often, then you probably have a more
fundamental problem on your hands.
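For reference, the usual shape of that rebuild, sketched with an invented
table t(a, b, c) from which column c is being dropped:

  BEGIN TRANSACTION;
  CREATE TEMPORARY TABLE t_backup(a, b);
  INSERT INTO t_backup SELECT a, b FROM t;
  DROP TABLE t;
  CREATE TABLE t(a, b);
  INSERT INTO t SELECT a, b FROM t_backup;
  DROP TABLE t_backup;
  COMMIT;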


I considered this kind of solution briefly, but I'm afraid that users
might have too much data for this to be efficient at all. It wouldn't
happen a lot, but that rebuilding will have to happen between
user interactions. The user is basically allowed to create a form
template attached to an object. This template has an ID and a table
devoted to that type (user_data_nn).

A row in a user_tables:

ID  name   fields
01 "Person"  "name,email,site"

Then user_data_01 is created with 3 generic columns. If the total
field count ever drops below 3, it's simply ignored.

So what's really 

Re: [sqlite] Trouble with ALTER TABLE/ADD

2006-10-23 Thread Isaac Raway

Thanks, I will look at that (I'm away from my dev machine for the day).

One other related question, are there plans to expand the functionality of
ALTER TABLE? I am working on a feature that could benefit greatly from
REMOVE/RENAME COLUMN. As it stands, I am going to have to simulate this by
using generic column names and mapping them to a list of the "actual" names.
It would be *very* nice to see these features added to sqlite before I
finish this feature, but I imagine this has been requested before...

Isaac


On 10/23/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


"Isaac Raway" <[EMAIL PROTECTED]> wrote:
>
> ALTER TABLE topic ADD COLUMN type_id integer;
>
> This works fine when I run it on the sqlite3 command line, but fails in
> the Delphi units. Any thoughts? Has anyone had trouble running ALTER
> TABLE from the Delphi bindings?
>

Perhaps the delphi code is statically linked against an
older version of SQLite.  ADD COLUMN was added in version
3.2.0.  You can find what version delphi uses by executing

   SELECT sqlite_version();

--
D. Richard Hipp  <[EMAIL PROTECTED]>








--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


[sqlite] Trouble with ALTER TABLE/ADD

2006-10-23 Thread Isaac Raway

Hi, this is my first question on this list. First, a brief introduction:
I've been using SQLite for about the past year, and so far I'm very
happy with it. As far as databases, I have experience with MySQL, SQL Server
and of course Access. My language skills include Delphi, PHP, Tcl, and VB
(top 4).

Okay, my question...I have this table in the first version of an application
I have created:

CREATE TABLE topic(id integer primary key, title string, namespace integer,
content blob, meta blob, x integer, y integer, w integer, h integer);

In order to upgrade existing databases I run various commands to add the
needed tables and store the current version of the database:

ALTER TABLE topic ADD COLUMN type_id integer;

This works fine when I run it on the sqlite3 command line, but fails in the
Delphi units. Any thoughts? Has anyone had trouble running ALTER TABLE from
the Delphi bindings?


--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management



Re: [sqlite] sqlite performance questions.

2006-10-18 Thread Isaac Raway

I'm going to agree with Robert here. I have an application that makes heavy
use of large blobs of text in a SQLite database. Performance was unbearable;
wrapping even small sets of operations in transactions greatly improved it.
I don't have numbers, but suffice it to say that it went from a noticeable
and unacceptable delay when saving a record down to something I can do
automatically without a separate thread -- it's so fast it's not noticeable.

Isaac

On 10/18/06, Robert Simpson <[EMAIL PROTECTED]> wrote:


> -Original Message-
> From: Andrew Cheyne [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 18, 2006 8:08 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] sqlite performance questions.

[snip]
>   I have then been writing some sample C programs making use of the C API,
> but have been very disappointed in the performance of 'insert'ing into the
> database. For example, timing the performance of executing an insert
> statement into this table only gives me an insertion rate of 6 rows per
> second ("insert into Node (url, filename) values ('foo','bar');").

3 words -- Transaction Transaction Transaction!

Start a transaction before you start bulk inserting, and commit it
afterwards.  You are being bitten by SQLite's ACID compliance.  Any
statement not wrapped in a transaction is automatically placed inside its
own transaction.  6 rows per second is the fastest rate at which a
transaction can be spun up, 1 insert performed, the buffers flushed to the
physical disk, and the transaction torn down.

By starting the transaction beforehand, and committing it when you're
done,
you save the worst parts of the task for the beginning and end, and only
perform them once.

Robert
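To make the difference concrete, using the Node table from the message
quoted above, the bulk load would look something like this:

  -- without an explicit transaction each statement pays a full sync to disk
  INSERT INTO Node (url, filename) VALUES ('foo', 'bar');

  -- wrapped in one explicit transaction, the whole batch pays that cost once
  BEGIN TRANSACTION;
  INSERT INTO Node (url, filename) VALUES ('foo1', 'bar1');
  INSERT INTO Node (url, filename) VALUES ('foo2', 'bar2');
  -- ... many more inserts ...
  COMMIT;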









--
Isaac Raway
Entia non sunt multiplicanda praeter necessitatem.

http://blueapples.org - blog
http://stonenotes.com - personal knowledge management


Re: [sqlite] CE locking -- review the code

2006-01-07 Thread Isaac Raway

Robert Simpson wrote:


Please have a look at it and poke some holes in it.
 

Upon a quick read-through I noted that the filename is concatenated into the
Mutex name as-is. This is potentially dangerous given that Windows file
systems are not case sensitive and yet, according to the CreateMutex
documentation, "lpName [...] Name comparison is case sensitive. [...]"
This could lead to some unexpected behavior.


-Isaac



Re: [sqlite] INSERT OR REPLACE behavior

2006-01-07 Thread Isaac Raway

[EMAIL PROTECTED] wrote:


Isaac Raway <[EMAIL PROTECTED]> wrote:


[I]nstead of dropping the existing row, [the REPLACE algorithm should]
simply update the provided fields in place, leaving the unmodified fields
as is. I'd call this behavior OR UPDATE as it would be exactly equivalent 
to doing an UPDATE on the existing row(s).




There might be two or more rows that conflict.  How would it choose
which row to use as the basis?

   CREATE TABLE t1(a UNIQUE, b UNIQUE, c DEFAULT 3);
   INSERT INTO t1 VALUES(1,1,1);
   INSERT INTO t1 VALUES(2,2,2);
   REPLACE INTO t1(a,b) VALUES(1,2);
   SELECT c FROM t1;

The statement sequence above should generate a single row of
result.  The current implementation returns 3.  What would you
have it return instead?


If I'm not mistaken the SELECT would actually return 3 twice:

+-+
|3|
+-+
|3|
+-+

I would have it return each row's value before the query, i.e.:

+-+
|1|
+-+
|2|
+-+

To me this makes much more sense in almost all cases that I can think 
of, not the least of which is my current application.


Basically, it would be nice to be able to update a single field (or a 
set of fields) in a set of rows without touching the fields not being 
addressed, alternately creating a new row if no UNIQUE conflicts arise.


[sqlite] INSERT OR REPLACE behavior

2006-01-07 Thread Isaac Raway
I am using a pretty simple INSERT OR REPLACE query. I noticed this bit 
in the description of the ON CONFLICT syntax for REPLACE:


> When a UNIQUE constraint violation occurs, the pre-existing rows that
> are causing the constraint violation are removed prior to inserting or
> updating the current row.


This makes sense in many contexts I imagine; however, it would be
immeasurably useful if an alternate behavior were available, perhaps as a
separate conflict algorithm. That alternate behavior would be, instead
of dropping the existing row, to simply update the provided fields in
place, leaving the unmodified fields as is. I'd call this behavior OR
UPDATE as it would be exactly equivalent to doing an UPDATE on the 
existing row(s).


With the current behavior I'd have to do a query to see if the row 
exists, if not construct SQL for an insert, but if it does construct SQL 
for an update. This is complicated by the fact that if a certain field 
is a certain value, the other fields should not be modified: they are 
left as is in the database.


Perhaps someone with more experience in SQL queries (I have mostly done
pretty straightforward MySQL and SQLite) can show me how I might be able
to do the same thing using only SQL? I'd rather not write it all in my
client app (written in Delphi), just for the brevity that SQL usually
offers.
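One SQL-only pattern that comes close, sketched against an invented table
t(key UNIQUE, a, b): run an UPDATE first, then an INSERT OR IGNORE. If the
row exists, the UPDATE touches only the named columns and the INSERT is
ignored; if it doesn't, the UPDATE matches nothing and the INSERT creates
the row.

  UPDATE t SET a = 'new value' WHERE key = 'k1';
  INSERT OR IGNORE INTO t(key, a) VALUES ('k1', 'new value');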


Thanks for any thoughts and maybe a new option eventually,
-Isaac