Re: [sqlite] Of shared cache, table locks and transactions

2007-01-14 Thread Peter James

On 1/14/07, Dan Kennedy <[EMAIL PROTECTED]> wrote:


So you're right, when you use shared-cache mode there is less
concurrency in some circumstances.



Thanks for your confirmation and additional comments on the situation!  It's
good to know that I'm interpreting the source correctly at least. :-)

I suppose this must be where the "read_uncommitted" pragma came from, as
that would seem to be the workaround for this behavior in cases where
dirty reads would be acceptable.
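
(For anyone finding this in the archives: a minimal sketch of that
workaround, assuming the connections already share a cache.)

-- Let this connection read tables that another shared-cache
-- connection currently holds a write-lock on (dirty reads possible):
PRAGMA read_uncommitted = 1;

-- Restore the default isolated behavior:
PRAGMA read_uncommitted = 0;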

Anyway thanks again,
Pete.


Re: [sqlite] Restricting integer primary key values

2007-01-14 Thread drh
"Brett Keating" <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I don't want to spam the list but basically, if "id INTEGER PRIMARY KEY
> CHECK (id > 0 and id < 0xFFFFFFFF)" or something along those lines will
> work, please let me know. It's not clear if sqlite3 supports checks on
> primary keys from what I can tell.
> 

SQLite does not support hexadecimal constants.  Use
4294967295 instead of 0xFFFFFFFF and it should work.
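
For the archives, a sketch of the resulting declaration (the table name
"t" is only for illustration):

CREATE TABLE t (
  id INTEGER PRIMARY KEY
     CHECK (id > 0 AND id < 4294967295)
);

INSERT INTO t(id) VALUES (1);           -- accepted
INSERT INTO t(id) VALUES (4294967295);  -- rejected by the CHECK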
--
D. Richard Hipp  <[EMAIL PROTECTED]>





RE: [sqlite] Restricting integer primary key values

2007-01-14 Thread Brett Keating
Hi,

I don't want to spam the list but basically, if "id INTEGER PRIMARY KEY
CHECK (id > 0 and id < 0xFFFFFFFF)" or something along those lines will
work, please let me know. It's not clear if sqlite3 supports checks on
primary keys from what I can tell.

Thanks,
Brett

-Original Message-
From: Brett Keating [mailto:[EMAIL PROTECTED] 
Sent: Sunday, January 14, 2007 5:14 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Restricting integer primary key values

Hi,
 
I'm interested in the ability to restrict the values my integer primary
key is allowed to take. Specifically, I would like to restrict the value
to be between (but not including) 0x00000000 and 0xFFFFFFFF. In other
words, a 32 bit number where all zeroes and all ones is not allowed. Is
this possible in sqlite3?
 
Thanks,
Brett





[sqlite] Restricting integer primary key values

2007-01-14 Thread Brett Keating
Hi,
 
I'm interested in the ability to restrict the values my integer primary
key is allowed to take. Specifically, I would like to restrict the value
to be between (but not including) 0x00000000 and 0xFFFFFFFF. In other
words, a 32 bit number where all zeroes and all ones is not allowed. Is
this possible in sqlite3?
 
Thanks,
Brett


Re: [sqlite] Re: How to optimize a select that gets 17 rows from a 700k row DB (via perl using DBI)?

2007-01-14 Thread Kees Nuyt

Hi Jonathan,

Splitting and automatic updating of history via triggers isn't
very complicated.

BEGIN TRANSACTION;

CREATE TABLE StatsCurrent (
Machine    VARCHAR(16) NOT NULL,
Load       REAL,
Scratch    CHAR(4),
Mem        INTEGER,
MemPctFree INTEGER,
Procs      INTEGER,
Users      INTEGER,
Timestamp  VARCHAR(20) NOT NULL,
Message    VARCHAR(160),
PRIMARY KEY (Machine)
);

CREATE TABLE StatsHistory (
Machine    VARCHAR(16) NOT NULL,
Load       REAL,
Scratch    CHAR(4),
Mem        INTEGER,
MemPctFree INTEGER,
Procs      INTEGER,
Users      INTEGER,
Timestamp  VARCHAR(20) NOT NULL,
Message    VARCHAR(160),
PRIMARY KEY (Machine,Timestamp)
);
CREATE INDEX TS ON StatsHistory (Timestamp);

CREATE TRIGGER insert_stats 
AFTER INSERT ON StatsCurrent FOR EACH ROW
BEGIN
INSERT INTO StatsHistory
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
(new.Machine, new.Load, new.Scratch, new.Mem,
new.MemPctFree, new.Procs, new.Users, new.Timestamp,
new.Message);
END;

CREATE VIEW show_stats AS
SELECT *
  FROM StatsCurrent
 ORDER BY Load, Mem*MemPctFree desc, Scratch desc;
COMMIT;

BEGIN TRANSACTION;
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m1', 0.11, 1010, 2000, 10, 45, 4, '2006-12-31 23:30:01',
'm1 sample 1');
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m2', 0.12, 1020, 2000, 11, 45, 4, '2006-12-31 23:30:02',
'm2 sample 1');
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m3', 0.13, 1030, 3000, 12, 45, 4, '2006-12-31 23:30:03',
'm3 sample 1');
COMMIT;

BEGIN TRANSACTION;
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m1', 0.16, 1030, 1000, 19, 45, 4, '2006-12-31 23:31:01',
'm1 sample 2');
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m2', 0.15, 1020, 2000, 18, 45, 4, '2006-12-31 23:31:02',
'm2 sample 2');
INSERT OR REPLACE into StatsCurrent
(Machine, Load, Scratch, Mem, MemPctFree, Procs, Users,
Timestamp, Message)
VALUES
('m3', 0.14, 1010, 3000, 17, 45, 4, '2006-12-31 23:31:03',
'm3 sample 2');
COMMIT;

SELECT * FROM show_stats;

m3 0.14 1010 3000 17 45 4 2006-12-31 23:31:03 m3 sample 2
m2 0.15 1020 2000 18 45 4 2006-12-31 23:31:02 m2 sample 2
m1 0.16 1030 1000 19 45 4 2006-12-31 23:31:01 m1 sample 2

(more or less tested, StatsHistory really gets all the rows)
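
A sketch of pulling one machine's history back out of the schema above:

SELECT Timestamp, Load, Mem, MemPctFree, Message
  FROM StatsHistory
 WHERE Machine = 'm1'
 ORDER BY Timestamp;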

HTH

On Sat, 13 Jan 2007 18:35:20 -0500, you wrote:

>First off, thanks for the help and sorry for the formatting of the 
>message.  I didn't know how it was going to turn out and I probably was 
>overly optimistic as well as too verbose.
>
>Secondly, as I feared, seems like it was an XY question, so sorry for that 
>as well.
>
>I'll address the two replies I can see so far, and some of the info in 
>each section will likely overlap.
>
>Nico:
>
>I guess that's the trick, to have the "current" or at least "recent" 
>database and then the historical one.  As of now, the process of polling 
>the 17 machines takes about 40 seconds or so (when I first started running 
>the process minutely, it was 20, so you can see I have to do something 
>soon :))
>
>So assuming the two-db model, what's the trick to it?  Here are some ideas 
>off the top of my head--can you (or any reader) please give me your 
>thoughts (be as brutal as you like--I'm under no illusion that I know what 
>I'm talking about):
>
>1)  The "current" table only ever has 17 rows.
>a)  Have some kind of thing built in to the script that runs
>minutely to copy the "current" data to the historical DB before kicking
>off the part that updates the current data.
>b)  Add a trigger to the DB where the SQLite engine takes care of
>the copy somehow--this would probably be more difficult since I don't know
>how to add a trigger and I am thinking that the historical database will
>be in a different file altogether.
>c)  Something I haven't thought of
>
>2)  The current table is only allowed to have a maximum of N rows.  Upon 
>reaching this size, data are moved to the historical database and only the 
>most recent observations for each machine are left in the current DB.  Not 
>sure how I could do that.  Is there a way to do this within SQLite?
>
>3)  A job runs every night or week (at a time when people are least likely 
>to be using the page such as 3 am) that transfers the data from the 
>current DB to the historical, leaving only the most recent observation for 
>each machine.
>
>Jay:
>

Re: Re: [sqlite] Re: How to optimize a select that gets 17 rows from a 700k row DB (via perl using DBI)?

2007-01-14 Thread Miha Vrhovnik
I'm gonna cut all the content and say just one thing.
If SQLite supported table partitioning this would be a piece of cake
without any complications.

http://www.postgresql.org/docs/8.2/static/ddl-partitioning.html
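
For reference, the scheme those docs describe is inheritance-based; a
rough sketch in PostgreSQL 8.2 syntax (not valid SQLite, table names
illustrative):

CREATE TABLE stats_parent (machine TEXT, ts TIMESTAMP, load REAL);
CREATE TABLE stats_2007_01 (
    CHECK (ts >= '2007-01-01' AND ts < '2007-02-01')
) INHERITS (stats_parent);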

--
It's time to get rid of your current e-mail client ...
... and start using si.Mail.

It's small & free. ( http://simail.sourceforge.net/ )




[sqlite] FTS1

2007-01-14 Thread Alberto Simões

Hi

I am reading
http://www.sqlite.org/cvstrac/wiki?p=FtsOne
about the Full Text Search plugin for SQLite and trying to compile it,
and I am wondering why it cannot be enabled from the configure script,
or at least why its directory does not contain a Makefile. I
know it is beta (forever... makes me think of gmail) and that it is
not supported, but it would be easier for users to compile it and test
it.

Just an idea ;)
Now, I'll continue learning how to create a .so file ;)
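
For anyone else trying this: once a .so is built, usage looks roughly
like the following (the file name fts1.so is an assumption, and
extension loading must be enabled on the connection):

SELECT load_extension('./fts1.so');
CREATE VIRTUAL TABLE docs USING fts1(content);
INSERT INTO docs(content) VALUES ('full text search in sqlite');
SELECT rowid FROM docs WHERE content MATCH 'sqlite';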

Cheers
Alberto

--
Alberto Simões




Re: [sqlite] Re: How to optimize a select that gets 17 rows from a 700k row DB (via perl using DBI)?

2007-01-14 Thread Jay Sprenkle

On 1/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


Jay:

The closer to real-time, the better.  The most often a cron job can run
under Linux is minutely, and minutely is pretty good.  I guess I could
have the summary process occur at the end of the script that polls the
machines.  It could generate static HTML, which would presumably make the
page load super fast.  However, under the current regime, the process of
creating that summary is going to take at least 10 seconds.  40 seconds
for polling + 10 seconds for summarizing=50 seconds, and that number is
only going to get bigger!  So I'll have to figure out a better table
structure anyway.


You don't have to run under cron for something like that. Loading and
unloading the program several times a minute is not very efficient anyway.
Just let it run continuously and use sleep() (or a timer) to yield your
time slice until the next time you want to run.

What's the advantage of a database for this application? If all you need is
to load balance it would seem simpler to just query each machine for its
load and react accordingly. I'm not sure if Perl supports SOAP
interfaces or serializing data over an http connection. You might look
into that for later.


Are indices something that only work if you create them BEFORE you
start adding data?

No. The index on stats.Timestamp should speed up finding
the record with max(Timestamp). It will speed up queries for existing
data too. It's like a table of contents for a book.

Here's what I would try:

1. Can you speed up this?
select Machine, max(Timestamp) as M from stats group by machine

If this is trying to get the machine with the latest time stamp then
perhaps this might be faster:
 select Machine, Timestamp as M from stats order by Timestamp desc limit 1
It gets one record instead of summarizing a lot of data.

Your code does a join of two tables on the machine column and timestamp:

select a.* from stats a, (select Machine, max(Timestamp) as M from stats
group by machine) b where a.machine=b.machine and a.timestamp=b.M order by
load, Mem*MemPctFree desc, Scratch desc;

Did you index both tables on ( machine, timestamp )?
It's got to match rows from the two sides, so an index will speed up the
search on each side of the join.
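
A sketch of the DDL in question (the index name is just illustrative):

CREATE INDEX idx_stats_machine_ts ON stats (Machine, Timestamp);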






Additional thoughts:

In general, I think splitting the tables up is the way to go.  Any further
comments/suggestions appreciated!

Jonathan

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com


AW: [sqlite] Of shared cache, table locks and transactions

2007-01-14 Thread Michael Ruck
Yes, I second this opinion. However, I believe SQLite is ACID, just not
when shared-cache mode is enabled...

Mike 

-Original Message-
From: Ken [mailto:[EMAIL PROTECTED] 
Sent: Sunday, January 14, 2007 5:00 PM
To: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Subject: Re: [sqlite] Of shared cache, table locks and transactions

Yes exactly my point for some time now!!!
   
  I think it would be nice if sqlite could optionally maintain a pre-write
state version of the cached page in memory. This really means maintaining
some form of page versioning, which is already done via the pager writing to
the journal. The pager must write the original page that undergoes
modification to the journal file before it can manipulate the cached
version. 
   
  To expedite performance, the journaled pages could be maintained in memory
as well as written to disk. That way a writer and a reader could co-exist:

 - The writer modifies a page (acquiring a write page lock).
 - A copy is made in memory, saving a reference to it via the cache as
   well as its offset in the journal.
 - The original page is then written to the disk journal.
 - If the journal cache exceeds memory capacity, just release the pages
   and retain a file offset pointer in memory.
 - A reader encountering a locked page could then check the cached
   journal pages.
 - If the page is not found in the cache, use the file offset reference
   and read it in from the journal file.
  
  This would take care of the simple case of writer blocking, as there is
only ever one writer. The original page is sufficient to provide the
isolation in ACID.
   
  As it stands today, SQLite, IMHO, is ACD; it does not have isolated
transactional capabilities.
   
  
Dan Kennedy <[EMAIL PROTECTED]> wrote:
  On Sat, 2007-01-13 at 23:55 -0800, Peter James wrote:
> Hey folks...
> 
> I have a situation that caused me a little head-scratching and I'm 
> wondering if it's intended behavior or not.
> 
> I'm running a server thread (roughly based on test_server.c in the 
> distro) on top of the 3.3.6 library. The effectve call sequence in 
> question (all from one thread) looks something like this:
> 
> sqlite3_open("/path/to/db", &db_one);
> 
> sqlite3_prepare(db_one, "CREATE TABLE foo (id);", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "BEGIN DEFERRED;", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "INSERT INTO foo VALUES (123);", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);
> sqlite3_finalize(stmt_one);
> 
> sqlite3_prepare(db_one, "SELECT count(*) FROM foo;", -1, &stmt_one, NULL);
> sqlite3_step(stmt_one);             // point of interest #1
> sqlite3_column_int(stmt_one, 0);
> sqlite3_finalize(stmt_one);
> 
> // new connection here, previous transaction still pending...
> sqlite3_open("/path/to/db", &db_two);
> 
> sqlite3_prepare(db_two, "SELECT count(*) FROM foo;", -1, &stmt_two, NULL);  // point of interest #2
> sqlite3_step(stmt_two);             // point of interest #3
> sqlite3_column_int(stmt_two, 0);
> sqlite3_finalize(stmt_two);
> 
> If shared cache is DISabled, then I get "1" on the first point of 
> interest and "0" on the third point of interest, which is what I'd 
> expect. The database file is at a RESERVED lock state in both 
> locations, and the first point of interest gets uncommitted data since 
> it's in the same connection, while the second point of interest can't 
> yet see that data since it's a different connection and the transaction is
not yet committed.
> 
> On the other hand, if shared cache is ENabled, then I get "1" on the 
> first point of interest and SQLITE_LOCKED at the second point of 
> interest. This would seem to indicate an actual degradation of 
> concurrency by using shared caching. Without shared caching, readers 
> in the same thread as a pending writer are allowed. With shared 
> caching, they are not. The EXPLAIN output seems to confirm that this 
> is a result of the binary nature of table locks vs. the staged nature of
sqlite file locks.

Here's how I figure it:

When the shared-cache was DISabled, the first connection loaded its own
cache and then modified it (the INSERT statement). No writing to the disk
has occurred at this point, only to the cache owned by the first connection.

The second connection then loaded up its own cache (from the file on
disk) and queried it. No problem here.

However, when the shared-cache was ENabled, the second connection
piggy-backed onto (i.e. shared) the cache opened by the first connection.
Since the pages corresponding to table "foo" in this cache contain
uncommitted modifications, SQLite prevents the second connection from
reading them - returning SQLITE_LOCKED. Otherwise, the second connection
would be reading uncommitted data.

So you're right, when you use shared-cache mode there is less concurrency in
some circumstances.

Dan.


