[SQL] Repeatable reads

2000-09-19 Thread Michael Richards

It appears that postgres 7.0 does not support repeatable read for 
transaction isolation. Is this planned? If so, when?
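For reference, a hedged sketch of what 7.0 does offer: only READ COMMITTED (the default) and SERIALIZABLE. SERIALIZABLE is stricter than REPEATABLE READ, so it can stand in for it at the cost of possible serialization failures. The `accounts` table here is hypothetical:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Both reads see the same snapshot, which is the repeatable-read
-- guarantee (and more).
SELECT balance FROM accounts WHERE id = 1;
SELECT balance FROM accounts WHERE id = 1;
COMMIT;
```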

-Michael
_
 http://fastmail.ca/ - Fast Free Web Email for Canadians



[SQL] Table locking.

2001-01-10 Thread Michael Richards

Hi.

I've always had trouble understanding how to properly deal with the 
following case.

I need to run an insert on a table, but I don't want any other 
transaction to commit a conflicting insert before mine. Within the 
logic of the program, I run a select to see if there is a value 
there, then the insert. Between these 2 statements another 
transaction could commit a row that causes my insert to fail.

Is there a better way to do this than locking the table in SHARE ROW 
EXCLUSIVE MODE and trying to be certain that anything else that may 
call an insert there asks for the same type of lock?
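A sketch of that check-then-insert pattern guarded by a table lock (the `accounts` table and values are hypothetical). SHARE ROW EXCLUSIVE conflicts with itself, so two sessions running this block cannot interleave between the SELECT and the INSERT:

```sql
BEGIN;
LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;
-- Check for an existing row while holding the lock.
SELECT 1 FROM accounts WHERE name = 'michael';
-- Only if the SELECT returned no row:
INSERT INTO accounts (name) VALUES ('michael');
COMMIT;
```

The catch, as noted above, is that every other code path inserting into the table must take the same lock, or the race comes back.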

-Michael



[SQL] deferred constraints failing on commit

2001-01-16 Thread Michael Richards

Hi.

I'm having trouble with committing a transaction. Intuitively it 
should work but does not.

I've got a table with 2 foreign keys, minrev and maxrev. They refer 
to the revisionid value in another table. I need to update the 
second table to reduce the revisionid, then insert a new row. At the 
end of this all the keys match up, yet the commit fails.

urdr=> begin;
BEGIN
urdr=> update objects set revisionid=2 where id=2 and revisionid=;
UPDATE 1
urdr=> insert into objects 
(id,typeid,repositoryid,parentid,deleted,permissions,revisionid,name) 
values (2,2,1,NULL,'f',NULL,,'test.sql'); 
INSERT 246107 1
urdr=> select id,revisionid from objects; 
 id | revisionid 
----+------------
  1 |   
  2 |          1
  2 |          2
  2 |   
(4 rows)
urdr=> select * from objcatalog ;
 objectid | repositoryid | minrev | maxrev |   key    |    data
----------+--------------+--------+--------+----------+------------
        2 |            1 |        |        | mimetype | text/plain
(1 row)

urdr=> commit;
ERROR:   referential integrity violation - key in objects 
still referenced from objcatalog

At commit all the keys check out properly. minrev and maxrev both 
point to the same revisionid in the row we just inserted.

Is this a bug or me just misreading how things should work again?

-Michael



Re: [SQL] deferred constraints failing on commit

2001-01-16 Thread Michael Richards

Here is a test case that illustrates the problem. I figured I was 
doing it all wrong before and didn't bother to distill and include a 
test case.

create table objects(
revisionid int4,
primary key (revisionid));

create table objcatalog(
minrev int4,
maxrev int4,
foreign key (minrev) references objects(revisionid) INITIALLY 
DEFERRED,
foreign key (maxrev) references objects(revisionid) INITIALLY 
DEFERRED);

insert into objects values (999);
insert into objcatalog values (999,999);

begin;
SET CONSTRAINTS ALL DEFERRED;
update objects set revisionid=1;
insert into objects values (999);

select * from objects;
select * from objcatalog;
commit;

-Michael

> Can you send the full schema of the tables you are using for
> this?
>




Re: [SQL] PostgreSQL HOWTO

2001-01-18 Thread Michael Richards

> from http://www.linux.org/docs/ldp/howto/PostgreSQL-HOWTO-11.html
>
>> Best web-scripting (and compiling) language is PHP+Zend
>> compiler PHP is extremely powerful as it combines the power of
>> Perl, Java, C++, Javascript into one single language and it
>> runs on all OSes - unixes and Windows NT/95.

Is everyone sure that this HOWTO is not a work of fiction, sort of 
like that report on the computer virus that will cause your 
motherboard to catch fire and burn down your house in the middle of 
the night?

As I understand it, Zend is a compiler/interpreter that uses an 
optimised bytecode to run a little faster than the normal apache/php 
setup. It shares few of the features of Perl, and even fewer of 
Java. C++? Last time I checked, PHP couldn't do OOP. Next thing we 
know it will be as efficient as assembler and as object oriented as 
Smalltalk.

-Michael



Re: [SQL] PostgreSQL HOWTO

2001-01-18 Thread Michael Richards

>>> Best web-scripting (and compiling) language is PHP+Zend
>>> compiler PHP is extremely powerful as it combines the power of
>>> Perl, Java, C++, Javascript into one single language and it
>>> runs on all OSes - unixes and Windows NT/95.
>>
>> it seems that the author never used any other think then PHP ...
>
> I am afraid I would disagree. I have used all of the languages he
> metions and for the Web, PHP is the best.

The comparison is neither scientific (nothing is quantified or 
specified as criteria) nor is it accurate. 

Since you've used all the languages, you should agree that there are 
very few things in common with any of them or with PHP. Zend parses 
the contents and compiles it into an optimised bytecode and executes 
that. This bytecode can be cached. 

PHP is server based and nothing runs on the client machine as in 
JavaScript. 

PHP is neither a compiled nor an object oriented language as C++ is. 
There is no inheritance, polymorphism, or exception handling; it has 
little in common with C++, let alone taking the "powerful features" 
from it.

Same deal with Java. PHP may be executed similarly to a VM, but come 
on, a JVM is not even close to PHP. On the class library side of 
things PHP lacks any sort of security manager, objects, reflection, 
exceptions or platform-independent bytecode. It has very little in 
common with Java as well.

I build web email systems for a living and I find C++ works best for 
me because my implementation is efficient, protects my source code 
from outside licensees and allows me to use OOP within my CGI. For 
me, it was the best of all those mentioned. This is an opinion, and 
I've stated why I chose C++.

My opinion is a sharp contrast to what is written above. I could 
have just said:

No, you're all wrong, C++ is the best. 

That is not very helpful. Nor is the HOWTO because it is riddled with 
not only statements like that, but also facts that are totally wrong.

PHP also does not run on all OSes. I'm sure if I sat down and looked 
I could find a large number of platforms it does not run on. How 
about MS-DOS? Xenix? OS/2? AS/400? They might be silly examples, but 
they still point out how silly the "runs on all OSes" statement is.

-Michael



[SQL] 7.1 REFERENCES constraints

2001-05-06 Thread Michael Richards

I see that the REFERENCES constraint is a little more restrictive in 
7.1. I need to have a table with a constraint so one of its columns 
must exist in a second table. This is not a key, since there may be N 
rows in the second table that match. Any ideas on how I should 
create this?

CREATE table foo(
id serial,
permissions int4,
FOREIGN KEY (permissions) REFERENCES bar(permid));

used to work in 7.0 but now it complains with:
psql:ddl.sql:103: ERROR:  UNIQUE constraint matching given keys for 
referenced table "permissions" not found

-Michael
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] Constraints...

2001-05-12 Thread Michael Richards

Does anyone know how I can make a constraint on a key to enforce a 1 
to n relationship where n>0?

I've invented an example to show the sort of constraint I need:
CREATE TABLE permissions (
  id int4,
  userid int4,
  perm int4,
  primary key (id,userid)
);
CREATE TABLE objects (
  id int4,
  perm int4 NOT NULL,
  data text
);

INSERT INTO permissions (id,userid,perm) VALUES (1,1001,7);
INSERT INTO permissions (id,userid,perm) VALUES (1,1002,6);
INSERT INTO permissions (id,userid,perm) VALUES (1,1003,4);
INSERT INTO permissions (id,userid,perm) VALUES (2,1001,0);

So I want to allow something like:
INSERT INTO objects (id,perm,data) VALUES (1,1,'everyone can read');
INSERT INTO objects (id,perm,data) VALUES (2,1,'everyone can read');
INSERT INTO objects (id,perm,data) VALUES (3,2,'nobody can read');

But disallow an insert like:
INSERT INTO objects (id,perm,data) VALUES (,1,'bad perm example');

Is this possible? 
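One workaround sketch, assuming a new `permsets` table (my name, not from the schema above): a plain foreign key cannot reference a non-unique column, so keep the distinct permission-set ids in their own table and point both sides at it.

```sql
-- Parent table holding each distinct permission-set id exactly once.
CREATE TABLE permsets (
  id int4 PRIMARY KEY
);
CREATE TABLE permissions (
  id int4 REFERENCES permsets(id),  -- each set id must exist
  userid int4,
  perm int4,
  PRIMARY KEY (id, userid)
);
CREATE TABLE objects (
  id int4,
  perm int4 NOT NULL REFERENCES permsets(id),  -- objects may only use known sets
  data text
);
```

Note the limitation: this guarantees objects.perm points at a known set, but not that the set has at least one permissions row (n>0 rather than n>=0); enforcing that last part would still need something like a deferred trigger.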

-Michael




[SQL] index/join madness

2001-05-22 Thread Michael Richards

Ok, I've built the most insane query ever. It joins 11 tables, most 
of which are the same table, just extracting different values. Here 
is the query plan:
Nested Loop  (cost=0.00..5011.89 rows=1 width=156)
  ->  Nested Loop  (cost=0.00..4191.82 rows=1 width=140)
    ->  Nested Loop  (cost=0.00..4189.79 rows=1 width=112)
      ->  Nested Loop  (cost=0.00..4188.58 rows=1 width=104)
        ->  Nested Loop  (cost=0.00..4186.55 rows=1 width=88)
          ->  Nested Loop  (cost=0.00..3366.48 rows=1 width=72)
            ->  Nested Loop  (cost=0.00..2546.41 rows=1 width=68)
              ->  Nested Loop  (cost=0.00..1726.34 rows=1 width=52)
                ->  Nested Loop  (cost=0.00..906.27 rows=1 width=32)
                  ->  Seq Scan on formdata f6  (cost=0.00..904.16 rows=1 width=4)
                  ->  Index Scan using users_pkey on users u  (cost=0.00..2.02 rows=1 width=28)
                        SubPlan
                          ->  Seq Scan on sentletters  (cost=0.00..0.00 rows=1 width=4)
                ->  Seq Scan on formdata f5  (cost=0.00..818.42 rows=131 width=20)
              ->  Seq Scan on formdata f2  (cost=0.00..818.42 rows=131 width=16)
            ->  Seq Scan on formdata f1  (cost=0.00..818.42 rows=131 width=4)
          ->  Seq Scan on formdata f3  (cost=0.00..818.42 rows=131 width=16)
        ->  Index Scan using formmvl_pkey on formmvl m  (cost=0.00..2.02 rows=1 width=16)
      ->  Seq Scan on relations r  (cost=0.00..1.12 rows=7 width=8)
    ->  Index Scan using users_pkey on users u2  (cost=0.00..2.02 rows=1 width=28)
  ->  Seq Scan on formdata f4  (cost=0.00..818.42 rows=131 width=16)

If anyone has a screen wide enough to see this, you will see that the 
majority of the time is spent doing sequential scans on the formdata 
table. This table needs formid, fieldid and userid to find the value 
I'm looking for.

It has one index defined on:
Index "formdata_pkey"
 Attribute |  Type   
---+-
 formid| integer
 occid | integer
 userid| integer
 fieldid   | integer
unique btree (primary key)

In my case I'm ignoring occid since it's always 1 for these values. 
Is there any way I can coerce this into using a multifield index? My 
joins generally look like: 
JOIN formdata AS f2 ON (u.id=f2.userid AND f2.formid=1 AND 
f2.fieldid=2)

I'm a little curious as to why it's not using the primary key...
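A hedged sketch of one thing worth trying: the primary key leads with formid, so with occid never constrained the planner can only use a short prefix of it; an index whose leading column is the join key lets each outer row probe formdata with an index scan. The index name below is made up:

```sql
-- Multicolumn index matching the join quals (u.id = f.userid,
-- f.formid = ?, f.fieldid = ?) starting with the join key.
CREATE INDEX formdata_user_form_field
    ON formdata (userid, formid, fieldid);
-- Alternatively, adding "AND f2.occid = 1" to each join clause lets
-- the existing primary key (formid, occid, userid, fieldid) match
-- more of its columns.
```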

Finally, I'm planning on moving this to 7.2 and converting all the 
joins to use outer joins. Will there be a significant penalty in 
performance running outer joins?

-Michael



[SQL] Left Joins...

2001-05-27 Thread Michael Richards

I've got a select that pulls many values from the same table. 
Basically, for a given formid there can be many fields, each one 
depending on a definition. So form 1 may be broken down as follows:
fieldid 
  1  firstname
  2  lastname
  3  postal code

Rather than sticking this data in XML (which is hard to query and 
index), each field occupies a row in my formdata table.
I've got a nasty query that joins a table onto itself like 22 times. 
I'm wondering if there might be a better way to do this, and also how 
I can left join every additional table on the first one. By this I 
mean that if f1 matches my criteria and therefore isn't null, then 
every other joined field will occur, null or not...

Here is a snippet of my query so you can see what I'm doing:
SELECT f1.strval,f2.strval,f3.strval,f4.strval,f5.strval,f6.strval,f7.strval,
       f8.strval,f9.strval,f10.strval,f11.strval,f12.strval,f13.strval,f14.strval,
       f15.strval,f16.strval,f17.strval,f18.strval,f19.strval,f20.strval,m1.strval
  FROM formdata AS f1
LEFT JOIN formdata AS f2 ON (f2.formid=4 AND f2.occid=1 AND 
f2.fieldid=2 AND f2.userid=f1.userid)
LEFT JOIN formdata AS f3 ON (f3.formid=4 AND f3.occid=1 AND 
f3.fieldid=3 AND f3.userid=f1.userid)
LEFT JOIN formdata AS f4 ON (f4.formid=4 AND f4.occid=1 AND 
f4.fieldid=4 AND f4.userid=f1.userid)
LEFT JOIN formdata AS f5 ON (f5.formid=4 AND f5.occid=1 AND 
f5.fieldid=5 AND f5.userid=f1.userid)
LEFT JOIN formdata AS f6 ON (f6.formid=4 AND f6.occid=1 AND 
f6.fieldid=6 AND f6.userid=f1.userid)
LEFT JOIN formdata AS f7 ON (f7.formid=4 AND f7.occid=1 AND 
f7.fieldid=7 AND f7.userid=f1.userid)
LEFT JOIN formdata AS f8 ON (f8.formid=4 AND f8.occid=1 AND 
f8.fieldid=8 AND f8.userid=f1.userid)
LEFT JOIN formdata AS f9 ON (f9.formid=4 AND f9.occid=1 AND 
f9.fieldid=9 AND f9.userid=f1.userid)
[...]

So I don't care if f2..f22 do not exist, but f1 must exist...

Any ideas?
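One alternative sketch that folds the N self-joins into a single scan: pivot with CASE inside aggregates, one output row per userid. This assumes at most one formdata row per (userid, fieldid) for the form, so max() simply picks that row's value; the column aliases are illustrative.

```sql
SELECT userid,
       max(CASE WHEN fieldid = 1 THEN strval END) AS firstname,
       max(CASE WHEN fieldid = 2 THEN strval END) AS lastname,
       max(CASE WHEN fieldid = 3 THEN strval END) AS postal_code
       -- ...one CASE line per field, instead of one join per field
  FROM formdata
 WHERE formid = 4 AND occid = 1
 GROUP BY userid;
```

Missing fields come out as NULL (a CASE with no ELSE yields NULL, which max() ignores), matching the left-join behaviour; requiring that field 1 exist could be done with a HAVING clause on its aggregate.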

-Michael



[SQL] finding a maximum or minimum sum

2001-06-11 Thread Michael Richards

I have a table that looks like so:
userid | amount
---
   1   |  $500
   2   |  $400
   2   | $-100
   2   |   $10
   3   |   $10
   3   |   $10

I run a select sum(amount) from payments group by userid
userid | sum
--
  1| $500
  2| $310
  3|  $20

I need to modify this query so it returns the minimum, maximum and 
average sums. Is there any way I can do this?

-Michael



Re: [SQL] finding a maximum or minimum sum

2001-06-11 Thread Michael Richards

Perfect! This is exactly what I needed. I didn't know postgres 
supported subselects like that. Thanks.

-Michael

> select min(amtsum), max(amtsum), avg(amtsum)
> from (select sum(amount) as amtsum from payments group by userid)
> ss;
>
> In prior versions you'd need to do the initial select into a temp
> table and then select min/max/avg from that.




[SQL] Exclusion List

2001-08-01 Thread Michael Richards

I've got 2 tables, one with a list of users (has only about 5000 
entries) and another with a list of userids that have already been 
sent letters. I'm trying to efficiently join these two so I get every 
user who hasn't been sent a letter. The problem is, coupled with the 
5 other joins on the user table the query can take about 5 minutes to 
run. It's performing the joins and then reducing the list. 

The reduction of the list doesn't seem to be terribly efficient. Here 
are some strategies I've been looking at:

select id from users WHERE 
  id not in (select userid from sentletters where lettertype=1) AND
  aclgroup IN (1,2);   

Seq Scan on users  (cost=0.00..217751.39 rows=5369 width=4)
  SubPlan
->  Seq Scan on sentletters  (cost=0.00..81.06 rows=4405 width=4)

select id from users WHERE not exists 
  (select userid from sentletters where lettertype=1 AND userid=id) 
  AND aclgroup IN (1,2);

Seq Scan on users  (cost=0.00..10980.07 rows=1 width=4)
  SubPlan
->  Index Scan using sentletters_userid_key on sentletters  
(cost=0.00..2.02 rows=1 width=4)

select id from users AS u 
  LEFT JOIN sentletters AS sl ON (lettertype=1 AND sl.userid=u.id) 
where sl.userid IS NULL AND u.aclgroup IN (1,2);

Hash Join  (cost=81.06..550.18 rows=5322 width=12)
  ->  Seq Scan on users u  (cost=0.00..152.53 rows=5322 width=4)
  ->  Hash  (cost=70.05..70.05 rows=4405 width=8)
->  Seq Scan on sentletters sl  (cost=0.00..70.05 rows=4405 
width=8)

All but the last, which is an odd way to do it, have nasty query 
plans, and this isn't even talking about the joins.
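A small sketch of what might help the anti-join form: an index covering both the join key and the filter column (the index name is made up), paired with the left-join/IS NULL variant from above.

```sql
-- Covers sl.userid = u.id plus the lettertype filter in one index.
CREATE INDEX sentletters_user_type
    ON sentletters (userid, lettertype);

SELECT u.id
  FROM users AS u
  LEFT JOIN sentletters AS sl
    ON (sl.lettertype = 1 AND sl.userid = u.id)
 WHERE sl.userid IS NULL
   AND u.aclgroup IN (1, 2);
```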

I then need to join these results with a table that has about 200,000 
rows and is joined on the userid and some other key elements.

Any suggestions on this? The full query causing the problem is:
select u.id,u.firstname,u.surname,f2.strval,f3.strval,f4.strval,
   f5.strval,u2.firstname,u2.surname,m.strval 
from 
  users as u 
  JOIN dft_formdata as f1 ON 
(u.id=f1.userid AND u.aclgroup IN (1,2) AND f1.formid=1 AND 
 f1.fieldid=1) 
  LEFT JOIN dft_formdata as f2 ON 
(u.id=f2.userid AND f2.formid=1 AND f2.fieldid=2) 
  JOIN dft_formdata as f3 on 
(u.id=f3.userid AND f3.formid=1 AND f3.fieldid=3) 
  JOIN dft_formdata as f4 on 
(u.id=f4.userid AND f4.formid=1 AND f4.fieldid=5) 
  JOIN relations as r on 
(u.id=r.relatedto AND r.type=2) 
  JOIN users as u2 on 
(u2.id=r.userid AND u2.aclgroup=200) 
  JOIN dft_formdata as f5 on 
(u.id=f5.userid AND f5.formid=1 AND f5.fieldid=4) 
  JOIN dft_formmvl as m on 
(m.id=f5.intval and m.mvlid=1) 
  JOIN dft_formdata as f6 on 
(u.id=f6.userid AND f6.formid=1 AND f6.fieldid=155 AND 
 f6.intval=3) 
  WHERE not exists 
(select userid from sentletters WHERE userid=u.id);

Nested Loop  (cost=0.00..11280.10 rows=1 width=164)
  ->  Nested Loop  (cost=0.00..11276.36 rows=1 width=160)
    ->  Nested Loop  (cost=0.00..11274.33 rows=1 width=144)
      ->  Nested Loop  (cost=0.00..11270.59 rows=1 width=124)
        ->  Nested Loop  (cost=0.00..11268.56 rows=1 width=96)
          ->  Nested Loop  (cost=0.00..10981.55 rows=1 width=88)
            ->  Nested Loop  (cost=0.00..10977.82 rows=1 width=72)
              ->  Nested Loop  (cost=0.00..10974.10 rows=1 width=56)
                ->  Nested Loop  (cost=0.00..10970.37 rows=1 width=32)
                  ->  Seq Scan on users u  (cost=0.00..10966.65 rows=1 width=28)
                        SubPlan
                          ->  Index Scan using sentletters_userid_key on sentletters  (cost=0.00..2.01 rows=1 width=4)
                  ->  Index Scan using dft_formdata_userid_field on dft_formdata f1  (cost=0.00..3.71 rows=1 width=4)
                ->  Index Scan using dft_formdata_userid_field on dft_formdata f2  (cost=0.00..3.71 rows=1 width=24)
              ->  Index Scan using dft_formdata_userid_field on dft_formdata f3  (cost=0.00..3.71 rows=1 width=16)
            ->  Index Scan using dft_formdata_userid_field on dft_formdata f4  (cost=0.00..3.71 rows=1 width=16)
          ->  Seq Scan on relations r  (cost=0.00..185.43 rows=5079 width=8)
        ->  Index Scan using users_pkey on users u2  (cost=0.00..2.02 rows=1 width=28)
      ->  Index Scan using dft_formdata_userid_field on dft_formdata f5  (cost=0.00..3.71 rows=1 width=20)
    ->  Index Scan using dft_formmvl_pkey on dft_formmvl m  (cost=0.00..2.02 rows=1 width=16)
  ->  Index Scan using dft_formdata_userid_field on dft_formdata f6  (cost=0.00..3.71 rows=1 width=4)


Yes I know the query itself is really nasty but I think 5 minutes is 
a little excessive.

-Michael



Re: [SQL] Exclusion List

2001-08-01 Thread Michael Richards

> "Michael Richards" <[EMAIL PROTECTED]> writes:
>> The reduction of the list doesn't seem to be terribly efficient.
>> Here are some strategies I've been looking at:
>
>> select id from users WHERE
>> id not in (select userid from sentletters where lettertype=1) AND
>> aclgroup IN (1,2);
>
> Try an EXCEPT, along the lines of
>
> (select id from users where conditions) except
> (select userid from sentletters where other-conditions);

I tried except but in this case I'd have to use this to extract the 
list of ids and then re-join it with users again to get the rest of 
the data from the users table :(

-Michael




Re: [SQL] Indexes on functions

2001-10-17 Thread Michael Richards

I'm going to write a function in C that parses XML. Is there any way 
to index the output of this function? I've got 10,000,000 rows that 
contain XML data and I need to efficiently find the ones that contain 
the proper keys. I tried pulling the values I want from the data and 
putting it in its own table but with an average of 20 items that just 
grows to an unmanageable size.
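A hedged sketch of how a functional index on such a function could look in the 7.x syntax; the function, library path, and `docs` table are all assumptions, not real names. The function must be marked cachable (the 7.x spelling of immutable) or the index would be meaningless:

```sql
-- Hypothetical C function extracting one key from an XML document.
CREATE FUNCTION xml_key(text) RETURNS text
    AS '/usr/local/lib/xmlfuncs.so' LANGUAGE 'C'
    WITH (iscachable);

-- Index the function's output directly.
CREATE INDEX docs_xml_key ON docs (xml_key(data));

-- Queries must use the identical expression to hit the index:
SELECT * FROM docs WHERE xml_key(data) = 'invoice';
```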

-Michael



[SQL] When will vacuum go away?

2001-10-15 Thread Michael Richards

I've been watching for this for some time. First it was 7.0, then 
7.1. Does anyone have any idea on when the row re-use code will be 
ready? 

Currently I'm running into trouble with an OLTP database. It grows 
like crazy, has only 3,000,000 rows and vacuum takes a good 1/2 hour. 
Given trouble with Great Bridge is there any info out there on when 
7.2 might hit the streets?

-Michael



[SQL] changing the size of a column without dump/restore

2002-11-25 Thread Michael Richards
I've got a huge database table and I need to increase the size of a 
varchar from like 100 to 200 characters. As I recall the size is just 
a restriction and doesn't actually affect the format of the table 
file. 

Rather than dumping/restoring a 5Gb table with 20,000,000 rows which 
will take all day and night, is there anything I can twiddle in the 
system tables to change this size? I'd of course be backing up the 
data just in case!
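For what it's worth, here is the often-posted (and unsupported, use-at-your-own-risk) catalog tweak that people mean when they say this; `mytable` and `mycolumn` are placeholders. For varchar, atttypmod stores the declared length plus the 4-byte varlena header, so widening varchar(100) to varchar(200) means writing 204. Only ever widen; shrinking can leave over-length rows behind.

```sql
UPDATE pg_attribute
   SET atttypmod = 200 + 4   -- new max length + VARHDRSZ
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable')
   AND attname  = 'mycolumn';
```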

-Michael



Re: [SQL] command in C

2000-07-25 Thread Michael Richards

> should I use "PQexec()" or "conn.query()" for inserting
> information into my DB?

I wrote a class to encapsulate all the ugliness of it. Inside my 
class I'm calling PQexec().

-Michael


[SQL] Index selection on a large table

2000-07-25 Thread Michael Richards

Hi.

I believe this to be a bug, but I am submitting it to the SQL list as 
well in case I overlooked something.

I'm running 
Postgres 7.0.2 on FreeBSD 4.0-STABLE/Intel 
compiled with [gcc version 2.95.2 19991024 (release)] 

I've got this interesting problem where the query plan is not what I 
expect. Every month we log details of users logging into their 
webmail accounts. Every week before the database is vacuumed the 
oldest week's entries are removed. The table can be quite large. Here 
is the relevant parts of its structure:

   Table "logins"
 Attribute |Type | Modifier 
---+-+--
 loginid   | varchar(16) | 
 ip| varchar(15) | 
[...]
Indices: logins_ip_idx,
 logins_loginid_idx

The indexes are as follows:
  Index "logins_ip_idx"
 Attribute |Type 
---+-
 ip| varchar(15)
btree

Index "logins_loginid_idx"
 Attribute |Type 
---+-
 loginid   | varchar(16)
btree

Size of the table:
fastmail=> select count(*) from logins;
  count  
-
 1082564
(1 row)

Now here is a query plan from a selection using loginid:
explain select * from logins where loginid='michael';
NOTICE:  QUERY PLAN:
Index Scan using logins_loginid_idx on logins  (cost=0.00..500.57 
rows=130 width=48)

As expected it uses the logins_loginid_idx to select the rows that 
match loginid='michael';

Now I should note that I store the IPs as type varchar(15).
The following query yields the questionable query plan:

explain select * from logins where ip='38.30.141.44';
NOTICE:  QUERY PLAN:
Seq Scan on logins  (cost=0.00..25248.51 rows=11602 width=48)

This one decides to ignore the fact that ip is indexed (and the table 
was vacuumed) and does a slow-assed sequential scan through a 
million or so rows to pick out, in this case, 3 matching rows.
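One quick diagnostic sketch: temporarily disable sequential scans in the session to see whether the planner can use the index at all, and what cost it assigns (the bad rows=11602 estimate above suggests the real problem is the estimated selectivity of the ip value, not the index itself):

```sql
SET enable_seqscan TO 'off';   -- for testing only, not production
EXPLAIN SELECT * FROM logins WHERE ip = '38.30.141.44';
SET enable_seqscan TO 'on';
```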

Just to be sure, I re-vacuumed the table and tried the questionable 
query again with the same results. Here is the vacuum output in case 
it helps:

NOTICE:  --Relation logins--
NOTICE:  Pages 11717: Changed 1, reaped 0, Empty 0, New 0; Tup 
1082580: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 64, MaxLen 
92; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 
2.28s/15.38u sec.
NOTICE:  Index logins_ip_idx: Pages 4550; Tuples 1082580. CPU 
0.78s/2.65u sec.
NOTICE:  Index logins_loginid_idx: Pages 3881; Tuples 1082580. CPU 
0.62s/2.67u sec.
VACUUM

-Michael


Re: [SQL] on line numbers, drop table errors, and log files

2000-08-01 Thread Michael Richards

Hi.

Here are the options for pg_dump:
  -a          dump out only the data, no schema
  -c          clean (drop) schema prior to create
  -d          dump data as INSERT, rather than COPY, commands
  -D          dump data as INSERT commands with attribute names
  -h <host>   server host name
  -i          proceed when database version != pg_dump version
  -n          suppress most quotes around identifiers
  -N          enable most quotes around identifiers
  -o          dump object ids (oids)
  -p <port>   server port number
  -s          dump out only the schema, no data
  -t <table>  dump for this table only
  -u          use password authentication
  -v          verbose
  -x          do not dump ACL's (grant/revoke)

So you are dumping it with the following:
Proper INSERT commands/attribs, pwd auth, data only

I would consider running:
pg_dump -D -u -c -f backup.dump main_v0_6
This will dump the schema as well, and drop the objects before 
re-creating them.

-Michael

> 4) I run pg_dump this way: "pg_dump -D -u -a -f backup.dump
> main_v0_6" in order to create a backup of my "main_v0_6" database.
>  In order to regenerate the database, I first run a "\i
> create_tables.sql" script, then I run "\i backup.dump".  However,
> I end up having to hand edit the "backup.dump" file to place "drop
> sequence" statements before the "create sequence" statements
> because they are created by my "create_tables.sql" script.  I'm
> sure you Postgres gurus out there are not doing it this way, what
> is your method of database backup/recovery?



Re: [SQL] on line numbers, drop table errors, and log files

2000-08-02 Thread Michael Richards


> As I mentioned earlier, your solution worked great.  I am a bit
> puzzled about the syntax that is created by my old method, using
> "pg_dump -D -u -a".  I wonder why it creates "create sequence ..."
> commands instead of "update sequence ..."?

That is a good question. I do not know the answer. Since it is 
understood that you are dumping the data and not the schema, that 
would imply to me that the schema (sequences included) should already 
exist and need to be updated. Of course I feel that implicitly it 
should be cleaning out the contents of the tables when this is done 
as well.

Perhaps someone on the team could comment on this.

-Michael


[SQL] Database in recovery mode

2000-08-04 Thread Michael Richards

Hi.

I've got a postgres system that keeps going into recovery mode. I 
can't really find any docs on this. All of the postgres processes 
will be in the STOP state and when I try to connect it will say "The 
database is in recovery mode".

I suspect there is a query that is causing it to crash in some 
specific way that causes it. This is happening about once per day.

Any ideas?

-Michael