Re: [GENERAL] Strange result using transactions

2007-03-27 Thread Pavan Deolasee

On 3/27/07, Tom Lane [EMAIL PROTECTED] wrote:


Matthijs Melissen [EMAIL PROTECTED] writes:
 I am executing the following queries (id has a unique key):
 1) begin;
 1) delete from forum where id = 20;
 1) insert into forum (id, name) values (20, 'test');
 2) delete from forum where id = 20;
 1) commit;

 The problem is that process 2 gets the message 'DELETE 0'. I would expect
 him to get the message 'DELETE 1'.

Why do you find that strange?  Process 1 hasn't committed its insert yet.



I think what he is saying is that *after* txn 1 commits, txn 2 does not see the
record inserted by txn 1. Isn't that a fair point? I mean, txn 2 can see the
DELETE operation of txn 1, but cannot see the INSERT operation of the
same transaction.

Thanks,
Pavan

--

EnterpriseDB http://www.enterprisedb.com


Re: [GENERAL] Temporarily disable all table indices

2007-03-27 Thread Dmitry Koterov

Thanks!

*pg_indexes.indexdef* is exactly what I was looking for!

On 3/27/07, Erik Jones [EMAIL PROTECTED] wrote:


On Mar 26, 2007, at 5:24 PM, Dmitry Koterov wrote:

Hello.

I need to perform a mass operation (UPDATE) on each table row, e.g.
modify one table column:

UPDATE tbl SET tbl_text = MD5(tbl_id::text);

The problem is that if this table contains a number of indices, such an
UPDATE is very slow on a large table.

I have to drop all indices on the table, then run the update (very quick)
and after that re-create all the indices. That is much faster.
Unfortunately the table structure may change in the future (e.g. new
indices may be added), so at this abstraction layer I don't know exactly
which indices to drop and which to re-create.

Is there any way (or ready-made piece of code) to save all existing indices,
drop them all, and then re-create them after a mass UPDATE?


No, but you can use the pg_indexes view (
http://www.postgresql.org/docs/8.2/interactive/view-pg-indexes.html) to
dynamically determine what indexes a table has.
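As a sketch of how that might look (the table, column, and scratch-table names here are illustrative, not from the original post), one could save the definitions from pg_indexes, drop the indexes, run the mass UPDATE, and then replay the saved definitions:

```sql
-- Illustrative sketch only: save index definitions for table "tbl",
-- drop the indexes, mass-update, then re-create them from the saved DDL.

-- 1. Save the CREATE INDEX statements in a scratch table.
CREATE TABLE saved_indexdefs AS
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public' AND tablename = 'tbl';

-- 2. Generate (and then execute) the DROP statements.
SELECT 'DROP INDEX ' || indexname || ';' FROM saved_indexdefs;

-- 3. Run the mass UPDATE with no index-maintenance overhead.
UPDATE tbl SET tbl_text = md5(tbl_id::text);

-- 4. Re-create the indexes from the saved definitions, then clean up.
SELECT indexdef || ';' FROM saved_indexdefs;
DROP TABLE saved_indexdefs;
```

Note that an index backing a constraint (such as a primary key) cannot be removed with DROP INDEX; it has to be handled via ALTER TABLE ... DROP CONSTRAINT.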

erik jones [EMAIL PROTECTED]
software developer
615-296-0838
emma(r)






Re: [GENERAL] Strange result using transactions

2007-03-27 Thread Alban Hertroys
Pavan Deolasee wrote:
 On 3/27/07, Tom Lane [EMAIL PROTECTED] wrote:

 Matthijs Melissen [EMAIL PROTECTED] writes:
  I am executing the following queries (id has a unique key):
  1) begin;
  1) delete from forum where id = 20;
  1) insert into forum (id, name) values (20, 'test');
  2) delete from forum where id = 20;
  1) commit;

  The problem is that process 2 gets the message 'DELETE 0'. I would expect
  him to get the message 'DELETE 1'.

 Why do you find that strange?  Process 1 hasn't committed its insert yet.


 I think what he is saying is that *after* txn 1 commits, txn 2 does not
 see the record inserted by txn 1. Isn't that a fair point? I mean, txn 2
 can see the DELETE operation of txn 1, but cannot see the INSERT
 operation of the same transaction.

Not necessarily so. I'd expect only to see a successful delete in txn 2
if a record with id = 20 existed prior to the beginning of txn 1. If
that wasn't the case, then there's nothing to delete in txn 2.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Strange result using transactions

2007-03-27 Thread Matthijs Melissen

Alban Hertroys wrote:

Pavan Deolasee wrote:

On 3/27/07, Tom Lane [EMAIL PROTECTED] wrote:


Matthijs Melissen [EMAIL PROTECTED] writes:

I am executing the following queries (id has a unique key):
1) begin;
1) delete from forum where id = 20;
1) insert into forum (id, name) values (20, 'test');
2) delete from forum where id = 20;
1) commit;



The problem is that process 2 gets the message 'DELETE 0'. I would
expect him to get the message 'DELETE 1'.



Not necessarily so. I'd expect only to see a successful delete in txn
2 if a record with id = 20 existed prior to the beginning of txn 1.


That is exactly the problem. Sorry for not being clear about that.
I get DELETE 0 even if a record with id=20 already exists before both 
transactions. 





[GENERAL] Query that does not use indexes

2007-03-27 Thread Denis Gasparin
I have a query that performs a multi-way join between four tables and
doesn't use the defined indexes.

If I set enable_seqscan to off, the query obviously uses the indexes and
is considerably faster than the normally planned execution with
enable_seqscan=true.

Can you give me a reason why PostgreSQL is using a seqscan when it should not?

I also tried vacuum analyze and reindexing the whole database, but it
didn't change anything.
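One common first step when estimated and actual row counts diverge this much is to raise the per-column statistics targets on the join columns and re-run ANALYZE (the target value here is illustrative):

```sql
-- Illustrative: give the planner more detailed stats on the join columns
-- (1000 is the maximum statistics target in 8.x), then re-gather them.
ALTER TABLE component ALTER COLUMN part_id SET STATISTICS 1000;
ALTER TABLE component_part ALTER COLUMN serial_number SET STATISTICS 1000;
ANALYZE component;
ANALYZE component_part;
```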

Thank you in advance,
Denis

 Database and query info 

The database is made of four tables. Here is an extract of the
definition:

table order (70 records)
order_id serial not null primary key,
order_date timestamp not null

table order_part (233 records)
part_id serial not null primary key,
order_id integer references order(order_id)

table component (35 records)
serial_number serial not null primary key,
part_id integer not null references order_part(part_id)

table component_part (5 records)
serial_number integer not null references component(serial_number),
component_part_serial serial unique

Index component_part_1 on serial_number of component_part
Index component_part_id on part_id of component


Here is the query:

select to_char(ORDER.ORDER_DATE::date,'DD-MM-') as ORDER_DATE ,
count(component_part_serial) as COMPONENTS_PARTS_WITH_SERIAL,
count(*) as TOTAL_COMPONENTS_PARTS
from ORDER inner join ORDER_PART using(ORDER_ID)
inner join COMPONENT using(PART_ID)
inner join COMPONENT_PART using(SERIAL_NUMBER)
where ORDER.ORDER_DATE::date between  '2007-03-01' and '2007-03-27'
group by ORDER.ORDER_DATE::date order by ORDER.ORDER_DATE::date


Here is the EXPLAIN ANALYZE output with enable_seqscan on:

 Sort  (cost=12697.04..12697.04 rows=1 width=24) (actual time=1929.983..1929.991 rows=7 loops=1)
   Sort Key: (order.order_date)::date
   ->  HashAggregate  (cost=12697.00..12697.03 rows=1 width=24) (actual time=1929.898..1929.949 rows=7 loops=1)
         ->  Hash Join  (cost=9462.76..12692.00 rows=667 width=24) (actual time=1355.807..1823.750 rows=50125 loops=1)
               Hash Cond: ("outer".serial_number = "inner".serial_number)
               ->  Seq Scan on component_part  (cost=0.00..2463.76 rows=50476 width=16) (actual time=0.011..93.194 rows=50476 loops=1)
               ->  Hash  (cost=9451.14..9451.14 rows=4649 width=24) (actual time=1333.016..1333.016 rows=50145 loops=1)
                     ->  Hash Join  (cost=34.84..9451.14 rows=4649 width=24) (actual time=1.350..1202.466 rows=50145 loops=1)
                           Hash Cond: ("outer".part_id = "inner".part_id)
                           ->  Seq Scan on component  (cost=0.00..7610.87 rows=351787 width=20) (actual time=0.004..603.470 rows=351787 loops=1)
                           ->  Hash  (cost=34.84..34.84 rows=3 width=12) (actual time=1.313..1.313 rows=44 loops=1)
                                 ->  Hash Join  (cost=7.40..34.84 rows=3 width=12) (actual time=0.943..1.221 rows=44 loops=1)
                                       Hash Cond: ("outer".order_id = "inner".order_id)
                                       ->  Seq Scan on order_part  (cost=0.00..26.27 rows=227 width=8) (actual time=0.005..0.465 rows=233 loops=1)
                                       ->  Hash  (cost=7.40..7.40 rows=1 width=12) (actual time=0.301..0.301 rows=28 loops=1)
                                             ->  Seq Scan on order  (cost=0.00..7.40 rows=1 width=12) (actual time=0.108..0.226 rows=28 loops=1)
                                                   Filter: (((order_date)::date >= '2007-03-01'::date) AND ((order_date)::date <= '2007-03-27'::date))
 Total runtime: 1930.309 ms

Here is the EXPLAIN ANALYZE output with enable_seqscan off:

 Sort  (cost=19949.51..19949.51 rows=1 width=24) (actual time=1165.948..1165.955 rows=7 loops=1)
   Sort Key: (order.order_date)::date
   ->  HashAggregate  (cost=19949.47..19949.50 rows=1 width=24) (actual time=1165.865..1165.916 rows=7 loops=1)
         ->  Merge Join  (cost=15205.84..19944.47 rows=667 width=24) (actual time=541.778..1051.830 rows=50125 loops=1)
               Merge Cond: ("outer".serial_number = "inner".serial_number)
               ->  Sort  (cost=15205.84..15217.47 rows=4649 width=24) (actual time=540.331..630.632 rows=50145 loops=1)
                     Sort Key: component.serial_number
                     ->  Nested Loop  (cost=636.36..14922.66 rows=4649 width=24) (actual time=0.896..277.778 rows=50145 loops=1)
                           ->  Nested Loop  (cost=0.00..72.73 rows=3 width=12) (actual time=0.861..24.820 rows=44 loops=1)
                                 Join Filter: ("outer".order_id = "inner".order_id)
                                 ->  Index Scan using order_pkey on order  (cost=0.00..27.47 rows=1 width=12) (actual time=0.142..0.307 rows=28 loops=1)
                                       Filter: (((order_date)::date >= '2007-03-01'::date) AND ((order_date)::date <= '2007-03-27'::date))
                                 ->  Index Scan using 

Re: [GENERAL] Strange result using transactions

2007-03-27 Thread Michael Fuhr
On Tue, Mar 27, 2007 at 12:41:53PM +0200, Matthijs Melissen wrote:
 I get DELETE 0 even if a record with id=20 already exists before both 
 transactions. 

Transaction 2 (T2) is deleting the version of the row with id = 20
that was visible to T2 when it executed its DELETE.  Since T1 deleted
that version of the row first, T2 finds no row to delete after T1
commits and releases its locks.  T2 doesn't know about the row that
T1 inserted because T1 hadn't committed yet when T2 executed its
DELETE.

Run T2 as a Serializable transaction and you'll see different
behavior:

1) begin;
1) delete from forum where id = 20;
1) insert into forum (id, name) values (20, 'test');
2) begin isolation level serializable;
2) delete from forum where id = 20;
1) commit;

When T1 commits T2 should fail with SQLSTATE 40001 SERIALIZATION
FAILURE (could not serialize access due to concurrent update).
T2 still doesn't know about the row that T1 inserted but now T2
knows that something happened to the version of the row it was
trying to delete.
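In application terms, the usual response to SQLSTATE 40001 is simply to roll back and re-issue the transaction; a sketch of T2's side, using the same statements as above:

```sql
-- T2, sketched: the first attempt fails once T1 commits; after ROLLBACK,
-- the retried transaction takes a fresh snapshot and (assuming no further
-- concurrent updates) now sees and deletes the row T1 inserted.
BEGIN ISOLATION LEVEL SERIALIZABLE;
DELETE FROM forum WHERE id = 20;   -- ERROR: could not serialize access ...
ROLLBACK;

BEGIN ISOLATION LEVEL SERIALIZABLE;
DELETE FROM forum WHERE id = 20;   -- reports DELETE 1 on the retry
COMMIT;
```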

-- 
Michael Fuhr



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Merlin Moncure

On 22 Mar 2007 14:58:15 -0700, Kev [EMAIL PROTECTED] wrote:

Hi everyone,

I'm still in the design phase of a project.  I was just wondering if
anyone has any thoughts or experience on the idea of cutting the P out
of the LAMP (or in my case, WAMP for now) stack.  What I mean is having
everything encapsulated into sql (or plpgsql or plperl where needed)
functions stored in the pgsql server, and have Apache communicate with
pgsql via a tiny C program that pretty much just checks whether the
incoming function is on the allowed list and has the proper data types,
then passes it straight in.  Any errors are logged as potential security
breaches.

I'm really new to mod_perl too, so another question would be if this
would be much faster than a simple perl script that did the same thing.

I ask this because I realize I need to carefully check data coming into
pgsql functions as well as at the client end.  Why maintain a bunch of
scripts with names similar to the functions they're calling and all
performing similar checks anyway?

I was kinda salivating at the thought of how fast things would be if you
cut out the A as well, by using a Flash applet to give socket access to
JavaScript.  But then I guess you have to make your pgsql server itself
publicly accessible on some port.  Is that just asking for trouble?

I appreciate any comments or thoughts anyone might have on this.


IMO, the 'thin middleware' approach is a great way to design
applications...so you are right on the money.  The web server, IMO,
should be mostly concerned with rendering HTML.  I don't think
eliminating the middleware entirely is really practical.  While you could use a
thick-client javascript framework like GWT and write your queries in
javascript (getting data back via json), I don't think it's really
possible to secure this properly without killing the 'ease of
implementation' factor.

Then again, it's no worse than your typical old-school Visual Basic or
Delphi in-house application so common in the 90's.  I really miss the
simplicity of Delphi.

merlin



Re: [GENERAL] ERROR: out of shared memory

2007-03-27 Thread Merlin Moncure

On 3/26/07, Tom Lane [EMAIL PROTECTED] wrote:

Sorin N. Ciolofan [EMAIL PROTECTED] writes:
I have to manage an application written in java which calls another module
 written in java which uses the Postgres DBMS in a Linux environment. I'm new to
 Postgres. The problem is that for large amounts of data the application
 throws an:
  org.postgresql.util.PSQLException: ERROR: out of shared memory

AFAIK the only very likely way to cause that is to touch enough
different tables in one transaction that you run out of lock entries.
While you could postpone the problem by increasing the
max_locks_per_transaction setting, I suspect there may be some basic
application misdesign involved here.  How many tables have you got?


or advisory locks...these are easy to spot: query pg_locks and look
for entries of locktype 'advisory'.  I've already seen some apps in
the wild that use them; openads is one.
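For example, a quick check along those lines might be:

```sql
-- List sessions currently holding (or waiting on) advisory locks.
SELECT locktype, pid, granted
FROM pg_locks
WHERE locktype = 'advisory';
```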

merlin



Re: [GENERAL][OT] cutting out the middleperl

2007-03-27 Thread Peter Childs

On 27/03/07, Merlin Moncure [EMAIL PROTECTED] wrote:

On 22 Mar 2007 14:58:15 -0700, Kev [EMAIL PROTECTED] wrote:
 Hi everyone,

 I'm still in the design phase of a project.  I was just wondering if
 anyone has any thoughts or experience on the idea of cutting the P out
 of the LAMP (or in my case, WAMP for now) stack.  What I mean is
 having
 everything encapsulated into sql (or plpgsql or plperl where needed)
 functions stored in the pgsql server, and have Apache communicate with
 pgsql via a tiny C program that pretty much just checks whether the
 incoming function is on the allowed list and has the proper data
 types,
 then passes it straight in.  Any errors are logged as potential
 security
 breaches.

 I'm really new to mod_perl too, so another question would be if this
 would be much faster than a simple perl script that did the same
 thing.

 I ask this because I realize I need to carefully check data coming
 into
 pgsql functions as well as at the client end.  Why maintain a bunch of
 scripts with names similar to the functions they're calling and all
 performing similar checks anyway?

 I was kinda salivating at the thought of how fast things would be if
 you
 cut out the A as well, by using a Flash applet to give socket access
 to
 JavaScript.  But then I guess you have to make your pgsql server
 itself
 publicly accessible on some port.  Is that just asking for trouble?

 I appreciate any comments or thoughts anyone might have on this.

IMO, the 'thin middleware' approach is a great way to design
applications...so you are right on the money.  The web server, IMO,
should be mostly concerned with rendering HTML.  I don't think
eliminating the middleware entirely is really practical.  While you could use a
thick-client javascript framework like GWT and write your queries in
javascript (getting data back via json), I don't think it's really
possible to secure this properly without killing the 'ease of
implementation' factor.

Then again, it's no worse than your typical old-school Visual Basic or
Delphi in-house application so common in the 90's.  I really miss the
simplicity of Delphi.



Strangely, the in-house application is often still the better way to
go. The web can make everything 3 times more complicated than it needs
to be. Toolkits like GWT help with this, but you still need to write
middleware even when you can trust the end user. Hence
most places still use in-house applications, except the VB or Delphi
gets replaced with Ruby or Python. Here we use C++ and Qt, but that's
another story.
The web should still be used for mass-market apps and heavy
communication apps, not as the standard desktop answer (unless you
particularly like writing everything twice).
The secret is to use the right tool for the right job, and not try to
find the does-everything spanner that fits all nuts and also undoes
screws too. It's never going to work in every case. Unfortunately some
people like this idea.

Peter.



Re: [GENERAL] Could not create relation: File exists error

2007-03-27 Thread Jesse Cleary

Tom Lane wrote:

Jesse Cleary [EMAIL PROTECTED] writes:
  

This script has been running successfully for several months (70-90 min each 
night).  Out of the blue I'm now getting the following error message after each 
psql command, except the last vacuum full analyze command:



  

ERROR:  could not create relation 1663/835021/4294967254: File exists



Hm, it seems suggestive that the last number is only a few counts short
of UINT_MAX.  Not sure what to make of it though.  Do you in fact have
files under $PGDATA/base/835021/ with names in that range?

regards, tom lane

  

Thanks Tom

Some further digging:  I see that the last numbers in the path repeat 
with every nightly run.  So I'm getting the exact same error messages 
about the same files every night.  They range from 4294840092-94

4294967252-58
4294967264-67

All of these files do exist in $PGDATA/base/835021/, and all have a file
size > 0 and a current timestamp coincident with the latest of my hourly
update runs, not the maintenance script run.


One exception is notable - $PGDATA/base/835021/4294967264 - this file
has a size of 0 and a timestamp that coincides with the date and time
range of the nightly run when these errors first appeared.  This seems
like a good clue, but I'm not sure what I should do next.  Thanks...


Jesse





--
Jesse Cleary
Department of Marine Sciences   
UNC Chapel Hill

334 Chapman Hall
(919) 962-4987 


[EMAIL PROTECTED]













[GENERAL] Constraint and Index with same name? (chicken and egg probelm)

2007-03-27 Thread David Brain

Hi,

This could well be a recurrence of this issue:

http://archives.postgresql.org/pgsql-general/2007-01/msg01801.php

for which there doesn't seem to have been a resolution.

I am running:

PostgreSQL 8.1.5 on x86_64-pc-linux-gnu, compiled by GCC 
x86_64-pc-linux-gnu-gcc (GCC) 4.1.1 (Gentoo 4.1.1-r1)


I seem to have wound up with what I can only assume is a constraint and 
index sharing the same name:


e.g.

cdr=# drop index cdrimportsession_pkey;
ERROR:  cannot drop index cdrimportsession_pkey because constraint 
cdrimportsession_pkey on table cdrimportsession requires it
HINT:  You may drop constraint cdrimportsession_pkey on table 
cdrimportsession instead.



cdr=# alter table cdrimportsession drop constraint cdrimportsession_pkey;
NOTICE:  constraint fk_cdrsummary_cdrimportsession on table cdrsummary 
depends on index cdrimportsession_pkey

ERROR:  cdrimportsession_pkey is an index


So the schema here is fairly straightforward - I have two tables, 
cdrimportsession and cdrsummary which has a FK into cdrimportsession.


I discovered this issue while trying to remove the FK constraint from 
the cdrsummary table - it complained about cdrimportsession_pkey being 
an index.


I can send the output of pgdump -s on this db if this would be helpful.

While it would be great to figure out _why_ this happened, it would be
even better to figure out a way of getting around it (I've already tried
renaming the cdrimportsession_pkey index - it renames, but I then have the
same issue, just with different constraint/index names), as the tables
involved are pretty huge and a dump/restore isn't really an option.
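For diagnosis, it may help to see what the catalogs think exists under that name; a sketch (object names taken from the messages above):

```sql
-- Is there really a pg_constraint row with this name, and on which table?
SELECT conname, contype, conrelid::regclass AS table_name
FROM pg_constraint
WHERE conname = 'cdrimportsession_pkey';

-- What does pg_depend record against the relation of the same name?
SELECT classid::regclass, objid, deptype
FROM pg_depend
WHERE refobjid = 'cdrimportsession_pkey'::regclass;
```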


Thanks,

David.

--
David Brain - bandwidth.com
[EMAIL PROTECTED]
919.297.1078



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Kenneth Downs
Kev, we have a GPL'd product targeting Postgres that has significant 
overlap with what you want, though in other areas we went in another 
direction.  The site is www.andromeda-project.org, and I've put some 
comments below:


Kev wrote:

Hi everyone,

I'm still in the design phase of a project.  I was just wondering if
anyone has any thoughts or experience on the idea of cutting the P out
of the LAMP (or in my case, WAMP for now) stack.  What I mean is
having
everything encapsulated into sql (or plpgsql or plperl where needed)
functions stored in the pgsql server, and have Apache communicate with
pgsql via a tiny C program that pretty much just checks whether the
incoming function is on the allowed list and has the proper data
types,
then passes it straight in.  Any errors are logged as potential
security
breaches.
  


Andromeda's goal is to implement all biz rules: constraints, automations 
and security, in the server. 

This in effect makes the web server a proxy to the database, which 
sounds like what you are after.   The P portion for us is PHP, not 
Perl, and it is small though non-zero.  It has only two jobs really.  In 
the one direction it converts HTTP requests into SQL, and in the other 
it converts SQL results into HTML.


In terms of experience, I sat down to write the first code 33 months 
ago, and it began to pay my bills about six months later.  All of the 
commercial bragging stuff is on the company website: http://www.secdat.com.



I'm really new to mod_perl too, so another question would be if this
would be much faster than a simple perl script that did the same
thing.
  


Can't say there.  My personal preference is for PHP because I can't 
understand Perl five minutes after I've written it.



I ask this because I realize I need to carefully check data coming
into
pgsql functions as well as at the client end.  Why maintain a bunch of
scripts with names similar to the functions they're calling and all
performing similar checks anyway?
  


Well, actually we tackled that problem by deciding to *preserve* direct
table access through SQL as the standard API, which I realize is not
standard, but for the life of me I can't understand why, since it is
such an amazingly simpler way to get what everyone says they are after.


Here's what I mean.  We write out a database spec in a plaintext file 
that includes security, constraints, and automations.  A builder 
program then generates the DDL, encodes the biz logic in triggers, and 
assigns table sel/ins/upd/del permissions to the tables.


No messy API to remember or manage.  Just specify the tables and
columns, who can do what, and what the formulas are, and it's all automatic.


A huge benefit of this is the basic ability to manipulate users'
databases through direct SQL.

It's also IMHO the only way to ensure that you can accomplish the task
of having the web server be a proxy.  It's easy to convert HTTP into
simple SQL insert/update etc., much harder to make it try to learn an API.



I was kinda salivating at the thought of how fast things would be if
you
cut out the A as well, by using a Flash applet to give socket access
to
JavaScript.  But then I guess you have to make your pgsql server
itself
publicly accessible on some port.  Is that just asking for trouble?

I appreciate any comments or thoughts anyone might have on this.

Thanks,
Kev







Re: [GENERAL][OT] cutting out the middleperl

2007-03-27 Thread Merlin Moncure

On 3/27/07, Peter Childs [EMAIL PROTECTED] wrote:

Strangely, the in-house application is often still the better way to
go. The web can make everything 3 times more complicated than it needs
to be. Toolkits like GWT help with this, but you still need to write
middleware even when you can trust the end user. Hence
most places still use in-house applications, except the VB or Delphi
gets replaced with Ruby or Python. Here we use C++ and Qt, but that's
another story.


I agree with everything you said except the point about GWT.
Using a framework like this you can have your query in the javascript
and pass it directly through to the database, passing the data back
using an extremely simple (think 10-line) php or perl rpc that renders
the query result back to the browser as json.  In fact, you can write,
compile, and debug the app in java, which is a great advantage of GWT
(imo).  Of course, this is not an appropriate way of writing an
application over an untrusted network, but otoh it is very RAD.

What you get is the limitation of working through the browser, but you
can kiss goodbye to the deployment headaches that plague classic
thick-client apps, because the runtime is 100% contained in the browser
rendering engine and some mighty .js files.  With a little bit of
clever programming you can get proper binding if you prefer that type
of development.

merlin



Re: [GENERAL] ERROR: out of shared memory

2007-03-27 Thread Tom Lane
Sorin N. Ciolofan [EMAIL PROTECTED] writes:
 It seems that the legacy application creates tables dynamically and the
 number of the created tables depends on the size of the input of the
 application. For the specific input which generated that error I've
 estimated a number of created tables of about 4000. 
 Could this be the problem?

If you have transactions that touch many of them within one transaction,
then yup, you could be out of locktable space.  Try increasing
max_locks_per_transaction.
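For reference, the setting lives in postgresql.conf and takes effect only after a restart; the value below is illustrative, not a recommendation:

```
# postgresql.conf (illustrative)
# The shared lock table has room for roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# locks, so raising this enlarges shared memory use accordingly.
max_locks_per_transaction = 256    # default is 64; requires restart
```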

regards, tom lane



Re: [GENERAL][OT] cutting out the middleperl

2007-03-27 Thread Csaba Nagy
 I agree with everything you said except the point about GWT.
 Using a framework like this you can have your query in the javascript
 and pass it directly through to the database, passing the data back
 using an extremely simple (think 10-line) php or perl rpc that renders
 the query result back to the browser as json.  In fact, you can write,
 compile, and debug the app in java, which is a great advantage of GWT
 (imo).  Of course, this is not an appropriate way of writing an
 application over an untrusted network, but otoh it is very RAD.

Untrusted is the key point here... in most real-world cases you will be
far from the level of trust at which you would run SQL coming from the
end user's browser...

 What you get is the limitation of working through the browser but you
 can kiss goodbye to deployment headaches that plague classic thick
 client apps because the runtime is 100% contained in the browser
 rendering engine and some mighty .js files.

And this raises the next problem: the moment your .js is too mighty,
the users will come screaming after you once their browser starts to
regularly crash, drive the client box out of memory, bog it down to a
halt, etc.

There's no way you can replace all the functionality of a middleware
layer, but it's certainly true some cleverness can be placed at the
client side. Google mail is a very good example, I love that kind of
interface...

Cheers,
Csaba.





Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Randal L. Schwartz
>>>>> "Kenneth" == Kenneth Downs [EMAIL PROTECTED] writes:

Kenneth This in effect makes the web server a proxy to the database, which
Kenneth sounds like what you are after.  The P portion for us is PHP, not
Kenneth Perl, and it is small though non-zero.  It has only two jobs really.
Kenneth In the one direction it converts HTTP requests into SQL, and in the
Kenneth other it converts SQL results into HTML.

How do you control trust?  I presume you're not accepting raw SQL queries (or
even snippets) over the wire, so you have to have enough server-side mapping
code to map domain objects into database objects and domain verbs into
queries, and then authenticate and authorize that this verb is permitted by
the incoming user. That can't be just a trivial amount of code.  That's
usually a serious pile of code.

And please don't tell me you do all of that client-side. :)

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: [GENERAL] COPY losing information

2007-03-27 Thread Jaime Silvela

Just bringing back to life a message I sent last July.

The problem I was having was that when importing very large data sets,
COPY seemed to drop some data. I built a script to use INSERTs instead,
and had the same problem. My server runs 8.1.3 on Linux. Several people
investigated; Reece Hart was unable to reproduce the problem using my
same data file, and Tom Lane suggested a buggy client installation
dropping info, or hardware problems on the server.


I've come back to this problem recently, and have found a couple of 
interesting things.

I'm using a 418MB data file. wc -l gives me 6,802,367 lines

On the server (Linux running Postgres 8.1.3): 4GB RAM, and the disk
Postgres lives on is a network drive.

After copying the data file to the server and running psql locally.
- Import try 1: count(*) gives 6,801,664
- Import try 2: count(*) gives 6,802,241
- Every import gave a different count
The log file doesn't say anything other than
LOG: checkpoints are occurring too frequently (27 seconds apart), but
that should not be a problem, right?


I also tried it in my local-disk Windows installation,  which runs 
Postgres 8.1.0

- Import try 1: count(*) gives 6,824,366
- All imports since then give, correctly, count(*) = 6,802,367

This time, I FTP'd the file to my mac laptop, which is running 8.2.3. 
The import gave me the correct count every time.
Remembering the suggestion of hardware problems, I made an empty install 
of 8.2.3 on my server. On it, the data file was fully imported every 
single time.
I decided to copy all my data to 8.2.3 to test whether the bug depends on
a) the size of the database (30GB of data) or b) some problem with the
data or some stored procedure. I've tried the import, and again, it's
correct every single time.


During the import of the database data to 8.2.3, I got mostly a clean 
set, except for the 3 error messages listed below which suggest a buggy 
kernel.


A bunch of questions:
1) Has anybody seen this type of behavior in 8.1.*? Is there a known bug
that might explain the problems on both the Linux server and the Windows
box?
2) Those errors on the database import are troubling, but they affected
only 3 tables. Wouldn't a buggy kernel give more trouble? Is it possible
that this was a bug in 8.1.3's pg_dump? Also, since the database is so
big, the output of pg_dump is split'd and bzip2'd, so there's room for
error there too.
3) On Friday I'm going to upgrade the production database from 8.1.3 to 
8.2.3. Any caveats or words of advice?


Thank you,
Jaime


ERROR:  unexpected data beyond EOF in block 23662 of relation portfolio
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY portfolio, line 3426949: 210395 1974263 1   723
STATEMENT:  COPY portfolio (deal_id, security_id, amount, portfolio_version) FROM stdin;
ERROR:  unexpected data beyond EOF in block 4028 of relation coverage_test_val
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY coverage_test_val, line 338035: 340676  588  2006-08-25  STEPUP_TRIGGER2 6   0.0 0.0  7
STATEMENT:  COPY coverage_test_val (coverage_test_val_id, deal_id, observation_date, coverage_test_name, coverage_test_priority, coverage_test_value, coverage_test_trigger, coverage_test_type_id) FROM stdin;
ERROR:  unexpected data beyond EOF in block 4049 of relation deal_current_val
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY deal_current_val, line 511050: 612884   7008  2005-09-21  23  1.957871
STATEMENT:  COPY deal_current_val (deal_current_val_id, deal_id, observation_date, type_id, deal_current_val) FROM stdin;



***
Bear Stearns is not responsible for any recommendation, solicitation,
offer or agreement or any information about any transaction, customer
account or account activity contained in this communication.

Bear Stearns does not provide tax, legal or accounting advice.  You
should consult your own tax, legal and accounting advisors before
engaging in any transaction. In order for Bear Stearns to comply with
Internal Revenue Service Circular 230 (if applicable), you are notified
that any discussion of U.S. federal tax issues contained or referred to
herein is not intended or written to be used, and cannot be used, for
the purpose of:  (A) avoiding penalties that may be imposed under the
Internal Revenue Code; nor (B) promoting, marketing or recommending to
another party any transaction or matter addressed herein.
***

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Every user has own database - how?

2007-03-27 Thread Filip Rembiałkowski

22 Mar 2007 17:01:20 -0700, Jaro [EMAIL PROTECTED]:

Hello

How do I create something like this:

UserName: user1 (he's root db_user1)
DatabaseName db_user1

UserName: user2  (he's root db_user2)
DatabaseName db_user2


I'd like to create several users and several databases; every user should
be root only for his own database

maybe superuser right is not needed, just make them db owners
like this:

create user john password 'apple';
create database john owner john;

create user mary password 'strawberry';
create database mary owner mary;
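By default every role can still connect to (though not create objects in) the other databases. To keep each user strictly in his own database, the grants can be tightened; a sketch, reusing the role and database names above (note the CONNECT privilege on databases is new in 8.2):

```sql
-- keep john out of mary's database and vice versa;
-- owners retain their implicit rights, everyone else can't even connect
REVOKE CONNECT ON DATABASE john FROM PUBLIC;
REVOKE CONNECT ON DATABASE mary FROM PUBLIC;
```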



--
Filip Rembiałkowski

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] COPY losing information

2007-03-27 Thread Jaime Silvela

Correction: my server is running 8.1.3

Jaime Silvela wrote:

Just bringing back to life a message I sent last July.

The problem I was having was that when importing very large data sets, 
COPY seemed to drop some data. I built a script to use INSERTs, and 
same problem. My server runs 8.1.3 on Linux. Several people 
investigated, Reece Hart was unable to reproduce the problem using my 
same data file, and Tom Lane suggested a buggy client installation 
dropping info, or hardware problems in the server.


I've come back to this problem recently, and have found a couple of 
interesting things.

I'm using a 418MB data file. wc -l gives me 6,802,367 lines

On the server (Linux running Postgres 2.1.3). 4GB RAM, and the disk 
Postgres lives in is a network drive.

After copying the data file to the server and running psql locally.
- Import try 1: count(*) gives 6,801,664
- Import try 2: count(*) gives 6,802,241
- Every import gave a different count
The log file doesn't say anything other than
LOG: checkpoints are occurring too frequently (27 seconds apart), but 
that should not be a problem, right?


I also tried it in my local-disk Windows installation, which runs 
Postgres 8.1.0

- Import try 1: count(*) gives 6,824,366
- All imports since then give, correctly, count(*) = 6,802,367

This time, I FTP'd the file to my mac laptop, which is running 8.2.3. 
The import gave me the correct count every time.
Remembering the suggestion of hardware problems, I made an empty 
install of 8.2.3 on my server. On it, the data file was fully imported 
every single time.
I decided to copy all my data to 8.2.3 to test dependency of the bug 
on a) size of the database (30GB of data) or b) some problem with data 
or some stored procedure. I've tried the import, and again, it's 
correct every single time.


During the import of the database data to 8.2.3, I got mostly a clean 
set, except for the 3 error messages listed below which suggest a 
buggy kernel.


A bunch of questions:
1) Has anybody seen this type of behavior in 8.1.*, is there a known 
bug that might explain the problems in both the Linux server and the 
Windows box?
2) Those errors on the database import are troubling, but they affected 
only 3 tables. Wouldn't a buggy kernel give more trouble? Is it 
possible that this was a bug in 8.1.3's pg_dump? Also, since the 
database is so big, the output of pg_dump is split'd and bzip2'd, so 
there's room for error there too.
3) On Friday I'm going to upgrade the production database from 8.1.3 
to 8.2.3. Any caveats or words of advice?


Thank you,
Jaime


ERROR:  unexpected data beyond EOF in block 23662 of relation portfolio
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY portfolio, line 3426949: 210395 1974263 1   723
STATEMENT:  COPY portfolio (deal_id, security_id, amount, portfolio_version) FROM stdin;
ERROR:  unexpected data beyond EOF in block 4028 of relation coverage_test_val
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY coverage_test_val, line 338035: 340676  588  2006-08-25  STEPUP_TRIGGER2 6   0.0 0.0  7
STATEMENT:  COPY coverage_test_val (coverage_test_val_id, deal_id, observation_date, coverage_test_name, coverage_test_priority, coverage_test_value, coverage_test_trigger, coverage_test_type_id) FROM stdin;
ERROR:  unexpected data beyond EOF in block 4049 of relation deal_current_val
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY deal_current_val, line 511050: 612884   7008  2005-09-21  23  1.957871
STATEMENT:  COPY deal_current_val (deal_current_val_id, deal_id, observation_date, type_id, deal_current_val) FROM stdin;







[GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Marc Evans

Hello -

Over the past couple of years I have made use of postgresql as my database 
of choice when developing new software. During that time, my clients have 
in multiple cases eventually come back to me and requested a re-targeting 
to "any database that we (my client) can find skilled ops staff to 
support." In a most recent case, professional recruiters were employed to 
try to find such people. The search was disappointing at best.


My question for this community is, what do enterprises that you deploy 
postgresql within do for skilled operations staffing? I can understand 
trying to convert a mysql or Oracle person to work on postgresql, but it 
would be very helpful to have a potential talent pool to draw from that 
was similar to those others. People with HA, scaling and 
performance tuning knowledge seem impossible to find 
except among people wanting to be developers.


The sad reality from what I have observed is that unless more people gain 
those skills and want to work in ops, it's becoming very hard for me to 
justify recommending postgresql for enterprise (or larger) scale projects.


What do others do and/or experience?

Thanks in advance - Marc

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Merlin Moncure

On 3/27/07, Randal L. Schwartz merlyn@stonehenge.com wrote:

 Kenneth == Kenneth Downs [EMAIL PROTECTED] writes:

Kenneth This in effect makes the web server a proxy to the database, which
Kenneth sounds like what you are after.  The P portion for us is PHP, not
Kenneth Perl, and it is small though non-zero.  It has only two jobs really.
Kenneth In the one direction it converts HTTP requests into SQL, and in the
Kenneth other it converts SQL results into HTML.

How do you control trust?  I presume you're not accepting raw SQL queries (or
even snippets) over the wire, so you have to have enough server-side mapping
code to map domain objects into database objects and domain verbs into
queries, and then authenticate and authorize that this verb is permitted by
the incoming user. That can't be just a trivial amount of code.  That's
usually a serious pile of code.

And please don't tell me you do all of that client-side. :)


looking at his project, it looks like you create tables and forms
using a simple rule-based system. very elegant imo, although I would
greatly prefer to have the rules be in tables themselves, so I can
manipulate them with sql, or self-hosting dialogs.   very interesting
project i must say, it feels like alphora but with a web spin on it.

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL][OT] cutting out the middleperl

2007-03-27 Thread Merlin Moncure

On 3/27/07, Csaba Nagy [EMAIL PROTECTED] wrote:

 I agree with everything you said except the point about the GWT.
 Using a framework like this you can have your query in the javascript,
 and pass it through directly the database and pass the data back using
 extremely simple (think 10 line) php or perl rpc that renders query
 result back in json to the browser.  In fact, you can write, compile,
 and debug the app in java which is great advantage of gwt (imo).  Of
 course, this is not an appropriate way of writing an application over
 untrusted network but otoh, isvery RAD.

Untrusted is the key point here... in most of the real world cases you
will be far away from such trust that you would run SQL coming from the
end users browser...


well, untrusted meaning open to the general public. you can ssl encrypt the
session and do authentication in the middleware (10 line php becomes
50 lines).  The real danger is that someone could reverse engineer your .js
app and execute arbitrary sql, which is quite dangerous to any database,
even after basic armoring.  However, in-house application development
is quite common, maybe the most common type of development.


 What you get is the limitation of working through the browser but you
 can kiss goodbye to deployment headaches that plague classic thick
 client apps because the runtime is 100% contained in the browser
 rendering engine and some mighty .js files.

And this raises the next problem: the moment your .js is too mighty,
the users will come screaming after you once their browser starts to
regularly crash, drive the client box out of memory, bog it down to a
halt, etc.


maybe...google and others have pretty much nailed the leaky browser
problem on modern browsers imo.  I think you may find this is much
more reasonable than you might expect...

my point is that with a thick server you can do very rapid development,
eliminating the middleware completely and doing all work on
client/server.  and, reversing .js is only slightly more difficult
than reversing vb6 for example, which is arguably the most popular, albeit
reviled, quick'n'dirty application platform of all time.  If most of
the real work is done on the server, though, it's not so bad.

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Merlin Moncure

On 3/27/07, Marc Evans [EMAIL PROTECTED] wrote:

Hello -

Over the past couple of years I have made use of postgresql as my database
of choice when developing new software. During that time, my clients have
in multiple cases eventually come back to me and requested a re-targeting
to Any database that we (my client) can find skilled ops staff to
support. In a most recent case, professional recruiters were employed to
try to find such people. The search was disappointing at best.

My question for this community is, what do enterprises that you deploy
postgresql within do for skilled operations staffing? I can understand
trying to convert a mysql or Oracle person to work on postgresql, but it
would be very helpful to have a potential talent pool to draw from that
was similar to those others. Finding people with HA, scaling and
performance tuning knowledge is something that seems impossible to find
except in people wanting to be developers.

The sad reality from what I have observed is that unless more people gain
those skills and want to work in ops, it's becoming very hard for me to
justify recommending postgresql for enterprise (or larger) scale projects.

What do others  do and/or experience?


PostgreSQL talent is in high demand.  From the perspective of
maintainability, this is probably the only drawback (but a serious
one) to choosing it as a platform to run a company on.  There is, IMO, a
good reason for this...pg people tend to be very good and tend to stay
employed...

If I were in your position, I would suggest contracting is the best way
to go for those companies, either through yourself (the obvious
choice), or hooking them up with some of the bigger names in the
postgresql community: command prompt, agliodbs, etc.

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Kenneth Downs

Randal L. Schwartz wrote:

Kenneth == Kenneth Downs [EMAIL PROTECTED] writes:



Kenneth This in effect makes the web server a proxy to the database, which
Kenneth sounds like what you are after.  The P portion for us is PHP, not
Kenneth Perl, and it is small though non-zero.  It has only two jobs really.
Kenneth In the one direction it converts HTTP requests into SQL, and in the
Kenneth other it converts SQL results into HTML.

How do you control trust?  I presume you're not accepting raw SQL queries (or
even snippets) over the wire, so you have to have enough server-side mapping
code to map domain objects into database objects and domain verbs into
queries, and then authenticate and authorize that this verb is permitted by
the incoming user. That can't be just a trivial amount of code.  That's
usually a serious pile of code.

  


In a proxy or quasi-proxy situation the simplest scenario is direct 
table access, all other scenarios are more complicated and reduce to 
table access in the end.  So because the problem must be considered in 
terms of table access we ask what is required to pull that off, and the 
answer is:


a) the database is implementing security
b) users are using real accounts instead of connecting as a superuser 
and having the client do the security
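A minimal sketch of what points a) and b) look like in PostgreSQL terms (the role and table names here are hypothetical, for illustration only):

```sql
-- a real account per user; the web layer connects as alice, never as superuser
CREATE ROLE alice LOGIN PASSWORD 'secret';

-- the database itself enforces what alice may do (no DELETE here, say)
GRANT SELECT, INSERT, UPDATE ON example TO alice;
```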


When this is the case, there are only two implementation issues.  The 
first is how to manage trust (or authentication), and the second is the 
mundane issue of how to encode the queries.


Just a couple of weeks ago we discussed the trust issue, it comes down 
to the known design tradeoffs off HTTPS, sessions, dongles, user habits 
and so forth.  'nuf said on that.


As for the mundane question of how to encode the queries, the KISS 
principle says they will come over looking like HTML FORM actions (post 
or get).  So you'll have a list of input values with some hidden 
variables that control the action.


You need precious little code to translate these into SQL if you have a 
description of the database; we use the old-fashioned term "data 
dictionary" for this.  Our data dictionary lists the column names, types 
and sizes for each table (among other things).  Since all simple SQL 
commands are lists of column names and values, the SQL generation is 
child's play.  Our typical code might look like this:


if (gp('gp_mode') == 'ins') {         // gp() retrieves a get/post variable
    $rowvalues = aFromGP('txt_');     // convert a group of post vars into an associative array
    $table = gp('gp_table');          // fetch the table name from the stream
    SQLX_insert($table, $rowvalues);  // this routine generates an insert statement
}
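PostgreSQL ships a data dictionary of its own that a generator like SQLX_insert() could draw on. Andromeda's dictionary is its own plaintext spec, so the following is an analogy rather than its implementation; a sketch of the kind of catalog query involved (the table name is the example one from the text):

```sql
-- column names, types and sizes for one table, in declared order
SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'example'
ORDER BY ordinal_position;
```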

The server will throw an error for constraint violations or security 
violations, the web layer doesn't concern itself with these things 
except to report them.


The only thing the web layer need do is handle the escaping of quotes to 
prevent SQL injection, but again, this is only to prevent the user from 
shooting himself in the foot, anything he injects we'd be happy to 
execute for him, since it all runs at his security level!


The shocking conclusion from points a) and b) at the top of this reply 
is this:  there is absolutely no difference, from a security 
perspective, between this HTTP request:


index.php?gp_table=example&gp_mode=ins&txt_colname=value&txt_colname=value

and this one:

index.php?gp_sql=insert+into+example+(column1,column2)+values+(value1,value2)

Amazing!  The simple fact is the user is either authorized to execute 
the query or he isn't.  If you connect to the database using his 
credentials then let him inject all the SQL he wants, if that's his idea 
of fun.




And please don't tell me you do all of that client-side. :)

  


Well, since you said please, and since we don't do it, I won't say it.


Re: [GENERAL] Constraint and Index with same name? (chicken and egg probelm)

2007-03-27 Thread Tom Lane
David Brain [EMAIL PROTECTED] writes:
 This could well be a recurrence of this issue:
 http://archives.postgresql.org/pgsql-general/2007-01/msg01801.php
 for which there doesn't seem to have been a resolution.

I never got a reproducible case out of the other reporter ... can you
provide one?  His looked like it had something to do with foreign
key constraints named the same as unique/primary key constraints,
but AFAICS 8.1 won't let you create such, so I dunno how he got into
that state.

regards, tom lane



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Kenneth Downs

Merlin Moncure wrote:

On 3/27/07, Randal L. Schwartz merlyn@stonehenge.com wrote:

 Kenneth == Kenneth Downs [EMAIL PROTECTED] writes:

Kenneth This in effect makes the web server a proxy to the database, 
which
Kenneth sounds like what you are after.  The P portion for us is 
PHP, not
Kenneth Perl, and it is small though non-zero.  It has only two jobs 
really.
Kenneth In the one direction it converts HTTP requests into SQL, and 
in the

Kenneth other it converts SQL results into HTML.

How do you control trust?  I presume you're not accepting raw SQL 
queries (or
even snippets) over the wire, so you have to have enough server-side 
mapping

code to map domain objects into database objects and domain verbs into
queries, and then authenticate and authorize that this verb is 
permitted by

the incoming user. That can't be just a trivial amount of code.  That's
usually a serious pile of code.

And please don't tell me you do all of that client-side. :)


looking at his project, it looks like you create tables and forms
using a simple rule-based system. very elegant imo, although I would
greatly prefer to have the rules be in tables themselves, so I can
manipulate them with sql, or self-hosting dialogs.   very interesting
project i must say, it feels like alphora but with a web spin on it.


Actually we do put the rules in the tables and you can execute SQL 
directly, something that I so much take for granted now that I sometimes 
have to remind myself that most of the world cannot do this! 

If the website is not giving that impression, I'll have to correct that, 
ouch!  Can you tell me what gave you the impression we were just about 
web forms?



Thanks for the comments, elegant, now that's something I'll have to 
forward to Mom :)





merlin



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Kevin Field
 On 22 Mar 2007 14:58:15 -0700, Kev [EMAIL PROTECTED] wrote:
  Hi everyone,
 
  I'm still in the design phase of a project.  I was just wondering if
  anyone has any thoughts or experience on the idea of cutting the P out
  of the LAMP (or in my case, WAMP for now) stack.  What I mean is
  having
  everything encapsulated into sql (or plpgsql or plperl where needed)
  functions stored in the pgsql server, and have Apache communicate with
  pgsql via a tiny C program that pretty much just checks whether the
  incoming function is on the allowed list and has the proper data
  types,
  then passes it straight in.  Any errors are logged as potential
  security
  breaches.
 
  I'm really new to mod_perl too, so another question would be if this
  would be much faster than a simple perl script that did the same
  thing.
 
  I ask this because I realize I need to carefully check data coming
  into
  pgsql functions as well as at the client end.  Why maintain a bunch of
  scripts with names similar to the functions they're calling and all
  performing similar checks anyway?
 
  I was kinda salivating at the thought of how fast things would be if
  you
  cut out the A as well, by using a Flash applet to give socket access
  to
  JavaScript.  But then I guess you have to make your pgsql server
  itself
  publicly accessible on some port.  Is that just asking for trouble?
 
  I appreciate any comments or thoughts anyone might have on this.
 
 IMO, I think the 'thin middleware' approach is a great way to design
 applications...so you are right on the money.  The web server, IMO,
 should be mostly concerned about rendering html.  I don't think
 eliminating the middleware is really practical.  While you could use a
 thick-client javascript framework like GWT and write your queries in
 javascript (getting data back via json), I don't think it's really
 possible to secure this properly without killing the 'ease of
 implementation' factor.
 
 Then again, it's no worse than your typical old school visual basic or
 delphi in-house application so common in the 90's.  I really miss the
 simplicity of Delphi.
 
 merlin

Hi Merlin,

Thanks for your reply, these are helpful comments.  Just wondering about the 
security factor, though--is there something specific that would be impossible 
to lock down?  I would think (but I'm no expert, to be sure!) that with a 
whitelist-only filter (I mean, if there weren't any buffer overflow 
vulnerabilities or anything like that) it would be tough to sneak malicious SQL 
functions or other code through.  Or did you mean in some other way?

I don't think I would pass straight SQL queries anyway, if it makes a 
difference, but rather just the function name and the parameters separately, so 
that there's always an SQL function involved, but so that that's not obvious 
from the javascript end.  All that's obvious is the thin-perlness of it: my 
perl script could be calling another script by the name we pass it, or a perl 
subroutine, or something else.
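For illustration, a whitelisted server-side function of the kind being described might look like the sketch below. The function and table names are invented; note that INSERT ... RETURNING only arrives in 8.2, hence the currval() call:

```sql
-- the thin layer only ever calls named functions like this, never raw SQL
CREATE FUNCTION add_comment(integer, text) RETURNS integer AS $$
  INSERT INTO comments (user_id, body) VALUES ($1, $2);
  SELECT currval('comments_comment_id_seq')::integer;
$$ LANGUAGE sql SECURITY DEFINER;
```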

Heh...Delphi was fun, except when I wanted to do some fancier things with it, 
even in high school...although our low-budget high school didn't exactly have 
the latest major version.

Thanks,
Kev




Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Erik Jones


On Mar 27, 2007, at 9:37 AM, Merlin Moncure wrote:


On 3/27/07, Marc Evans [EMAIL PROTECTED] wrote:

Hello -

Over the past couple of years I have made use of postgresql as my  
database
of choice when developing new software. During that time, my  
clients have
in multiple cases eventually come back to me and requested a re- 
targeting

to Any database that we (my client) can find skilled ops staff to
support. In a most recent case, professional recruiters were  
employed to

try to find such people. The search was disappointing at best.

My question for this community is, what do enterprises that you  
deploy
postgresql within do for skilled operations staffing? I can  
understand
trying to convert a mysql or Oracle person to work on postgresql,  
but it
would be very helpful to have a potential talent pool to draw from  
that

was similar to those others. Finding people with HA, scaling and
performance tuning knowledge is something that seems impossible to  
find

except in people wanting to be developers.

The sad reality from what I have observed is that unless more  
people gain
those skills and want to work in ops, it's becoming very hard for  
me to
justify recommending postgresql for enterprise (or larger) scale  
projects.


What do others  do and/or experience?


PostgreSQL talent is in high demand.  From perspective of
maintainability, this is probably the only drawback (but a serious
one) to choose it as a platform to run a company on.  There is, IMO, a
good reason for this...pg people tend to be very good and tend to stay
employed...

If I was in your position, I would suggest contracting is the best way
to go for those companies, either through yourself (the obvious
choice), or hook them up with some of the bigger names in the
postgresql community, command prompt, agliodbs, etc.


Not having looked myself, this is as much a question as a suggestion,  
but are there not postgres dba training seminars/courses you could  
recommend they send their dba's to?


erik jones [EMAIL PROTECTED]
software developer
615-296-0838
emma(r)





Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Kevin Field
Hi Kenneth,

This is wonderful news.  I will definitely be checking into it as soon as I 
have a moment.  Thanks!

Kev

 Kev, we have a GPL'd product targeting Postgres that has significant 
 overlap with what you want, though in other areas we went in another 
 direction.  The site is www.andromeda-project.org, and I've put some 
 comments below:
 
 Kev wrote:
  Hi everyone,
 
  I'm still in the design phase of a project.  I was just wondering if
  anyone has any thoughts or experience on the idea of cutting the P out
  of the LAMP (or in my case, WAMP for now) stack.  What I mean is
  having
  everything encapsulated into sql (or plpgsql or plperl where needed)
  functions stored in the pgsql server, and have Apache communicate with
  pgsql via a tiny C program that pretty much just checks whether the
  incoming function is on the allowed list and has the proper data
  types,
  then passes it straight in.  Any errors are logged as potential
  security
  breaches.

 
 Andromeda's goal is to implement all biz rules: constraints, automations 
 and security, in the server. 
 
 This in effect makes the web server a proxy to the database, which 
 sounds like what you are after.   The P portion for us is PHP, not 
 Perl, and it is small though non-zero.  It has only two jobs really.  In 
 the one direction it converts HTTP requests into SQL, and in the other 
 it converts SQL results into HTML.
 
 In terms of experience, I sat down to write the first code 33 months 
 ago, and it began to pay my bills about six months later.  All of the 
 commercial bragging stuff is on the company website: http://www.secdat.com.
 
  I'm really new to mod_perl too, so another question would be if this
  would be much faster than a simple perl script that did the same
  thing.

 
 Can't say there.  My personal preference is for PHP because I can't 
 understand Perl five minutes after I've written it.
 
  I ask this because I realize I need to carefully check data coming
  into
  pgsql functions as well as at the client end.  Why maintain a bunch of
  scripts with names similar to the functions they're calling and all
  performing similar checks anyway?

 
 Well actually we tackled that problem by deciding to *preserve* direct 
 table access through SQL as the standard API, which I realize is not the 
 standard, but for the life of me I can't understand why, since it is 
 such an amazingly simple way to get what everyone says they are after.
 
 Here's what I mean.  We write out a database spec in a plaintext file 
 that includes security, constraints, and automations.  A builder 
 program then generates the DDL, encodes the biz logic in triggers, and 
 assigns table sel/ins/upd/del permissions to the tables.
 
 No messy API to remember or manage.  Just specify the tables and 
 columns, who can do what, and what the formulas are, and it's all automatic.
 
 A huge benefit to this is the basic ability to manipulate users' 
 databases through direct SQL. 
 
 It's also IMHO the only way to ensure that you can accomplish the task 
 of having the web server be a proxy.  It's easy to convert HTTP into 
 simple SQL insert/update etc., much harder to make it try to learn an API.
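As a hedged sketch of what such a builder's output might resemble (the table, columns, and role here are invented for illustration, not Andromeda's actual output): business logic lands in a trigger, permissions go straight on the table:

```sql
-- an "automation": a derived column maintained by the server
CREATE FUNCTION example_calc() RETURNS trigger AS $$
BEGIN
  NEW.total := NEW.price * NEW.qty;
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER example_biu BEFORE INSERT OR UPDATE ON example
  FOR EACH ROW EXECUTE PROCEDURE example_calc();

-- direct table access as the API: plain sel/ins/upd/del grants
GRANT SELECT, INSERT, UPDATE, DELETE ON example TO webuser;
```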
 
  I was kinda salivating at the thought of how fast things would be if
  you
  cut out the A as well, by using a Flash applet to give socket access
  to
  JavaScript.  But then I guess you have to make your pgsql server
  itself
  publicly accessible on some port.  Is that just asking for trouble?
 
  I appreciate any comments or thoughts anyone might have on this.
 
  Thanks,
  Kev
 
 
  ---(end of broadcast)---
  TIP 3: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/docs/faq

 




Re: [GENERAL] Constraint and Index with same name? (chicken and egg probelm)

2007-03-27 Thread David Brain
Not sure I can provide one either, I'm afraid; I don't think I did 
anything unusual.  But I can provide you with some background as to how 
this DB was created, so you can see whether any steps may have been 
problematic.


This db was created as part of an upgrade to new hardware, using Slony 
to copy the data and keep things in sync during the 
initial test period.


1. Created basic db schema, minus indexes and constraints.
2. Setup Slony replication to copy/sync data from master db.
3. Once the sync was complete, re-added indexes and constraints to new DB using
pgadmin (by copying the ddl from the original db - again using pgadmin)
4. Removed FK constraint from the old db - worked ok.
5. Attempted to remove FK constraint from new DB - hit the problem I am 
seeing now.
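When a constraint and an index end up sharing a name, the system catalogs can show which object is which. A sketch of the sort of inspection queries that may help narrow it down (replace tablename with the affected table):

```sql
-- constraints on the table: contype is 'f' for FK, 'p' for PK, 'u' for unique
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'tablename'::regclass;

-- indexes on the same table, with their definitions
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'tablename';
```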


The db is now semi production, but I'll happily provide whatever info I 
can short of needing a restart (and even a restart could be scheduled if 
necessary).


Let me know if I can provide any more info.

David.

Tom Lane wrote:

David Brain [EMAIL PROTECTED] writes:

This could well be a recurrence of this issue:
http://archives.postgresql.org/pgsql-general/2007-01/msg01801.php
for which there doesn't seem to have been a resolution.


I never got a reproduceable case out of the other reporter ... can you
provide one?  His looked like it had something to do with foreign
key constraints named the same as unique/primary key constraints,
but AFAICS 8.1 won't let you create such, so I dunno how he got into
that state.

regards, tom lane




Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Oleg Bartunov

Marc,
I forwarded your message to pgsql-advocacy list.

The same situation exists in Russia! I think that the major problem here is 
the lack of official PostgreSQL training courses and certificates.

By "official" I mean something accepted by the PostgreSQL community.
This is a real pity, since we (developers) are working on adding nice
features and making it easier to port popular software solutions to
PostgreSQL, but there are no certified PostgreSQL admins available on the
market. For example, Teodor and I participated last year in porting a very
popular enterprise accounting solution from MS SQL to PostgreSQL, and
there are about 800,000 installations already. So in principle we have a
big market, but people need to be educated and certified before a company
can decide to switch from MS SQL to PostgreSQL.

Probably it's time to sponsor our book writers and other enthusiasts
to write a PostgreSQL Administration handbook, which we (the community) will
accept, support and translate into different languages. I'm willing
to contribute a Full Text Search chapter, for example. AFAIK we have enough
people who have already written PostgreSQL books; I like Corry's book, for
example. I don't know how much it might cost, but I'm sure the community has
money for this.

As for certificates, I see no real problem. We need to design a nice 
certificate, translate it into different languages, publish it on
www.postgresql.org, and approve a list of people who can sign certificates.
We have many members of our community in different regions/countries, so
this is not a problem.

BTW, this project could probably be a nice introduction for PostgreSQL EU.


Oleg
On Tue, 27 Mar 2007, Marc Evans wrote:


Hello -

Over the past couple of years I have made use of postgresql as my database of 
choice when developing new software. During that time, my clients have in 
multiple cases eventually come back to me and requested a re-targeting to 
"any database that we (my client) can find skilled ops staff to support". In 
a most recent case, professional recruiters were employed to try to find such 
people. The search was disappointing at best.


My question for this community is, what do enterprises that you deploy 
postgresql within do for skilled operations staffing? I can understand trying 
to convert a mysql or Oracle person to work on postgresql, but it would be 
very helpful to have a potential talent pool to draw from that was similar to 
those others. Finding people with HA, scaling and performance tuning 
knowledge is something that seems impossible to find except in people wanting 
to be developers.


The sad reality from what I have observed is that unless more people gain 
those skills and want to work in ops, it's becoming very hard for me to 
justify recommending postgresql for enterprise (or larger) scale projects.


What do others do and/or experience?

Thanks in advance - Marc




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] Constraint and Index with same name? (chicken and egg probelm)

2007-03-27 Thread Tom Lane
David Brain [EMAIL PROTECTED] writes:
 The db is now semi production, but I'll happily provide whatever info I 
 can short of needing a restart (and even a restart could be scheduled if 
 necessary).
 Let me know if I can provide any more info.

Can you send me the output of pg_dump -s (ie, no data, just schema)?
off list please...

regards, tom lane



[GENERAL] Populate a calendar table

2007-03-27 Thread Raymond O'Donnell

'Lo all,

I've created a calendar table based on an article I found on the web, 
but I can't figure out what's wrong with the query I've written to 
populate it. Here's the table -


CREATE TABLE aux_dates
(
  the_date date NOT NULL,
  the_year smallint NOT NULL,
  the_month smallint NOT NULL,
  the_day smallint NOT NULL,
  month_name character varying(12),
  day_name character varying(12),
  CONSTRAINT aux_dates_pkey PRIMARY KEY (the_date)
)

- and here's what I've come up with to populate it -

insert into aux_dates
select * from (
select
d.dates as the_date,
extract (year from d.dates) as the_year,
extract (month from d.dates) as the_month,
extract (day from d.dates) as the_day,
to_char(extract (month from d.dates), 'FMmonth') as month_name,
to_char(extract (day from d.dates), 'FMday') as day_name
  from
  (
select ('2007-01-01'::date + s.a) as dates
from generate_series(0, 14) as s(a)
  ) d
) dd;

The error I get is:

ERROR: . is not a number
SQL state: 22P02

Any help will be appreciated!

Thanks,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Priorities for users or queries?

2007-03-27 Thread Bruce Momjian

Added to TODO:

* Allow configuration of backend priorities via the operating system

  Though backend priorities make priority inversion during lock
  waits possible, research shows that this is not a huge problem.
  http://archives.postgresql.org/pgsql-general/2007-02/msg00493.php


---

Ron Mayer wrote:
 Bruce Momjian wrote:
  Hard to argue with that.
 
 Is it a strong enough argument to add a TODO?
 
 
 I'm thinking some sort of TODO might be called for.
 
 Perhaps two TODOs?
   * Use the OS's priority features to prioritize
 backends (and document that it might work
 better with OS's that support priority inheritance).
   * Investigate if postgresql could develop an
 additional priority mechanism instead of using
 the OS's.
 
  Ron Mayer wrote:
  Magnus Hagander wrote: ...
  quite likely to suffer from priority inversion
  ... CMU paper... tested PostgreSQL (and DB2) on TPC-C 
  and TPC-W ...found that...I/O scheduling through 
  CPU priorities is a big win for postgresql.
 
  http://www.cs.cmu.edu/~bianca/icde04.pdf
 
 Setting priorities seems a rather common request,
 supposedly coming up every couple months [5].
 
 The paper referenced [1] suggests that even with
 naive schedulers, use of CPU priorities is very
 effective for CPU and I/O intensive PostgreSQL
 workloads.
 
 If someone eventually finds a workload that does suffer
 worse performance due to priority inversion,
 (a) they could switch to an OS and scheduler
 that supports priority inheritance;
 (b) it'd be an interesting case for a paper
 rebutting the CMU one; and
 (c) they don't have to use priorities.
 
 If a user does find he wants priority inheritance it
 seems Linux[1], BSD[2], some flavors of Windows[3],
 and Solaris[4] all seem to be options; even though
 I've only seen PostgreSQL specifically tested for
 priority inversion problems with Linux (which did
 not find problems but found additional benefit of
 using priority inheritance).
 
 
 
 
 [1] Linux with Priority inheritance showing benefits for
 PostgreSQL
 http://www.cs.cmu.edu/~bianca/icde04.pdf
 [2] BSD priority inheritance work mentioned:
 http://www.freebsd.org/news/status/report-july-2004-dec-2004.html
 [3] Windows priority inheritance stuff:
 http://msdn2.microsoft.com/en-us/library/aa915356.aspx
 [4] Solaris priority inheritance stuff
 http://safari5.bvdep.com/0131482092/ch17lev1sec7
 http://www.itworld.com/AppDev/1170/swol-1218-insidesolaris/
 [5] Tom suggests that priorities are an often-requested feature.
 http://svr5.postgresql.org/pgsql-performance/2006-05/msg00463.php
 

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Federico

On 3/27/07, Oleg Bartunov oleg@sai.msu.su wrote:

Marc,
I forwarded your message to pgsql-advocacy list.

The same situation exists in Russia! I think that the major problem here is
the lack of official PostgreSQL training courses and certificates.
By "official" I mean something accepted by the PostgreSQL community.
This is a real pity, since we (developers) are working on adding nice
features and making it easier to port popular software solutions to
PostgreSQL, but there are no certified PostgreSQL admins available on the
market. For example, Teodor and I participated last year in porting a very
popular enterprise accounting solution from MS SQL to PostgreSQL, and
there are about 800,000 installations already. So in principle we have a
big market, but people need to be educated and certified before a company
can decide to switch from MS SQL to PostgreSQL.

Probably it's time to sponsor our book writers and other enthusiasts
to write a PostgreSQL Administration handbook, which we (the community) will
accept, support and translate into different languages. I'm willing
to contribute a Full Text Search chapter, for example. AFAIK we have enough
people who have already written PostgreSQL books; I like Corry's book, for
example. I don't know how much it might cost, but I'm sure the community has
money for this.

As for certificates, I see no real problem. We need to design a nice
certificate, translate it into different languages, publish it on
www.postgresql.org, and approve a list of people who can sign certificates.
We have many members of our community in different regions/countries, so
this is not a problem.

BTW, this project could probably be a nice introduction for PostgreSQL EU.



Hi Oleg,
I agree with your points.

I've just sent a message raising the question of an official
certification from postgresql.org on the advocacy list.

I hope that the European group can discuss this important
question and eventually create a European relationship structure
(maybe at pgday ;) .

Regards
Federico



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Peter Eisentraut
Oleg Bartunov wrote:
 Probably, it's time to sponsor our book-writers and other enthusiasts
 to write PostgreSQL Administration handbook, which we (community)
 will accept, support and translate to different languages.

http://www.postgresql.org/docs/current/static/admin.html

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Joshua D. Drake

Peter Eisentraut wrote:

Oleg Bartunov wrote:

Probably, it's time to sponsor our book-writers and other enthusiasts
to write PostgreSQL Administration handbook, which we (community)
will accept, support and translate to different languages.


http://www.postgresql.org/docs/current/static/admin.html



With all kudos to that link, that is not a handbook it is a reference.

Joshua D. Drake

--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Populate a calendar table

2007-03-27 Thread Peter Eisentraut
Raymond O'Donnell wrote:
      to_char(extract (month from d.dates), 'FMmonth') as month_name,
      to_char(extract (day from d.dates), 'FMday') as day_name

These formatting patterns are invalid. Check the documentation for the 
real ones.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Bruce Momjian
Joshua D. Drake wrote:
 Peter Eisentraut wrote:
  Oleg Bartunov wrote:
  Probably, it's time to sponsor our book-writers and other enthusiasts
  to write PostgreSQL Administration handbook, which we (community)
  will accept, support and translate to different languages.
  
  http://www.postgresql.org/docs/current/static/admin.html
  
 
 With all kudos to that link, that is not a handbook it is a reference.

Well, there is a lot of descriptive text in the admin section.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Populate a calendar table

2007-03-27 Thread Osvaldo Rosario Kussama

Raymond O'Donnell escreveu:

'Lo all,

I've created a calendar table based on an article I found on the web, 
but I can't figure out what's wrong with the query I've written to 
populate it. Here's the table -


CREATE TABLE aux_dates
(
  the_date date NOT NULL,
  the_year smallint NOT NULL,
  the_month smallint NOT NULL,
  the_day smallint NOT NULL,
  month_name character varying(12),
  day_name character varying(12),
  CONSTRAINT aux_dates_pkey PRIMARY KEY (the_date)
)

- and here's what I've come up with to populate it -

insert into aux_dates
select * from (
select
d.dates as the_date,
extract (year from d.dates) as the_year,
extract (month from d.dates) as the_month,
extract (day from d.dates) as the_day,
to_char(extract (month from d.dates), 'FMmonth') as month_name,
to_char(extract (day from d.dates), 'FMday') as day_name
  from
  (
select ('2007-01-01'::date + s.a) as dates
from generate_series(0, 14) as s(a)
  ) d
) dd;

The error I get is:

ERROR: . is not a number
SQL state: 22P02

Any help will be appreciated!





Try:
 to_char(d.dates, 'FMmonth') as month_name,
 to_char(d.dates, 'FMday') as day_name

[]s
Osvaldo
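
[Editor's note: putting Osvaldo's correction back into the original statement, the whole INSERT would look something like the following. This is an untested sketch; the inner generate_series subquery is kept exactly as in the original post, and the key change is calling to_char() on the date value itself rather than on the result of extract().]

```sql
INSERT INTO aux_dates
SELECT
    d.dates AS the_date,
    extract(year  FROM d.dates) AS the_year,
    extract(month FROM d.dates) AS the_month,
    extract(day   FROM d.dates) AS the_day,
    -- to_char(date, text) uses the date/time patterns, which is what we want;
    -- to_char(numeric, text) has entirely different format codes.
    to_char(d.dates, 'FMmonth') AS month_name,
    to_char(d.dates, 'FMday')   AS day_name
FROM (
    SELECT ('2007-01-01'::date + s.a) AS dates
    FROM generate_series(0, 14) AS s(a)
) d;
```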






Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Oleg Bartunov

On Tue, 27 Mar 2007, Peter Eisentraut wrote:


Oleg Bartunov wrote:

Probably, it's time to sponsor our book-writers and other enthusiasts
to write PostgreSQL Administration handbook, which we (community)
will accept, support and translate to different languages.


http://www.postgresql.org/docs/current/static/admin.html


Thanks, I know it. I meant sort of training program.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] Could not create relation: File exists error

2007-03-27 Thread Tom Lane
Jesse Cleary [EMAIL PROTECTED] writes:
 One exception is notable - $PGDATA/base/835021/4294967264 - this file 
 has a size of 0 and a timestamp that coincides with the date and time 
 range of the nightly run when these errors first appeared.  This seems 
 like a good clue, but not sure what I should do next?  Thanks...

Hm.  I guess the next question is whether any of those 429... numbers
appear in pg_class.relfilenode of your database?
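
[Editor's note: for illustration, the check Tom suggests could look something like this; the filenode numbers are the ones from the report, and the query assumes you run it connected to the affected database.]

```sql
-- Does any table/index in this database use one of the suspect filenodes?
SELECT relname, relkind, relfilenode
FROM pg_class
WHERE relfilenode IN (oid '4294967254', oid '4294967264');
```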

regards, tom lane



Re: [GENERAL] Problem with transactions

2007-03-27 Thread Richard Huxton

Matthijs Melissen wrote:

I want two users to execute the following queries:

1) delete from forum where id = 'A';
1) insert into forum (id, name) values ('A', 'testa');

2) delete from forum where id = 'A';
2) insert into forum (id, name) values ('A', 'testb');

id is a unique key. The numbers designate the user (1 and 2). The 
problem is that I don't know in which order the queries are executed (I 
only know each user executes its DELETE query before the INSERT query). 
I can't use UPDATE because I don't know in advance that there exist a 
row with id A.


So how are you picking A? If you don't know whether that key is 
already in the database, why has user2 deleted user1's row?


If you want to know whether there is a row with that key in the database 
why not use a SELECT?


How do I prevent the queries from being executed in the wrong order and 
thus causing an 'duplicate key violates unique constraint' error?


You don't say what the wrong order is, and why.
You're trying to insert a duplicate key - you should get an error.

I think you're going to have to explain what it is you're trying to 
achieve. You're not trying to re-invent the SERIAL type are you?
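
[Editor's note: on the 8.x releases discussed in this thread (which predate INSERT ... ON CONFLICT), the usual way to make a "replace the row with this key" operation safe against this race is a PL/pgSQL retry loop that traps unique_violation, as in the merge example in the PostgreSQL docs. A sketch, assuming the forum table from the thread; the function name is hypothetical:]

```sql
CREATE FUNCTION upsert_forum(p_id text, p_name text) RETURNS void AS $$
BEGIN
    LOOP
        -- First try to update an existing row with this key.
        UPDATE forum SET name = p_name WHERE id = p_id;
        IF found THEN
            RETURN;
        END IF;
        -- No row existed: try to insert. Another session may insert
        -- the same key concurrently and win the race.
        BEGIN
            INSERT INTO forum (id, name) VALUES (p_id, p_name);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Someone else inserted first; loop back and update instead.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```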


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Temporarily disable all table indices

2007-03-27 Thread George Pavlov
A (possibly slightly more user-friendly) alternative to the catalog
table is pg_dump, e.g.:

pg_dump -d your_db_name -t your_table -s | grep 'CREATE INDEX' 
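
[Editor's note: a rough sketch of the whole drop/update/re-create cycle using the pg_indexes view instead; table name tbl and index name tbl_text_idx are placeholders from the thread, and in practice you would script the DROP/CREATE statements from the saved indexdef text.]

```sql
-- 1. Save the definition of every index on the table.
SELECT indexdef FROM pg_indexes WHERE tablename = 'tbl';

-- 2. Drop each index reported above, e.g.:
DROP INDEX tbl_text_idx;

-- 3. Run the mass update with no indexes to maintain.
UPDATE tbl SET tbl_text = MD5(tbl_id);

-- 4. Re-run each saved indexdef statement, e.g.:
CREATE INDEX tbl_text_idx ON tbl (tbl_text);
```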



 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Dmitry Koterov
 Sent: Tuesday, March 27, 2007 3:10 AM
 To: Erik Jones
 Cc: Postgres General
 Subject: Re: [GENERAL] Temporarily disable all table indices
 
 Thanks!
 
 pg_indexes.indexdef is exactly what I was looking for!
 
 
 On 3/27/07, Erik Jones [EMAIL PROTECTED] wrote:
 
   On Mar 26, 2007, at 5:24 PM, Dmitry Koterov wrote:
 
   Hello.
 
   I need to perform a mass operation (UPDATE) on each table row.
   E.g. - modify one table column:
 
   UPDATE tbl SET tbl_text = MD5(tbl_id);
 
   The problem is that if this table contains a number of indices,
   such an UPDATE is very very slow on a large table.
 
   I have to drop all indices on the table, then run the update
   (very quick) and after that re-create all indices. It is much
   faster. Unfortunately the table structure may change in the
   future (e.g. new indices are added), so I don't know exactly,
   at this abstraction layer, which indices to drop and which to
   re-create.
 
   Is there any way (or ready piece of code) to save all existing
   indices, drop them all and then re-create them after a mass
   UPDATE?
 
   No, but you can use the pg_indexes view
   (http://www.postgresql.org/docs/8.2/interactive/view-pg-indexes.html)
   to dynamically determine what indexes a table has.
 
   erik jones [EMAIL PROTECTED]
   software developer
   615-296-0838
   emma(r)



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Oleg Bartunov

On Tue, 27 Mar 2007, Bruce Momjian wrote:


Joshua D. Drake wrote:

Peter Eisentraut wrote:

Oleg Bartunov wrote:

Probably, it's time to sponsor our book-writers and other enthusiasts
to write PostgreSQL Administration handbook, which we (community)
will accept, support and translate to different languages.


http://www.postgresql.org/docs/current/static/admin.html



With all kudos to that link, that is not a handbook it is a reference.


Well, there is a lot of descriptive text in the admin section.


Lecturers would know better, but I think a training course should include
review questions, the order of lectures, how much time is needed to learn
each lesson well, practical tasks, etc. This is what people expect.
admin.html is a good foundation, of course. We need better illustrations;
on the whole, everything that makes a course professional (I'm not a
specialist, sorry). My young colleagues (Nikolay and Ivan) are trying to
set up a PostgreSQL master class and spent several days creating a
training live CD, which is a good idea.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Aidan Van Dyk
Kev wrote:

 Hi everyone,
 
 I'm still in the design phase of a project.  I was just wondering if
 anyone has any thoughts or experience on the idea of cutting the P out
 of the LAMP (or in my case, WAMP for now) stack.  What I mean is
 having
 everything encapsulated into sql (or plpgsql or plperl where needed)
 functions stored in the pgsql server, and have Apache communicate with
 pgsql via a tiny C program that pretty much just checks whether the
 incoming function is on the allowed list and has the proper data
 types,
 then passes it straight in.  Any errors are logged as potential
 security
 breaches.

Sounds something like mod_libpq:
   http://asmith.id.au/mod_libpq.html






[GENERAL] Strange behaviour under heavy load

2007-03-27 Thread Dmitry Koterov

Hello.

I have found that a heavily loaded PostgreSQL sometimes begins to run all
queries slower than usual, sometimes 5 or more times slower. I cannot
reproduce it, but the symptoms are the following: queries work very fast for
5-10 minutes, and after that there is a significant slowdown (every query,
even a simple one, runs 5-10 or even more times slower); disk write activity
grows too (but not every time; I don't know exactly whether there is a direct
correlation). After 2-3 seconds, performance returns to normal.

Autovacuum is turned off.

The machine has 2 processors, plenty of memory, and fast SCSI disks.

I understand that this is too little information. Please advise what to
monitor to better diagnose the problem.


Re: [GENERAL] Could not create relation: File exists error

2007-03-27 Thread Tom Lane
Jesse Cleary [EMAIL PROTECTED] writes:
 This script has been running successfully for several months (70-90 min each 
 night).  Out of the blue I'm now getting the following error message after 
 each psql command, except the last vacuum full analyze command:
 ERROR:  could not create relation 1663/835021/4294967254: File exists

 Postgres 8.0.8 with

After looking back at the 8.0 code I'm pretty sure I know approximately
what is happening, though not the exact details.  Somehow,
CheckMaxObjectId is firing and forcing the OID counter up to
almost-maximum, which constrains the numbers that REINDEX and CLUSTER
try to select as file names.  And there wasn't any code in 8.0 to
recover from a chance filename collision, hence the error.

A fairly likely cause for this is that one of the tables being
CLUSTERed has OIDs and there is a row with an almost-maximum OID in
there --- when the row is copied across to the newly clustered table,
its OID would be shown to CheckMaxObjectId.  So every night, the OID
counter would have the exact same value just after the CLUSTER step,
and subsequent reindexes would always try to pick the same filenames
as they did before.

We fixed that whole horrid mess in 8.1, so really the best answer
would be to update to 8.1 or 8.2.  If you can't do that, are you
actually using the OIDs in these tables?  If not, ALTER TABLE SET
WITHOUT OIDS would be a good and quick fix.  Failing that, I think
you need to find the high-numbered OIDs and get rid of them
(just delete and reinsert the rows should work).
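
[Editor's note: a sketch of the two quick fixes Tom suggests; the table name is a placeholder, and you would run this against each table the nightly job CLUSTERs. Note that selecting the hidden oid column only works on tables created WITH OIDS, as here.]

```sql
-- If nothing actually uses the OIDs, the simplest fix is to drop them:
ALTER TABLE some_clustered_table SET WITHOUT OIDS;

-- Otherwise, find rows whose OID sits near the 2^32 wraparound point:
SELECT oid FROM some_clustered_table
WHERE oid > oid '4294000000'
ORDER BY oid DESC;
-- ...then delete and re-insert those rows so they get fresh, low OIDs.
```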

regards, tom lane



Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Scott Marlowe
On Tue, 2007-03-27 at 08:54, Marc Evans wrote:
 Hello -
 
 Over the past couple of years I have made use of postgresql as my database 
 of choice when developing new software. During that time, my clients have 
 in multiple cases eventually come back to me and requested a re-targeting 
 to Any database that we (my client) can find skilled ops staff to 
 support. In a most recent case, professional recruiters were employed to 
 try to find such people. The search was disappointing at best.
 
 My question for this community is, what do enterprises that you deploy 
 postgresql within do for skilled operations staffing? I can understand 
 trying to convert a mysql or Oracle person to work on postgresql, but it 
 would be very helpful to have a potential talent pool to draw from that 
 was similar to those others. Finding people with HA, scaling and 
 performance tuning knowledge is something that seems impossible to find 
 except in people wanting to be developers.
 
 The sad reality from what I have observed is that unless more people gain 
 those skills and want to work in ops, it's becoming very hard for me to 
 justify recommending postgresql for enterprise (or larger) scale projects.
 
 What do others  do and/or experience?

I got my current job when a recruiter scouring the pgsql mailing lists
emailed me asking if I was interested in working for said company.

I guess that's one way to look for pgsql people.  She just happened to
catch me right after my last company had decided to switch to Windows
and I'd decided to take a severance package and a short vacation.

I know at least three other people who would make damned good pgsql
admins, but who aren't necessarily looking for that job right now.  I'm
sure most other pgsql users are in the same boat.



Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Steve Atkins


On Mar 27, 2007, at 7:34 AM, Aidan Van Dyk wrote:


Kev wrote:


Hi everyone,

I'm still in the design phase of a project.  I was just wondering if
anyone has any thoughts or experience on the idea of cutting the P  
out

of the LAMP (or in my case, WAMP for now) stack.  What I mean is
having
everything encapsulated into sql (or plpgsql or plperl where needed)
functions stored in the pgsql server, and have Apache communicate  
with

pgsql via a tiny C program that pretty much just checks whether the
incoming function is on the allowed list and has the proper data
types,
then passes it straight in.  Any errors are logged as potential
security
breaches.


Sounds something like mod_libpq:
   http://asmith.id.au/mod_libpq.html


Or SQL-on-rails
http://www.sqlonrails.org/

Cheers,
  Steve



Re: [GENERAL] Strange behaviour under heavy load

2007-03-27 Thread Oleg Bartunov

hmm,

Looks like a checkpoint?
Check your checkpoint settings in postgresql.conf.
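
[Editor's note: for reference, these are the checkpoint-related postgresql.conf settings of that era; the values shown are illustrative, not recommendations.]

```
# Number of WAL segments between automatic checkpoints;
# raising this spaces checkpoints further apart (default was 3).
checkpoint_segments = 16

# Force a checkpoint at least this often, in seconds (default 300).
checkpoint_timeout = 300

# Log a warning if checkpoints occur closer together than this
# many seconds, a useful symptom check for this kind of stall.
checkpoint_warning = 30
```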



Oleg
On Tue, 27 Mar 2007, Dmitry Koterov wrote:


Hello.

I have found that a heavily loaded PostgreSQL sometimes begins to run all
queries slower than usual, sometimes 5 or more times slower. I cannot
reproduce it, but the symptoms are the following: queries work very fast for
5-10 minutes, and after that there is a significant slowdown (every query,
even a simple one, runs 5-10 or even more times slower); disk write activity
grows too (but not every time; I don't know exactly whether there is a direct
correlation). After 2-3 seconds, performance returns to normal.

Autovacuum is turned off.

The machine has 2 processors, plenty of memory, and fast SCSI disks.

I understand that this is too little information. Please advise what to
monitor to better diagnose the problem.



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [pgsql-advocacy] [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Bruce Momjian
Oleg Bartunov wrote:
 On Tue, 27 Mar 2007, Bruce Momjian wrote:
 
  Joshua D. Drake wrote:
  Peter Eisentraut wrote:
  Oleg Bartunov wrote:
  Probably, it's time to sponsor our book-writers and other enthusiasts
  to write PostgreSQL Administration handbook, which we (community)
  will accept, support and translate to different languages.
 
  http://www.postgresql.org/docs/current/static/admin.html
 
 
  With all kudos to that link, that is not a handbook it is a reference.
 
  Well, there is a lot of descriptive text in the admin section.
 
 Lecturers would know better, but I think a training course should include
 review questions, the order of lectures, how much time is needed to learn
 each lesson well, practical tasks, etc. This is what people expect.
 admin.html is a good foundation, of course. We need better illustrations;
 on the whole, everything that makes a course professional (I'm not a
 specialist, sorry). My young colleagues (Nikolay and Ivan) are trying to
 set up a PostgreSQL master class and spent several days creating a
 training live CD, which is a good idea.

My point is that the admin manual is more than a reference, not that the
admin manual is a _training_ manual.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Constraint and Index with same name? (chicken and egg probelm)

2007-03-27 Thread Tom Lane
David Brain [EMAIL PROTECTED] writes:
Tom Lane wrote:
 Hm, I don't see fk_cdrsummary_cdrimportsession in there anywhere?

 That is _very_ odd - I can see it in pgadmin, and also in pg_constraint, 
 but it's not showing up in pg_dump or on a '\d' in psql.

Oh really?  (looks at code...)  Hah, I have a theory.  Both pg_dump and
psql's \d command assume that tables with pg_class.reltriggers = 0 must
not have any foreign keys, and so they don't bother looking into
pg_constraint for FKs.  You mentioned that this was a Slony slave DB,
and I know that Slony sometimes plays tricks with zeroing reltriggers
temporarily.  Or it might not be Slony's fault --- if you did a
data-only dump/restore with --disable-triggers and a pre-8.1 pg_dump,
it would also zero reltriggers; then if it failed before putting back
the correct reltriggers value at the end, you could wind up in this
state.

I'm not yet sure how reltriggers = 0 would result in the observed failure,
but if you fix it do things work any better?  You should first check
to see if any tables have bogus counts:

SELECT relname, reltriggers FROM pg_class WHERE
reltriggers != (SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid);

If so you can fix them with

UPDATE pg_class SET reltriggers = 
  (SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid) 
WHERE relname = 'whatever';

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Populate a calendar table

2007-03-27 Thread Raymond O'Donnell

On 27/03/2007 17:00, Tom Lane wrote:


I think you want just to_char(d.dates, 'FMmonth') and so on.
What you're invoking above is to_char(numeric) which has entirely
different format codes...


Duh! Of course. I didn't spot that.

Thanks to all who replied.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Merlin Moncure

On 3/27/07, Erik Jones [EMAIL PROTECTED] wrote:

Not having looked myself, this is as much a question as a suggestion, but
are there not postgres dba training seminars/courses you could recommend
they send their dba's to?


There are some classes out there but in my opinion your best bet (from
point of view of looking for good talent) is to get people that found
their way to postgresql themselves.  In that sense you want to hook up
with people from the mailing lists or develop contacts from within the
community.  So, training classes are useful for beefing up on
knowledge and learning new tricks, but  postgresql dbas are born, not
made :)

merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] cutting out the middleperl

2007-03-27 Thread Merlin Moncure

On 3/27/07, Steve Atkins [EMAIL PROTECTED] wrote:

Or SQL-on-rails
 http://www.sqlonrails.org/


LOL!

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Could not create relation: File exists error

2007-03-27 Thread Jesse Cleary

Tom Lane wrote:

Jesse Cleary [EMAIL PROTECTED] writes:

This script has been running successfully for several months (70-90 min each 
night).  Out of the blue I'm now getting the following error message after each 
psql command, except the last vacuum full analyze command:
ERROR:  could not create relation 1663/835021/4294967254: File exists

Postgres 8.0.8 with

After looking back at the 8.0 code I'm pretty sure I know approximately
what is happening, though not the exact details.  Somehow,
CheckMaxObjectId is firing and forcing the OID counter up to
almost-maximum, which constrains the numbers that REINDEX and CLUSTER
try to select as file names.  And there wasn't any code in 8.0 to
recover from a chance filename collision, hence the error.

A fairly likely cause for this is that one of the tables being
CLUSTERed has OIDs and there is a row with an almost-maximum OID in
there --- when the row is copied across to the newly clustered table,
its OID would be shown to CheckMaxObjectId.  So every night, the OID
counter would have the exact same value just after the CLUSTER step,
and subsequent reindexes would always try to pick the same filenames
as they did before.
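Tom's wraparound theory can be modelled outside PostgreSQL. Below is a deliberately simplified Python sketch (the function names are made up, and real OID allocation also reserves the low OID range, which this model ignores) showing why a counter pinned near the 32-bit maximum keeps yielding the same values night after night:

```python
OID_WRAP = 2**32  # OIDs are 32-bit unsigned integers


def bump_to_seen(counter: int, seen_oid: int) -> int:
    """CheckMaxObjectId-style behaviour: observing a row whose OID is
    at or above the counter forces the counter up to that value."""
    return max(counter, seen_oid)


def next_oid(counter: int) -> tuple[int, int]:
    """Advance the counter by one, wrapping modulo 2**32."""
    counter = (counter + 1) % OID_WRAP
    return counter, counter


# The nightly CLUSTER copies across a row with an almost-maximum OID...
counter = bump_to_seen(100_000, OID_WRAP - 3)

# ...so the values the subsequent REINDEX picks as filenames are pinned
# to the same few numbers every night, inviting a collision with files
# already on disk.
picked = []
for _ in range(4):
    counter, oid = next_oid(counter)
    picked.append(oid)
```

With the counter forced to 4294967293, the next allocations are 4294967294, 4294967295, and then the wrapped values 0 and 1 — identical on every run, which is the repeatable-collision behaviour described above.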

We fixed that whole horrid mess in 8.1, so really the best answer
would be to update to 8.1 or 8.2.  If you can't do that, are you
actually using the OIDs in these tables?  If not, ALTER TABLE SET
WITHOUT OIDS would be a good and quick fix.  Failing that, I think
you need to find the high-numbered OIDs and get rid of them
(just delete and reinsert the rows should work).

regards, tom lane

  
Thanks Tom - that seems to be it.  OIDs on each table were up in the
4.295 billion range, and two tables had max OIDs just prior to the
filename IDs in the error message.  Updating PG is not an option for now,
but I can drop the OID field as you suggest.  We actually ran into
another OID max issue with a previous instance of this DB, and so have a
seq field to supply a unique ID in our mapping application instead of
using OID.  I didn't realize that our OID field was even still around
'til now.


I ran my CLUSTER and REINDEX commands on the tables I dropped the OID 
from and they ran smoothly.  Will keep the list posted if my nightly 
maintenance acts weird, but I think this will fix it.


Thanks so much for your advice and time solving this - I and many others 
on the list really appreciate your efforts...


Jesse


--

Jesse Cleary
Department of Marine Sciences   
UNC Chapel Hill

334 Chapman Hall
(919) 962-4987 


[EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] redhat debug info

2007-03-27 Thread Joseph S
Fedora/Redhat has debuginfo packages that drop files with debug symbols 
in /usr/src/debug/, and gdb can use them.  Does anyone know how this 
works and how I can get those files from the pg tarball?


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Tony Caduto


The sad reality from what I have observed is that unless more people gain 
those skills and want to work in ops, it's becoming very hard for me to 
justify recommending postgresql for enterprise (or larger) scale projects.


What do others do and/or experience?

I think there are people around, but maybe they don't want to move etc.

If a PostgreSQL job were ever to show up in Milwaukee, I would apply
for it in a heartbeat.



Another thing is this: how hard could it possibly be for a MS SQL DBA or
Oracle DBA to pick up PostgreSQL?
I think a decent admin of any database could come up to speed in a
very short time, as long as they were interested in doing so.


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Priorities for users or queries?

2007-03-27 Thread Simon Riggs
On Fri, 2007-02-23 at 12:07 -0800, Ron Mayer wrote:
 Jim Nasby wrote:
  The problem with using simple OS priority settings is you leave yourself
  wide open to priority inversion.
 
 Which is why you either
  (a) note that papers studying priority inversion on RDBMS's
  find that it's a non issue on many RDBMS workloads; and
  (except for real-time databases) you tend to still get
  at least partial benefits even in the face of priority
  inversions.
 or
  (b) use a scheduler in your OS that supports priority
  inheritance or other mechanisms to avoid priority
  inversion problems.
  If you want to use priority inheritance to avoid
  the priority inversion settings it appears versions
  of Linux, BSD, Windows, and Solaris at least give
  you the ability to do so.
 
  There is already work being done on a queuing system; take a look at the
  bizgres archives.
 
 Which is cool; but not quite the same as priorities.
 
 It seems to me that Bizgres and/or PostgreSQL would not
 want to re-implement OS features like schedulers.

Its now a TODO item, so I thought I'd add a few more notes for later
implementors.

Some feedback from earlier lives: Teradata's scheduling feature was
regularly used, as was the query queuing system. Both seem to be
effective and desirable as distinct features. There were some problems
in early days with priority inversions, but these were mainly caused by
heavily CPU bound queries interacting with heavily I/O bound queries.
Notably this meant that occasional rogue queries would bring the server
to its knees and this took a long time to identify, isolate and bring to
justice. I would hope to learn from lessons like that for PostgreSQL.

We do already have a home-grown priority mechanism in PostgreSQL:
vacuum_delay. Interestingly it handles both I/O and CPU quite well.

The Bizgres queueing feature is specifically designed to allow the
system to utilise large memories effectively without over-subscription.
If you set a query to a lower priority when its taking up lots of RAM,
you'll probably lose much of the benefit.

Simple scheduling seems to work best in practice. Both Teradata and
Microstrategy have provided implementation with just 3 levels of
priority: H, M, L, together with simple rules for when no queries exist
at higher levels.

None of this is patented or patentable, if kept very generic, IMHO.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Scott Marlowe
On Tue, 2007-03-27 at 15:09, Tony Caduto wrote:
  The sad reality from what I have observed is that unless more people gain 
  those skills and want to work in ops, it's becoming very hard for me to 
  justify recommending postgresql for enterprise (or larger) scale projects.
 
  What do others  do and/or experience?
  
 

 I think there are people around, but maybe they don't want to move etc.
 
 If a PostgreSQL job where ever to show up in Milwaukee, I would apply 
 for it in a heartbeat.
 
 
 Another thing is this, how hard could it possibly be for a MS SQL DBA or 
 Oracle DBA to pick up using PostgreSQL?
 I don't think it would take a decent admin of any database to come up to 
 speed in a very short time as long as they were interested in doing so.

I've certainly converted a few MySQL and MSSQL dbas in the past.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] redhat debug info

2007-03-27 Thread Peter Wiersig
On Tue, Mar 27, 2007 at 03:57:49PM -0400, Joseph S wrote:
 Does anyone know how this works and how I can get those files
 from the pg tarball?

My guess: strip -o

Peter

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Jorge Godoy
Merlin Moncure [EMAIL PROTECTED] writes:

 On 3/27/07, Erik Jones [EMAIL PROTECTED] wrote:
 Not having looked myself, this is as much a question as a suggestion, but
 are there not postgres dba training seminars/courses you could recommend
 they send their dba's to?

 There are some classes out there but in my opinion your best bet (from
 point of view of looking for good talent) is to get people that found
 their way to postgresql themselves.  In that sense you want to hook up
 with people from the mailing lists or develop contacts from within the
 community.  So, training classes are useful for beefing up on
 knowledge and learning new tricks, but  postgresql dbas are born, not
 made :)

I have the same opinion.  Just look around and see how many certified
something are there and how many of them *really* know the product, its
details, how to work with it.

Certifications don't even certify the minimum knowledge.  They are like tests
that we do in school: they show how we are feeling and what we know (or
memorized during the night) at the instant of the test.  Some people even
cheat on tests (not that I'm saying it is done or is common with certification
tests...).

So, if I have a good memory to retain information for a week, I'll excel in
certification tests.  But then, what after that week?

I'm against certifications for any product.  It just doesn't show the
reality. 


-- 
Jorge Godoy  [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] redhat debug info

2007-03-27 Thread Geoffrey

Joseph S wrote:
Fedora/Redhat has debuginfo packages that drop files with debug symbols 
in /usr/src/debug/, and gdb can use them.  Does anyone know how this 
works and how I can get those files from the pg tarball?


Where have you found these packages?  Is this an rpm you have located?

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Scott Marlowe
On Tue, 2007-03-27 at 15:58, Jorge Godoy wrote:

 I have the same opinion.  Just look around and see how many certified
 something are there and how many of them *really* know the product, its
 details, how to work with it.
 
 Certifications don't even certify the minimum knowledge.  They are like tests
 that we do in school: they show how we are feeling and what we know (or
 memorized during the night) at the instant of the test.  Some people even
 cheat on tests (not that I'm saying it is done or is common with certification
 tests...).
 
 So, if I have a good memory to retain information for a week, I'll excel in
 certification tests.  But then, what after that week?
 
 I'm against certifications for any product.  It just doesn't show the
 reality. 

I would say that really depends on the certification.  My flatmate is an
RHCE, and that is a pretty rigorous certification, with lots of applied
knowledge for fixing purposely broken computer systems.

OTOH, I've read the MCSE study guides before and was very underwhelmed. 
Seemed like a guide on which button to push to get a banana.

But neither one is a substitute for 20+ years of on-the-job experience
with a system.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] custom type for storing a HTML color

2007-03-27 Thread Justin Dearing

Hello, I currently store html color codes in the database with the
following DOMAIN:

CREATE DOMAIN html_color AS char(7) CHECK (VALUE ~ '^#[A-Fa-f0-9]{6}$');

Has anyone created a custom type that has additional functionality (eg
formatting the input or output to other formats, retrieving red, green or
blue values as 0-255 integers, etc.)? This is good enough for my uses at
the moment, but has anyone invented a better wheel than mine?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Chris Browne
[EMAIL PROTECTED] (Tony Caduto) writes:
 Another thing is this, how hard could it possibly be for a MS SQL DBA
 or Oracle DBA to pick up using PostgreSQL?
 I don't think it would take a decent admin of any database to come up
 to speed in a very short time as long as they were interested in doing
 so.

It's not that big a stretch as long as there is interest.

Those two are probably among the more painful from the specific
perspective that both depend on, in effect, hiding the OS from the
user to a great extent.

PostgreSQL *doesn't* have layers to hide that there is an OS.

In that particular sense, DB2 and Informix are probably moderately
easier jumps.

There is also the factor that Oracle and Microsoft have the habit of
pretending that their products define what the applications are, as
opposed to merely being instances of the sort of application.  If
people have drunk the koolaid and think that it *must* be like
Oracle to be a proper DBMS, well, there's some painful unlearning
ahead.  Users of not-quite-so-smugly-market-leading systems are
somewhat less likely to fall into that particular hole.
-- 
let name=cbbrowne and tld=linuxfinances.info in name ^ @ ^ tld;;
http://linuxfinances.info/info/advocacy.html
This Bloody Century
Early this century there was a worldwide socialist revolution. The
great battles were then between International Socialism, National
Socialism, and Democratic Socialism. Democratic Socialism won because
the inertia of democracy prevented the socialism from doing as much
damage here. Capitalism first reemerged from the ashes of National
Socialism, in Germany and Japan. It is now reemerging from the ashes
of International Socialism.  Next?

After all, inertia works both ways...
-- Mark Miller

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] redhat debug info

2007-03-27 Thread Tom Lane
Joseph S jks@selectacast.net writes:
 Fedora/Redhat has debuginfo packages that drop files with debug symbols 
 in /usr/src/debug/, and gdb can use them.  Does anyone know how this 
 works and how I can get those files from the pg tarball?

You download and install the postgresql-debuginfo RPM that exactly
matches your other postgresql RPM(s).  AFAIK the debuginfo RPMs are
available but not installed by default ...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] redhat debug info

2007-03-27 Thread Joseph S

Tom Lane wrote:

Joseph S jks@selectacast.net writes:
Fedora/Redhat has debuginfo packages that drop files with debug symbols 
in /usr/src/debug/, and gdb can use them.  Does anyone know how this 
works and how I can get those files from the pg tarball?


You download and install the postgresql-debuginfo RPM that exactly
matches your other postgresql RPM(s).  AFAIK the debuginfo RPMs are
available but not installed by default ...

regards, tom lane


I don't use rpms, I build from the tarballs, hence my question.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Erik Jones


On Mar 27, 2007, at 3:58 PM, Jorge Godoy wrote:


Merlin Moncure [EMAIL PROTECTED] writes:


On 3/27/07, Erik Jones [EMAIL PROTECTED] wrote:
Not having looked myself, this is as much a question as a suggestion, but
are there not postgres dba training seminars/courses you could recommend
they send their dba's to?


There are some classes out there but in my opinion your best bet (from
point of view of looking for good talent) is to get people that found
their way to postgresql themselves.  In that sense you want to hook up
with people from the mailing lists or develop contacts from within the
community.  So, training classes are useful for beefing up on
knowledge and learning new tricks, but postgresql dbas are born, not
made :)


I have the same opinion.  Just look around and see how many certified
something are there and how many of them *really* know the product, its
details, how to work with it.

Certifications don't even certify the minimum knowledge.  They are like
tests that we do in school: they show how we are feeling and what we know
(or memorized during the night) at the instant of the test.  Some people
even cheat on tests (not that I'm saying it is done or is common with
certification tests...).

So, if I have a good memory to retain information for a week, I'll excel
in certification tests.  But then, what after that week?

I'm against certifications for any product.  It just doesn't show the
reality.


While I agree with everything you guys have said on this, my point
was that clients like seeing that kind of stuff.  I'm sure a lot of
companies would give a second thought to converting their systems
over if they had what they perceived as decent training available for
their existing staffs.


erik jones [EMAIL PROTECTED]
software developer
615-296-0838
emma(r)





Re: [GENERAL] redhat debug info

2007-03-27 Thread Tom Lane
Joseph S jks@selectacast.net writes:
 Tom Lane wrote:
 You download and install the postgresql-debuginfo RPM that exactly
 matches your other postgresql RPM(s).

 I don't use rpms, I build from the tarballs, hence my question.

You'd have to dig into the RPM code enough to figure out how it
separates the debug info out of the executables.  I've never paid
any attention, it's just something that happens magically at the
end of the RPM build process...

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] redhat debug info

2007-03-27 Thread Geoffrey

Tom Lane wrote:

Joseph S jks@selectacast.net writes:
Fedora/Redhat has debuginfo packages that drop files with debug symbols 
in /usr/src/debug/, and gdb can use them.  Does anyone know how this 
works and how I can get those files from the pg tarball?


You download and install the postgresql-debuginfo RPM that exactly
matches your other postgresql RPM(s).  AFAIK the debuginfo RPMs are
available but not installed by default ...


I just downloaded 8.2.3 for RHWS 5 but did not see a debuginfo rpm.
Also looked for the same for 7.4.16 but didn't find a debuginfo rpm on
the postgresql download site either.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] redhat debug info

2007-03-27 Thread Tom Lane
Geoffrey [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 You download and install the postgresql-debuginfo RPM that exactly
 matches your other postgresql RPM(s).

 I just downloaded 8.2.3 for RHWS 5 but did not see a debuginfo rpm. 
 Also looked for them same for 7.4.16 but didn't find a debuginfo rpm on 
 the postgresql download site either.

Hmm.  Red Hat makes their debuginfo RPMs available in the normal course
of things (they're usually in a debug/ subdirectory of wherever you
find the RPMs) but I'm not sure whether Devrim et al have a policy about
whether to upload their debuginfo RPMs to the PG servers.  They may feel
it's mostly a waste of bandwidth.

Anyway, the short answer is to download the source RPM and build it for
yourself ... most likely you want the source available anyway, if you're
going to be doing any debugging ...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] redhat debug info

2007-03-27 Thread Devrim GÜNDÜZ
Hi,

On Tue, 2007-03-27 at 22:04 -0400, Tom Lane wrote:
 Red Hat makes their debuginfo RPMs available in the normal course
 of things (they're usually in a debug/ subdirectory of wherever you
 find the RPMs) 

Fedora and Red Hat 5 users can download debuginfo packages via yum.
Fedora users need to use debuginfo channel (yum --enablerepo
debuginfo ...), and I think it is the same for RHEL 5.

Red Hat provides debuginfo packages via their FTP site (ftp.redhat.com).
You don't need to be a RHN subscriber to get a debuginfo package. 

 but I'm not sure whether Devrim et al have a policy about whether to
 upload their debuginfo RPMs to the PG servers.  They may feel it's
 mostly a waste of bandwidth.

Exactly. I don't want to upload a package which is really big.

 Anyway, the short answer is to download the source RPM and build it
 for yourself ... 

Yes, agreed.

Regards,
-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/






Re: [GENERAL] redhat debug info

2007-03-27 Thread Joseph S

Geoffrey wrote:

Joseph S wrote:
Fedora/Redhat has debuginfo packages that drop files with debug 
symbols in /usr/src/debug/, and gdb can use them.  Does anyone know 
how this works and how I can get those files from the pg tarball?


Where have you found these packages?  Is this an rpm you have located?


From my yum conf file:
http://download.fedora.redhat.com/pub/fedora/linux/core/$releasever/$basearch/debug/

Also when you build src rpms you get a debuginfo package.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] custom type for storing a HTML color

2007-03-27 Thread Kevin Hunter
On 27 Mar 2007 05:10p -0400, Justin Dearing wrote:
 Hello, I currently store html color codes in that database with the
 following DOMAIN:
 
 CREATE DOMAIN html_color AS char(7) CHECK (VALUE ~ '^#[A-Fa-f0-9]{6}$');
 
 Has anyone created a custom type that has additional functionality (eg
 format the input or output to other formats, retrieve red, green or
 blue values as 0-255 integers, etc. This is good enough for my uses at
 the moment, but has anyone invented a better wheel than mine?

I have no idea of your specific environment/application, but it seems to
me that CHAR(7) is a waste of (at least) 3 bytes per row (or 4 bytes,
since you're not worrying about an alpha channel with HTML).

In essence, an HTML color is composed of 3 bytes, 8 bits each for red,
green, and blue.  If the capitalization of the [A-F] characters isn't
important, you could simply store the binary equivalent and convert it
to the hexadecimal equivalent when you need it.  This would then
alleviate the need for the constraint as it's simply a number.  (Well,
move the check to the next layer, I suppose.)

For instance: #aab329 could be stored as

    aa        b3        29
 1010 1010  1011 0011  0010 1001

the binary number (10101010 10110011 00101001)_2 = (11,186,985)_10, or
basically a number that can be stored in 3 bytes.
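The bit-packing above can be sketched in Python (an illustrative sketch, not any PostgreSQL API; the function names are made up — a server-side version would do the equivalent with an int4 column, to_hex(), and a formatting function):

```python
import re


def pack_html_color(color: str) -> int:
    """Validate an #RRGGBB string (the same pattern as the html_color
    domain) and pack it into a single 3-byte integer."""
    if not re.fullmatch(r'#[A-Fa-f0-9]{6}', color):
        raise ValueError(f'not an HTML color: {color!r}')
    return int(color[1:], 16)


def unpack_html_color(n: int) -> tuple[int, int, int]:
    """Split a packed color into its 0-255 red/green/blue components."""
    return (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF


def format_html_color(n: int) -> str:
    """Render a packed color in canonical lowercase #rrggbb form."""
    return f'#{n:06x}'
```

For example, pack_html_color('#aab329') gives 11186985 and unpack_html_color(11186985) gives (170, 179, 41), matching the 3-byte arithmetic worked out by hand.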

That being said, I /am/ curious if someone has created a better wheel?
Besides, I don't even know what a DOMAIN is!

/me scurries off to find out about DOMAINs

Kevin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq