Re: [GENERAL] Turning on logging

2007-06-03 Thread Pavel Stehule

2007/6/3, Owen Hartnett [EMAIL PROTECTED]:


I know this is in the docs somewhere, and it's probably staring me in
the face, but I haven't been able to find it:

I'm running 8.2.4 through npgsql - how do I log:

1) connections to the database

2) updates, deletes, adds




Hello,

look postgresql.conf in your data directory and set

log_connections = on
log_statement = 'mod'

Regards
Pavel
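
For reference, a minimal sketch of the corresponding postgresql.conf fragment (note the setting is spelled log_connections, plural, and the server needs a reload -- e.g. pg_ctl reload -- for the change to take effect):

```
log_connections = on    # log each successful connection
log_statement = 'mod'   # log INSERT, UPDATE, DELETE and other data-modifying statements
```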

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Transactional DDL

2007-06-03 Thread PFC

On Sun, 03 Jun 2007 01:39:11 +0200, Tom Lane [EMAIL PROTECTED] wrote:


Jaime Casanova [EMAIL PROTECTED] writes:

Tom's example seems to show that mysql inserts a commit immediately
after a DDL, but this example shows the thing is worse than that.


Actually, I think their behavior is just "DDL issues a COMMIT", so that
after that you are out of the transaction and the INSERT commits
immediately.  Some experimentation shows that mysql doesn't issue a
warning for rollback-outside-a-transaction, so the lack of any complaint
at the rollback step is just standard mysql-ism.


	Yes, their manual explains this and warns against it. The full list is here:


http://www.mysql.org/doc/refman/5.1/en/implicit-commit.html




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] general features

2007-06-03 Thread Badawy, Mohamed
Hi all 

I am very new to Postgres; actually, I am just collecting information about it,
but I am having a problem with time, so I was asking if someone could help
me with quick answers about these features in Postgres.

 

1 - Max amount of available data storage.
2 - Clustering support.
3 - Price.
4 - Speed.

 

 

 

Best Regards,,,

 

Mohamed Badawy

 


THIS EMAIL AND ANY ATTACHED FILES ARE CONFIDENTIAL. 
If you are not the intended recipient you are notified that any disclosure, 
reproduction,copying, distribution, or action taken in reliance on the contents 
of this information is strictly prohibited. If you have received this 
transmission in error please notify the sender immediately and then delete this 
email. Email transmission cannot be guaranteed to be secure or error free as 
information could be intercepted, corrupted, lost, destroyed,arrive late or 
incomplete, or contain viruses. The company/sender accepts no liability for any 
changes made to this email during transmission or any damage caused by any 
virus transmitted by this email. Any views or opinions expressed in this email 
are solely those of the author and do not necessarily represent those of the 
company. The company/sender accepts no liability for the content of this email, 
or for the consequences of any actions taken on the basis of the information 
provided, unless that information is subsequently confirmed in writing.

--
AMS Advanced Medical Services GmbH, Mannheim
Registered office: Am Exerzierplatz 2, 68167 Mannheim
Register court: Amtsgericht Mannheim, HRB 7462
Managing Director: Dr. Christian Carls
--



Re: [GENERAL] general features

2007-06-03 Thread A. Kretschmer
On Sun, 03.06.2007 at 10:34:47 +0200, Badawy, Mohamed wrote the following:
 Hi all
 
 I am very new to Postgres; actually, I am just collecting information about it, but
 I am having a problem with time, so I was asking if someone could help me with quick
 answers about these features in Postgres.
 
  
 
 1-max amount of available storage data.

http://www.postgresql.org/docs/faqs.FAQ.html#item4.4


 2-Clustering support.

What do you mean exactly?


 3-price.

http://www.postgresql.org/docs/faqs.FAQ.html#item1.3


 4-speed.

very high ;-)


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Need a wee bit more info on PostgreSQL's SSL security options

2007-06-03 Thread Martijn van Oosterhout
On Sun, Jun 03, 2007 at 12:21:14AM +0200, Andreas wrote:
 Hi,
 
 I've got it so far:
 Server-OS: Debian 3.1 sarge
 PostgreSQL: Debian's binary PG 8.1.8 (still the most recent version
 available)
 
 Following a tutorial (actually for OpenVPN as I didn't find any for PG
 that goes beyond what is found in the main docu) I created a CA, server
 and client certificate, updated postgresql.conf and pg_hba.conf, did a
 restart of PG and connected from a windows box with pgAdmin.
 NICE :)
 
 Now as far as I see, even though I have my postgresql.crt+key in place,
 I still have to provide username and password, right?

Yes. PostgreSQL can check that the client provides a valid certificate;
however, you cannot yet authenticate with certificates alone.

 Can I further check the security of the server? The aim will be to have
 the port open to the Internet.

Try to connect without SSL?

 Is there a documentation, that covers those matters more deeply than
 chapter 16.8 and 20.1 of PG's main documentation?
 Especially the whole client-side topic is rather thin for a newbie.

There's 29.16:
http://www.postgresql.org/docs/8.2/interactive/libpq-ssl.html

As for CRL, I think that was only added after 8.1.

Other than that I don't know.

Hope this helps,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[GENERAL] Strange delimiters problem

2007-06-03 Thread Andrej Kastrin

Dear all,

I am experiencing difficulty in trying to copy the file structure below 
into a PostgreSQL table:


||001||,||Classification||,||Economics||,||N||
||001||,||Classification||,||Trends||,||Y||
etc...

Although I can preprocess the input file with e.g. awk, I would like to 
know whether it is possible to do that directly in PostgreSQL.


Thanks in advance for any suggestions.

Best, Andrej

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Turning on logging

2007-06-03 Thread Bill Moran
Owen Hartnett [EMAIL PROTECTED] wrote:

 
 I know this is in the docs somewhere, and it's probably staring me in 
 the face, but I haven't been able to find it:
 
 I'm running 8.2.4 through npgsql - how do I log:
 
 1) connections to the database
 
 2) updates, deletes, adds
 
 
 Is this set in the ./configuration?  Or in the startup command line?

It can be specified on the startup command line, but it's much easier
to put it in postgresql.conf:
http://www.postgresql.org/docs/8.2/static/config-setting.html

There are parameters for the specific things you want to control:
http://www.postgresql.org/docs/8.2/static/runtime-config-logging.html

HTH

-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] Strange delimiters problem

2007-06-03 Thread hubert depesz lubaczewski

On 6/3/07, Andrej Kastrin [EMAIL PROTECTED] wrote:


||001||,||Classification||,||Economics||,||N||
||001||,||Classification||,||Trends||,||Y||
etc...



it looks like you should be able to read it using the COPY command,
something like:
copy some_table from stdin with delimiter ',' csv quote '||';

depesz


Re: [GENERAL] general features

2007-06-03 Thread PFC



1-max amount of available storage data.


	How many hard disks can you buy?


2-Clustering support.


Please be more precise.


3-price.


Free.


4-speed.


Proportional to the expertise of the DBA.
	Postgres can be extremely fast if used correctly; it can totally suck if  
the database is badly designed.
	But this is completely normal. It is as fast as the other major players,  
and all of them need expertise to work well.
	If you're a newbie, you'll make newbie errors; fortunately, this list has  
many friendly, knowledgeable people who can help you, and the docs are  
excellent.



THIS EMAIL AND ANY ATTACHED FILES ARE CONFIDENTIAL.


Aw. Come on.



Re: [GENERAL] general features

2007-06-03 Thread Badawy, Mohamed
 

Thanks for all of the replies,,,

 

What I mean by clustering is to put the database on many machines.

 

 

Best Regards,,,

 

Mohamed Badawy

 





Re: [GENERAL] multimaster

2007-06-03 Thread Alexander Staubo

On 6/3/07, Ian Harding [EMAIL PROTECTED] wrote:

Rails propaganda explicitly proposes "not repeating yourself" and

[...]

The creator of RoR explicitly discourages use of RI, rules, triggers,
etc in the database as unnecessary.  His disciples take this as
gospel.


The creator of Rails is not the only developer working on Rails. There
are Rails developers who disagree with him on these issues, and his
disciples do not constitute the whole community. On the other hand, I
admit that this mindset has had a negative impact on the development of
Rails as a whole.

I consider myself a moderate pragmatist, and I think both sides are
slightly wrong; the database side not pragmatic enough, and the Rails
side pragmatic at the cost of moving too much database logic into the
app.

For example, part of the point of having validations declared on the
model is so that you can raise user-friendly errors (and pipe them
through gettext for localization) such as "Your password must be at
least 4 characters long and contain only letters and digits."
Databases don't support this, and so applications end up having to
duplicate data-validation logic in order to find out what kind of user
input is invalid.

There might be hoops you could jump through to reduce the duplication.
You could, perhaps, introspect the schema and see that the password
column has a check constraint with a certain expression (e.g.,
password ~ '^\w+$'). On insertion failure, you use the expression
string to generate a select -- e.g., select password ~ '^\w+$' from
(select 'foobar'::text as password) as t -- on every column you have
to see what failed. I don't think PostgreSQL has full support for ANSI
SQLSTATE column context information yet, but even if it had, I think
you would not get information about *all* failing columns, only the
first one (anyone know?), so you would still need to run the values
through the database with a select.

At this point you don't have an error message, but you could store
these in a separate table (table_name, column_name, message) or
perhaps (table_name, constraint_name, message) and look them up on
failure.
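
The lookup-table idea above can be sketched in Ruby (an editorial sketch, not from the thread; the constraint names and messages are hypothetical, while the error-string format follows PostgreSQL's check-constraint message):

```ruby
# Sketch of the (table_name, constraint_name, message) lookup idea:
# map constraint names, parsed from the error text, to friendly messages.
MESSAGES = {
  "persons_name_check" => "Name must not be empty.",
  "persons_age_check"  => "Age must be greater than zero.",
}

def friendly_error(pg_error)
  # PostgreSQL names the violated constraint in double quotes.
  if pg_error =~ /violates check constraint "([^"]+)"/
    MESSAGES.fetch($1, "Invalid input.")
  else
    "Invalid input."
  end
end

err = 'ERROR:  new row for relation "persons" ' \
      'violates check constraint "persons_age_check"'
puts friendly_error(err)
```

This still only surfaces the first failing constraint, which is exactly the limitation discussed in the surrounding text.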

Another option might be to use triggers that call RAISE -- which may
be acceptable for check constraints, but breaks the idiom for the
other types of constraints; at best you will be repeating yourself.
Another idea: I believe SQLSTATE (as implemented by PostgreSQL)
currently lacks context information about which columns failed a
constraint, but if you had this, you could correlate failures with columns.

None of this is terribly idiomatic, and it involves a bunch of glue
between application and database which needs to be incorporated into a
database layer.

This may be a case for letting constraints have an optional
description; this way the schema would also be self-documenting, eg.:

 create table foo (
   id serial,
   name text check (name ~ '^(\w|\s)+$')
     or raise error 'Name must contain letters, digits and spaces only'
 );

This still isn't enough for the app side -- if multiple columns fail
to validate, the app needs to know about them all, not just the first
one:

# create table persons (name text check (name != '') not null unique,
age int check (age > 0));
# insert into persons (name, age) values ('', 0);
ERROR:  new row for relation "persons" violates check constraint
"test_name_check1"


I have used rails for a couple projects and I repeated myself
religiously in database constraints.  This revealed a couple
disturbing properties of rails that made me not want to use it,
notably, assuming an empty input box equates to an empty string vice
null for text data types, even if the table is defined as accepting
nulls.


An empty string is not null! Null means the value is missing, which is
clearly not the case here. I would say Rails is exactly in the right
here. When an HTML form is posted, empty input boxes are declared as
empty strings, which is what the user entered. The problem is not with
Rails/ActiveRecord but with your form handling. If you want empty
boxes to become null, add some client-side JavaScript logic that sets
the disabled attribute on empty input elements before the form is
submitted; this will prevent the client from sending the value.

Alexander.



Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding

On 6/2/07, Alexander Staubo [EMAIL PROTECTED] wrote:

On 6/2/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
 I don't know if it's a general problem, but I've been involved in a
 project using rails and it appears to have its own way of declaring the
 database. It presumes to handle referential integrity and uniqueness in
 the application code (!).

I think you've been misled. True, Rails/ActiveRecord does bear the
signs of having been designed for MySQL/MyISAM, which has neither
transactions nor referential integrity, but this does not mean that
Rails does not support these constructs, or that Rails users don't use
them. I value my data integrity, so all my relations have RI, unique
constraints, null constraints, etc. as in any well-designed schema.



Rails propaganda explicitly proposes "not repeating yourself", and
since the RI constraints are defined in the rails models, and they are
enforced in the framework with a graceful feedback mechanism for
users, they implicitly denigrate defining constraints in the DB as
"repeating yourself."

The creator of RoR explicitly discourages use of RI, rules, triggers,
etc. in the database as unnecessary.  His disciples take this as
gospel.

I have used rails for a couple projects and I repeated myself
religiously in database constraints.  This revealed a couple
disturbing properties of rails that made me not want to use it,
notably, assuming an empty input box equates to an empty string vice
null for text data types, even if the table is defined as accepting
nulls.

- Ian



Re: [GENERAL] general features

2007-06-03 Thread Ron Johnson

On 06/03/07 09:08, Badawy, Mohamed wrote:
[snip]


Thanks for all of the replies,,,

What I mean by clustering is to put the database on many machines.


Single database on many machines?

Do you mean federation/horizontal scaling, or DR replication, or 
something different?  PostgreSQL has master-slave replication using 
Slony-I.


If you mean simultaneous access to the same disks from multiple 
machines without corrupting the data, you'll have to go to a 
proprietary system.  10K (15K if you've got the scratch) RPM RAID10, 
8GB RAM and a couple of dual-core Opterons is a *really* fast 
database box.  You might not need more than that.


But if you *do* need the continuous uptime that shared-disk 
clustering and rolling in-place upgrades gives you, then Rdb/VMS 
can't be beat.  It'll *cost*, though.


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding


An empty string is not null! Null means the value is missing, which is
clearly not the case here. I would say Rails is exactly in the right
here. When an HTML form is posted, empty input boxes are declared as
empty strings, which is what the user entered. The problem is not with
Rails/ActiveRecord but with your form handling. If you want empty
boxes to become null, add some client-side JavaScript logic that sets
the disabled attribute on empty input elements before form is
submitted; this will prevent the client from sending the value.


The user was presented an _opportunity_ to enter data and did not.
The data is unknown.  I don't know how you can say "the user
entered an empty string."  There is no "empty string" key on the
keyboard.

I have no idea why I got such hard pushback on this.  This is the
EXACT same behaviour other types use.  If a number field is presented
to the user and submitted with no value, NULL Is inserted.  Not zero,
which is the numeric equivalent of the empty string, but NULL.   Same
with date types.  Why not say they entered '1/1/1970' by default if
they entered nothing?

http://dev.rubyonrails.org/ticket/3301

- Ian



Alexander.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Alexander Staubo

On 6/3/07, Ian Harding [EMAIL PROTECTED] wrote:

 An empty string is not null! Null means the value is missing, which is
 clearly not the case here. I would say Rails is exactly in the right
 here. When an HTML form is posted, empty input boxes are declared as
 empty strings, which is what the user entered. The problem is not with
 Rails/ActiveRecord but with your form handling. If you want empty
 boxes to become null, add some client-side JavaScript logic that sets
 the disabled attribute on empty input elements before form is
 submitted; this will prevent the client from sending the value.

The user was presented an _opportunity_ to enter data and did not.
The data is unknown.  I don't know how you can say ...The user
entered an empty string.  There is no empty string key on the
keyboard.


Not at all. If the input box already contained a string, and the user
erased the contents of the input box, then the user has, in effect,
entered an empty string. Not a null. This is a UI layer issue, not a
database issue.


I have no idea why I got such hard pushback on this.  This is the
EXACT same behaviour other types use.  If a number field is presented
 to the user and submitted with no value, NULL is inserted.  Not zero,
which is the numeric equivalent of the empty string, but NULL.   Same
with date types.  Why not say they entered '1/1/1970' by default if
they entered nothing?


Ah, no. An empty string is not a valid number -- in fact, it is the
absence of a number; the same goes for dates. An empty string,
however, is a valid string, since a string is (in this context)
defined as a sequence of 0 to n characters.

Your patch is awful because it would mean there was no way to enter an
empty string in the database. A one-character string containing a
single space is not an empty string.
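
The distinction being argued here is concrete in any language with a null value. An editorial illustration in Ruby (the language used elsewhere in this thread):

```ruby
# nil (the analogue of SQL NULL) and the empty string are distinct values
# with distinct semantics.
missing = nil
empty   = ""

p missing == empty   # false: a NULL-like absence is not an empty string
p empty.empty?       # true: "" is a real string of length 0
p missing.nil?       # true
```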

Alexander.



Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Ian Harding

On 6/3/07, Alexander Staubo [EMAIL PROTECTED] wrote:

On 6/3/07, Ian Harding [EMAIL PROTECTED] wrote:
  An empty string is not null! Null means the value is missing, which is
  clearly not the case here. I would say Rails is exactly in the right
  here. When an HTML form is posted, empty input boxes are declared as
  empty strings, which is what the user entered. The problem is not with
  Rails/ActiveRecord but with your form handling. If you want empty
  boxes to become null, add some client-side JavaScript logic that sets
  the disabled attribute on empty input elements before form is
  submitted; this will prevent the client from sending the value.

 The user was presented an _opportunity_ to enter data and did not.
 The data is unknown.  I don't know how you can say ...The user
 entered an empty string.  There is no empty string key on the
 keyboard.

Not at all. If the input box already contained a string, and the user
erased the contents of the input box, then the user has, in effect,
entered an empty string. Not a null. This is a UI layer issue, not a
database issue.

 I have no idea why I got such hard pushback on this.  This is the
 EXACT same behaviour other types use.  If a number field is presented
  to the user and submitted with no value, NULL is inserted.  Not zero,
 which is the numeric equivalent of the empty string, but NULL.   Same
 with date types.  Why not say they entered '1/1/1970' by default if
 they entered nothing?

Ah, no. An empty string is not a valid number -- in fact, it is the
absence of a number; the same goes for dates. An empty string,
however, is a valid string, since a string is (in this context)
defined as a sequence of 0 to n characters.

Your patch is awful because it would mean there was no way to enter an
empty string in the database. A one-character string containing a
single space is not an empty string.


Yeah, it is awful ;^)  However, the existing system is equally awful
because there is no way to enter NULL!

Properly implemented, the rails model would allow you to indicate
nullability and use null if no data is provided.

- Ian



Alexander.





Re: [GENERAL] Strange delimiters problem

2007-06-03 Thread Andrej Kastrin

hubert depesz lubaczewski wrote:
On 6/3/07, Andrej Kastrin [EMAIL PROTECTED] wrote:


||001||,||Classification||,||Economics||,||N||
||001||,||Classification||,||Trends||,||Y||
etc...


it looks like you should be able to read it using COPY command.
something like:
copy some_table from stdin with delimiter ',' csv quote '||';

depesz

Thanks,
but I receive the following error message when I'm trying to run the 
proposed copy command:


ERROR:  COPY quote must be a single character

I use PostgreSQL 8.1.5.

Best, Andrej
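
One workaround (an editorial sketch, not from the thread; it assumes the two-character quote '||' never occurs inside a field value) is to rewrite the quoting into standard CSV before feeding the data to COPY ... CSV, whose quote must be a single character:

```ruby
# Convert '||'-quoted rows into standard CSV that COPY ... CSV accepts.
# Assumption (not from the thread): '||' never appears inside a field value,
# so each '||' can simply become a double quote.
def normalize_quotes(line)
  line.gsub('||', '"')
end

input = <<~DATA
  ||001||,||Classification||,||Economics||,||N||
  ||001||,||Classification||,||Trends||,||Y||
DATA

input.each_line { |line| print normalize_quotes(line) }
```

The rewritten stream can then be loaded with a plain `copy some_table from stdin with csv;`.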





Re: [GENERAL] multimaster

2007-06-03 Thread Martijn van Oosterhout
On Sun, Jun 03, 2007 at 07:47:04PM +0200, Alexander Staubo wrote:
 The creator of Rails is not the only developer working on Rails. There
 are Rails developers who disagree with him on these issues, and his
 disciples do not constitute the whole community. On the other hand, I admit
 that this mindset has a negative impact on the development of Rails as
 a whole.

Indeed, it certainly left me with a bad taste in my mouth.

 For example, part of the point of having validations declared on the
 model is so that you can raise user-friendly errors (and pipe them
 through gettext for localization) such as "Your password must be at
 least 4 characters long and contain only letters and digits."
 Databases don't support this, and so applications end up having to
 duplicate data-validation logic in order to find out what kind of user
 input is invalid.

I think you're confusing validation and integrity constraints. The
example you're giving could be implemented in either the DB or the app.
Personally, I'd do it in the app since it's something that doesn't
affect the integrity of the data. If I go in and manually change
someone's password to something not following that rule, it's not going
to affect anything.

Integrity constraints are different: if you violate them, your data has
serious problems. They are the assumptions of your model upon which
everything depends. We're talking referential integrity and uniqueness.
These things *cannot* be checked reliably in the app, and you shouldn't
try. You assume the constraints are valid and feel confident they are,
because the database has checked them for you.

 This still isn't enough for the app side -- if multiple columns fail
 to validate, the app needs to know about them all, not just the first
 one:

Basically, validation in this case is completely orthogonal to
integrity checks. If the data being validated isn't crucial to the
operation of the app, I wouldn't bother putting it in the database. But
for fundamental integrity constraints, the database is the only place.
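
The division of labor described here can be sketched in plain Ruby (an editorial sketch with hypothetical names; the rule is one reading of the password example from earlier in the thread -- the database would still own integrity rules such as uniqueness and referential integrity):

```ruby
# App-side validation: checks a business rule and returns user-friendly
# messages. The regex is a hypothetical reading of the thread's example.
PASSWORD_RULE = /\A[A-Za-z0-9]{4,}\z/

def password_errors(password)
  return [] if password =~ PASSWORD_RULE
  ["Your password must be at least 4 characters long " \
   "and contain only letters and digits."]
end

p password_errors("abc")     # too short: one friendly error message
p password_errors("abcd1")   # valid: empty array
```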

I wish you success in your efforts to make rails a little more sane
in this area.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] monitor stats

2007-06-03 Thread Diego Fernando Arce

hello again

Yes, but I need it in real time, not in the log files.

lynx /usr/share/doc/postgresql-8.1.8/html/monitoring-stats.html

SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
       pg_stat_get_backend_activity(s.backendid) AS current_query
  FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;

in 8.1.4

aslecol=# SELECT pg_stat_get_backend_pid(s.backendid) AS
procpid,pg_stat_get_backend_activity(s.backendid) AS current_query FROM
(SELECT pg_stat_get_backend_idset() AS backendid) AS s;
 procpid | current_query
---------+--------------------------------------------------------------
   25506 | <IDLE>
   25507 | select entidad,producto,num_doc,f_pago,to_char(sald_mora::numeric(12,2),'999,999,999.99'),fecha,sald_mora::numeric(12,2) from bcsc_facics where 1=1 and fecha_carga in (select max(fecha_carga) from bcsc_facics where codcli='1448998') and bcsc_facics.codcli='1448998' order by fecha_carga desc,entidad,producto,num_doc
   25524 | <IDLE>
    6447 | <IDLE>
     598 | <IDLE>

in 8.1.8

aslecol=# SELECT pg_stat_get_backend_pid(s.backendid) AS
procpid,pg_stat_get_backend_activity(s.backendid) AS current_query FROM
(SELECT pg_stat_get_backend_idset() AS backendid) AS s;
 procpid | current_query
---------+---------------
(0 filas)




On 5/29/07, Shoaib Mir [EMAIL PROTECTED] wrote:


Make sure some queries are being executed on the server by checking
pg_stat_activity. Make sure your logging level for log files is such that it
can log the queries in the db server log file.

In addition to log_statement you can also set log_min_duration_statement
to 0 and that will also log queries in your log file.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 5/29/07, Diego Fernando Arce  [EMAIL PROTECTED] wrote:

 show log_statement;

  log_statement
 ---
  all
 (1 fila)


 select pg_stat_get_backend_pid(s.backendid
 ),pg_stat_get_backend_activity(s.backendid) from (select
 pg_stat_get_backend_idset() as backendid)s;

  pg_stat_get_backend_pid | pg_stat_get_backend_activity
 -------------------------+------------------------------
 (0 filas)
 (executed while another query was running in another terminal)


 On 5/30/07, Shoaib Mir [EMAIL PROTECTED]  wrote:
 
  Did you do a reload or restart of the server after doing this change?
 
  what do you see when you do this query? -- show log_statement
 
  --
  Shoaib Mir
  EnterpriseDB ( www.enterprisedb.com)
 
  On 5/29/07, Diego Fernando Arce [EMAIL PROTECTED]  wrote:
  
   this is a part of postgrsql.conf
  
  
   log_statement = 'all'   # none, mod, ddl, all
   #log_hostname = off
  
  
  
   
#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------
  
   # - Statistics Monitoring -
  
   #log_parser_stats = off
   #log_planner_stats = off
   #log_executor_stats = off
   #log_statement_stats = off
  
   # - Query/Index Statistics Collector -
  
   stats_start_collector = on
   stats_command_string = on
   #stats_block_level = off
   #stats_row_level = off
   #stats_reset_on_server_start = off
  
  
   In 8.1.6 and 8.1.4 it works.
  
  
  
  
   On 5/29/07, Shoaib Mir [EMAIL PROTECTED]  wrote:
   
    Did you try setting log_statement = 'all' in your postgresql.conf file?
   
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
   
On 5/29/07, Diego Fernando Arce [EMAIL PROTECTED]  wrote:

 hello, I have a question: monitor stats does not work in
 postgres 8.1.8

  I cannot see queries in execution

 help me please..

 DiegoF

   
   
  
 




Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread Alexander Staubo

On 6/3/07, Ian Harding [EMAIL PROTECTED] wrote:

On 6/3/07, Alexander Staubo [EMAIL PROTECTED] wrote:
 Your patch is awful because it would mean there was no way to enter an
 empty string in the database. A one-character string containing a
 single space is not an empty string.

Yeah, it is awful ;^)  However the existing system is equally awful
because there is no way to enter NULL!


But there is. One could, quite convincingly, I think, argue that the
parsing of '' (empty string) into nil/null is data model-specific. One
solution, then, is to add this rule to the model:

class User < ActiveRecord::Base
 ...

 def description=(value)
   value = nil if value.blank?
   self.write_attribute(:description, value)
 end

end

You can easily refactor this into a plugin, which you could then invoke thus:

class User < ActiveRecord::Base
 null_when_empty :description
 ...
end

This is getting very Rails-specific, so I'll stop here. I would be
happy to send you the code (it's probably around 15 lines) for such a
plugin privately if you like.
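
The ~15-line plugin itself is not shown in the thread. A hypothetical, framework-free sketch of the same idea (not Alexander's actual code; a class macro whose attribute writer coerces blank strings to nil) could look like:

```ruby
# Framework-free stand-in for a "null_when_empty" class macro: the writer
# coerces empty or whitespace-only strings to nil before storing the value.
module NullWhenEmpty
  def null_when_empty(*attrs)
    attrs.each do |attr|
      attr_reader attr
      define_method("#{attr}=") do |value|
        value = nil if value.respond_to?(:strip) && value.strip.empty?
        instance_variable_set("@#{attr}", value)
      end
    end
  end
end

class User
  extend NullWhenEmpty
  null_when_empty :description
end

u = User.new
u.description = ""
p u.description      # => nil
u.description = "hello"
p u.description      # => "hello"
```

In Rails the same writer would call `write_attribute` instead of setting an instance variable, as in the snippet quoted above.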


Properly implemented, the rails model would allow you to indicate
nullability and use null if no data is provided.


The preferred approach nowadays is not to clutter the Rails (or in
this case, ActiveRecord) core unduly with all sorts of app-specific
solutions, and instead move code out into plugins. Plugins that, over
time, prove to be universally useful, would be considered for
inclusion into the core. So a plugin is a start.

Alexander.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] why postgresql over other RDBMS

2007-06-03 Thread Scott Ribe
 So it works right now, except it doesn't have (yet) the infrastructure to
 keep the scans synchronized

Perhaps you only got one read of the table because the process is
essentially self-synchronizing. Whenever one process gets ahead, it
requires a disk read for the next page, which causes it to block for a
relatively long time, during which time the other two processes either
proceed reading rows from cache, or come to the end of the cache and block
waiting for the same page to be read from disk. Obviously not a guarantee,
as indexing a relatively more expensive type COULD cause one process to get
multiple pages behind, and memory usage by other processes COULD cause
intervening pages to be flushed from cache. But I have a suspicion that the
experiment was not just a happy fluke, that there will be a strong tendency
for multiple simultaneous index operations to stay sufficiently closely
synch'd that the table will only be read from disk once. (Especially when
such operations are done while the database is otherwise quiescent, as would
be the typical case during a restore.)

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] multimaster

2007-06-03 Thread Alexander Staubo

On 6/3/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:

 For example, part of the point of having validations declared on the
 model is so that you can raise user-friendly errors (and pipe them
 through gettext for localization) such as Your password must be at
 least 4 characters long and contain only letters and digits.
 Databases don't support this, and so applications end up having to
 duplicate data-validation logic in order to find out what kind of user
 input is invalid.

I think you're confusing validation and integrity constraints. The
example you're giving could be implemented in either the DB or the app.


No, I was pointing out that Rails supports uniqueness and referential
integrity, but that it implemented validations as a general construct
in order to (among other things) provide user-friendly messages.

But what I said also applies to uniqueness and foreign key
constraints. Databases, including PostgreSQL, make it hard for an
application to determine what part of the data failed when it did. You
get an error for some arbitrary column, but not all columns; and the
error does not (as far as I know) actually contain the column that
failed.


Personally I'd do it in the app since it's something that doesn't
affect the integrity of the data. If I go in and manually change
someone's password to something not following that rule it's not going
to affect anything.


I agree with you and I don't; as it stands now, it's too hard to
implement validation in the database alone, for the reasons I stated
earlier. But I would love for it to be possible, so that I can be sure
that not even plain SQL can screw up the data.

Alexander.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SQL Manager 2007 for PostgreSQL released

2007-06-03 Thread Tino Wildenhain

L. Berger wrote:

On May 23, 8:47 am, EMS Database Management Solutions
(SQLManager.net) [EMAIL PROTECTED] wrote:

We, here at EMS Database Management Solutions, are pleased to announce
SQL Manager 2007 for PostgreSQL - the new major version of the
powerful PostgreSQL administration and development tool!

You can download SQL Manager 2007 for PostgreSQL 
at:http://www.sqlmanager.net/products/postgresql/manager/download

You can purchase SQL Manager 2007 for PostgreSQL 
at:http://www.sqlmanager.net/products/postgresql/manager/buy

What's new in SQL Manager 2007 for PostgreSQL?


snip


Thanks for this, but is there any plan to launch something like this
for use on Linux admin servers? Something that I could install on a
server, and perhaps work with a web interface? I would love some
recommendations.


Maybe I'm a bit old fashioned, but I would never ever consider
graphical frontends for a server. I mean, ssh -L and
local GUI client work very well here. Even web based stuff
seems dangerous (if not just for educational purposes)

Regards
Tino

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Continuous PITR (was Re: multimaster)

2007-06-03 Thread Scott Ribe
 P.S. it's not the cloud anymore, it's the tubes.

It was always tubes. The cloud was just a convenient simplification for the
technically declined ;-)

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Corruption of files in PostgreSQL

2007-06-03 Thread Scott Ribe
I don't use lo_import and lo_export myself, but is there any way to log
their usage? It certainly sounds as though step 1 for this user is to keep
track of how much data is handed to PG for each file, and how much data is
returned to PG for each file (and how much data is in the file at the time
of the request).

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] High-availability

2007-06-03 Thread Lew

Alexander Staubo wrote:

As a side-note, I set up pgpool-II today, and was pleasantly surprised
about how easy it all was; within two minutes I had two databases in
perfect sync on my laptop. It has limitations (such as in its handling
of sequences), but compared to Slony it's like a breath of fresh
mountain air.


Err, the setup is, I mean. Once you have Slony up and running, it's
pretty smooth.


I wonder what the OP means by real-time.  The standard definition is within 
a deterministic time bound.


Replication implies latency.  Ignoring latency or wishing it away will not help.

It is possible to manage latency.  One strategy is to minimize it.  There are 
others.


Also remember the ancient proverb, applicable when two or more nodes are 
trying to agree on what time it is:

Man with two watches never knows correct time.

I think of this category of issue as the Special Relativity of information.

--
Lew



Re: [GENERAL] Multiple customers sharing one database?

2007-06-03 Thread Lew

Rick Schumeyer wrote:
I'm developing an application that will be used by several independent 
customers.  Conceptually, the data from one customer has no relation 
at all to another customer.  In fact, each customer's data is private, 
and you would never combine data from different customers.  


Michael Glaesemann wrote:
Unless the database schema is shared by the different customers, I'd set 
up a separate database for each. There's better security with separate 
databases, and since different customer's data would never be combined, 
there's no advantage to putting them in the same one. Per database 
overhead is probably going to be negligible compared to the 
infrastructure you'd want to put in place for security.


I am always confused by the overloaded term database in such discussions. 
Do we mean the RDBMS engine, i.e., run separate instances of PG?  I sure would 
recommend against that.  Or do we mean it as the PG documentation does, e.g.,

http://www.postgresql.org/docs/8.1/interactive/ddl-schemas.html


A PostgreSQL database cluster contains one or more named databases. Users and 
groups of users are shared across the entire cluster, but no other data is 
shared across databases. Any given client connection to the server can access 
only the data in a single database, the one specified in the connection request.


That seems to be PostgreSQL's answer to the OP's question.

Note:  Users of a cluster do not necessarily have the privilege to access every database in the cluster. Sharing of user names means that there cannot be different users named, say, joe in two databases in the same cluster; but the system can be configured to allow joe access to only some of the databases. 


In the OP's case, some of the databases is one of the databases.
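A minimal sketch of the separate-database arrangement (role and database names are hypothetical; the CONNECT privilege on databases exists from 8.2 on):

```sql
-- One database per customer, owned by that customer's role:
CREATE ROLE acme LOGIN PASSWORD 'secret';   -- placeholder password
CREATE DATABASE acme_db OWNER acme;

-- 8.2+: make sure other customers' roles cannot even connect to it:
REVOKE CONNECT ON DATABASE acme_db FROM PUBLIC;
GRANT CONNECT ON DATABASE acme_db TO acme;
```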

--
Lew



Re: [GENERAL] Delete with subquery deleting all records

2007-06-03 Thread Lew

Francisco Reyes wrote:

Lew writes:

Strange?  Why?  Did you expect a particular statistical distribution?  
Perhaps 



The impression was that one query was returning everything.. and the 
other only the records that did not exist in the one table.


you were surprised by the extent of the situation, not thinking there 
could be 100 records that didn't match?


Surprised that the outer join actually did ONLY display records that did 
not exist in the second table, even though I did not have a where clause 
to not list the records with a NULL value.


You only looked at some of the records, not all of them, correct?

Ah, yes, you did say,

I checked a number of them.


Your evaluation of a whole data set by manual examination of a small subset of 
the returned results cannot be certain.


Did you try SELECT COUNT(*) to check if the queries differed in the size of 
their returned result sets?



That is what I expected, BUT it only returned records that did NOT exist in the 
second table. It did not, as far as I could check, return all records.


You mean as far as you did check.  You still do not know the truth of your 
assertion that the outer join returned only a subset of the records.


SELECT COUNT( DISTINCT export_messages.export_id )
FROM export_messages
LEFT OUTER JOIN exports ON (export_messages.export_id = exports.export_id);

vs.

SELECT COUNT( DISTINCT export_messages.export_id )
FROM export_messages;

will reveal.

--
Lew



[GENERAL] insane index scan times

2007-06-03 Thread Sergei Shelukhin
Create table entries (id bigint primary key, somedata varchar(500));
/* insert ~12mil rows of data, somedata mostly empty */

create table stuff (id bigint, bah int, primary key (id, bah));

insert into stuff (id,bah) select id, 1 from entries;

create index blah on stuff(id);

vacuum full analyze;

set enable_seqscan = on;

explain select * from entries inner join stuff on entries.id =
stuff.id;

 ->  Seq Scan on stuff  (cost=0.00..193344.00 rows=12550400 width=12)


set enable_seqscan = off;

explain select * from entries inner join stuff on entries.id =
stuff.id;

 ->  Index Scan using blah on stuff  (cost=0.00..25406232.30
rows=12550400 width=12)

Query execution results are consistent with explain. wtf? no I mean,
WTF?!!!

Sorry. But I 'm amazed.




Re: [GENERAL] High-availability

2007-06-03 Thread Lew

Madison Kelly wrote:
  Being a quite small company, proprietary hardware and fancy software 
licenses are not possible (ie: 'use oracle' won't help).


How much data do you put in the DB?  Oracle has a free version, but it has 
size limits.


(Ducking the slings and arrows of outraged PG fans: I prefer Postgre, I really 
do.)


--
Lew



Re: [GENERAL] Faster data type for one-length values

2007-06-03 Thread Lew

Ron Mayer wrote:

Alvaro Herrera wrote:

André Volpato wrote:


The ammount of space saved seems pretty clear to me.

Yeah, zero most of the time due to alignment.


So trading off more I/O for less CPU?

I wonder if for any I/O bound database servers
it might be worth packing tightly rather than
aligning indexes on one-byte data.


The OP didn't say one-byte data, they said one-char data.

--
Lew



Re: [GENERAL] collision in serial numbers after INSERT?

2007-06-03 Thread Steve Lefevre

Bill Moran wrote:

Don't do that. Please let us know what site recommended that so I can
send an email to the author correcting them.
  

Hello Bill -

The 'offending' site and article is at
http://www.sitepoint.com/article/site-mysql-postgresql-2/3


Instead, do SELECT currval('seqname'), which is guaranteed to be isolated
from other sessions.
  
I've also gotten other advice that SELECT next_val (whatever the exact
wording is) will reserve that serial number for you. Is that true?



If you use the code above, sooner or later you're going to get bit.
  

Thanks!




Re: [GENERAL] autovacuum vacuums entire database...is this right?

2007-06-03 Thread [EMAIL PROTECTED]
It's been about a month and a half, and I'm getting this VACUUM
again.  This time, I'm wondering if there's any way to tell if
autovacuum is doing a database-wide vacuum for the sake of xid
wraparound or for some other reason.  Is there some sort of entry that
gets put into the log, and if so, what log level would it be at?

If this doesn't get logged, could I make this a feature request?

Thanks!
--Richard



On Apr 15, 6:35 am, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Hi,

 I'm just wondering if autovacuum is ever supposed to vacuum the entire
 database during one of its runs.  As far as I remember, it's supposed
 to vacuum one table at a time, based on the
 autovacuum_vacuum_threshold, autovacuum_analyze_threshold, etc.
 settings.

 For some reason, autovacuum decided to run a vacuum on my entire
 database (29GB large), and it's taking forever:

 select now(), query_start, current_query, backend_start, procpid,
 usename from pg_stat_activity where current_query <> '<IDLE>';

               now              |          query_start          | current_query |         backend_start         | procpid | usename
 -------------------------------+-------------------------------+---------------+-------------------------------+---------+----------
  2007-04-15 06:34:27.925042-07 | 2007-04-14 22:23:31.283894-07 | VACUUM        | 2007-04-14 22:23:31.274121-07 |    9406 | postgres

 Is this expected behavior?

 --Richard





Re: [GENERAL] There can be only one! How to avoid the highlander-problem.

2007-06-03 Thread Lew

Erwin Brandstetter wrote:

CREATE TABLE mankind
(
 man_id integer primary key,
 people_id integer NOT NULL,  -- references table people .., but 
that's irrelevant here ..
 king boolean NOT NULL DEFAULT false
);


The trouble with this is that it models kingship as an attribute of every 
man.  (What, no female rulers allowed?)  The overhead of being not king is 
carried in every mankind record. This may suffice for your particular model, 
but if you were designing for evolution you'd have a problem.  Every new 
attribute of mankind would need a new column in the table - isDuke, 
isNoble, isHogSlopCleaner.


I would model kingship (or other attributes) in a separate table and use 
PRIMARY KEY to enforce, or a TRIGGER - there is a large, possibly unbounded 
set of ways to do this.


Here's one attempt, feel free to rate it good, bad or ugly (I am ignoring my 
mistrust of auto-incremented integer surrogate keys):


CREATE TABLE mankind
(
  man_id INTEGER PRIMARY KEY
  -- kings belong to nations, not vice versa
  -- , other information about a man
);
CREATE TABLE nationalities
(
  man_id INTEGER REFERENCES mankind ( man_id ),
  nation_id INTEGER REFERENCES nations ( nation_id ),
  PRIMARY KEY ( man_id, nation_id )
);
CREATE TABLE nations
(
  nation_id INTEGER PRIMARY KEY
  , king INTEGER REFERENCES mankind ( man_id )
  -- , other information about a nation
  , FOREIGN KEY ( king, nation_id )
REFERENCES nationalities ( man_id, nation_id )
);

The circular foreign-key relationships might be problematic - would someone 
comment on that?


To handle that I would ensure that any transaction that updates
nations (king) checks that the pretender's man_id is already correctly 
entered in nationalities.
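For what it's worth, a partial unique index over Erwin's original boolean column is another declarative way to get "at most one king" (a sketch against his mankind table, not the redesign above):

```sql
-- At most one row per people may have king = true; rows where
-- king is false are not indexed at all, so ordinary men carry
-- no uniqueness overhead.
CREATE UNIQUE INDEX one_king_per_people
    ON mankind (people_id)
    WHERE king;
```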


--
Lew



Re: [GENERAL] Multiple customers sharing one database?

2007-06-03 Thread Michael Glaesemann


On Jun 3, 2007, at 12:45 , Lew wrote:


Michael Glaesemann wrote:

Unless the database schema is shared by the different customers,  
I'd set up a separate database for each. There's better security  
with separate databases, and since different customer's data would  
never be combined, there's no advantage to putting them in the  
same one. Per database overhead is probably going to be negligible  
compared to the infrastructure you'd want to put in place for  
security.


I am always confused by the overloaded term database in such  
discussions. Do we mean the RDBMS engine, i.e., run separate  
instances of PG?  I sure would recommend against that.  Or do we  
mean it as the PG documentation does, e.g.,

http://www.postgresql.org/docs/8.1/interactive/ddl-schemas.html


I likewise try to be careful in my usage of database and database  
server as they are different things. I meant (and used) database, not  
database server.


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] collision in serial numbers after INSERT?

2007-06-03 Thread Bill Moran
Steve Lefevre [EMAIL PROTECTED] wrote:

 Bill Moran wrote:
  Don't do that. Please let us know what site recommended that so I can
  send an email to the author correcting them.

 Hello Bill -
 
 The 'offending' site and article is at
 http://www.sitepoint.com/article/site-mysql-postgresql-2/3

My goodness, that article is ancient.  2001.  I have a hard time
believing he's going to update it if it's been wrong that long.

  Instead, do SELECT currval('seqname'), which is guaranteed to be isolated
  from other sessions.

 I've also gotten other advice to SELECT next_val ( whatever the exact 
 wording is) will reserve that serial number for you. Is that true?

Yes, please see the documentation.  Both currval() and nextval() are
transaction safe (thus guaranteed not to cause overlapped serials) but
they do slightly different things.
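A minimal sketch of the safe patterns (the table and sequence names here are hypothetical — a users table whose serial id is backed by users_id_seq):

```sql
-- Safe: nextval() reserves a value for this session; no other
-- session can ever be handed the same one, even concurrently.
SELECT nextval('users_id_seq');

-- Also safe: currval() returns the value nextval() most recently
-- produced *in this session*, e.g. right after an INSERT.
INSERT INTO users (name) VALUES ('alice');
SELECT currval('users_id_seq');

-- Never do this: another session may insert between your INSERT
-- and the SELECT, handing you someone else's id.
-- SELECT max(id) FROM users;
```

On 8.2 and later, INSERT ... RETURNING id does the whole round trip in one statement.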

-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] multimaster

2007-06-03 Thread Jeff Davis
On Sun, 2007-06-03 at 22:54 +0200, Alexander Staubo wrote:
 I agree with you and I don't; as it stands now, it's too hard to
 implement validation in the database alone, for the reasons I stated
 earlier. But I would love for it to be possible, so that I can be sure
 that not even plain SQL can screw up the data.

You're blurring the line between an RDBMS and an application.
Application errors and database errors do not have a one-to-one
mapping, although they do usually overlap. 

There are times when one database error maps onto several possible
user-level errors; and when many database errors map onto the same
user-level error; and when one database error does not cause any
user-level error; and when something that is a user-level error might
not have a matching constraint in the database at all. Trying to equate
the two concepts is a bad idea.

The application has much more information about the user and the context
of the error that the database shouldn't have. For instance, the
language that the user speaks might affect the error message. Or, there
may be two possible user interface actions that result in the same
constraint violation. For instance, if you have a two-column unique
constraint, perhaps there is one interface to change one column and
another to change the other. But you might want to return a different error to the user that
makes sense in the context of which value they tried to change. 

A database error doesn't even always need to be propagated back to the
user. If so, there would be no need for SAVEPOINTs and nobody would use
ISOLATION LEVEL SERIALIZABLE (not directly related to constraints, but
can cause an error just the same). 

Some user errors don't have a corresponding database constraint at all.
For instance, how about a "re-type your password here" field? That
should cause an error if it doesn't match the password field, but the
database would have no matching constraint.

Regards,
Jeff Davis




Re: [GENERAL] autovacuum vacuums entire database...is this right?

2007-06-03 Thread Alvaro Herrera
[EMAIL PROTECTED] escribió:
 It's been about a month and a half, and I'm getting this VACUUM
 again.  This time, I'm wondering if there's any way to tell if
 autovacuum is doing a database-wide vacuum for the sake of xid
 wraparound or for some other reason.  Is there some sort of entry that
 gets put into the log, and if so, what log level would it be at?

No, I don't think we had yet set more extensive logging in autovacuum.

 If this doesn't get logged, could I make this a feature request?

Not really, because starting from 8.2 autovacuum no longer does that.
There is no longer a need for database-wide vacuums.  So if you're
having problems because of that, your best bet is to upgrade.
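Meanwhile, on the 8.1 branch you can at least watch how close each database is to the wraparound threshold yourself (a sketch; column names per the 8.1/8.2 catalogs):

```sql
-- The database with the oldest datfrozenxid is the next candidate
-- for a wraparound-driven, database-wide VACUUM.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```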

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-03 Thread PFC



Yeah, it is awful ;^)  However the existing system is equally awful
because there is no way to enter NULL!


Consider this form :

First name :Edgar
Middle name :   J.
Last name : Hoover

Now, if someone has no middle name, like John Smith, should we use NULL  
or '' for the middle name?
NULL usually means "unknown" or "not applicable", so I believe we have to  
use the empty string here. It makes sense to be able to concatenate the  
three parts of the name, without having to put COALESCE() everywhere.


Now consider this form :

City:   
State   :
Country :

If the user doesn't live in the US, State makes no sense, so it should  
be NULL, not the empty string. There is no unnamed state. Also, if the  
user does not enter his city name, this does not mean he lives in a city  
whose name is ''. So NULL should be used, too.


It is very context-dependent.
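The concatenation point is easy to demonstrate (the people table and its name columns below are hypothetical):

```sql
-- NULL swallows the whole expression:
SELECT 'Edgar' || ' ' || NULL || ' ' || 'Hoover';   -- result is NULL

-- An empty string concatenates harmlessly:
SELECT 'John' || ' ' || '' || ' ' || 'Smith';       -- 'John  Smith'

-- With NULLs, every optional part needs a COALESCE():
SELECT first_name || ' ' || COALESCE(middle_name, '') || ' ' || last_name
FROM people;
```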



Re: [GENERAL] why postgresql over other RDBMS

2007-06-03 Thread PFC


I believe you have made quite a good description of what happens.

	Index-building isn't very CPU-intensive for integers (geometrics and  
tsearch is another matter, of course), so building all indexes of a large  
table in one pass is a possibility that works now, provided you issue all  
create index commands in concurrent connections at roughly the same time.


I don't think pg_restore does this, though.


So it works right now, except it doesn't have (yet) the infrastructure to
keep the scans synchronized


Perhaps you only got one read of the table because the process is
essentially self-synchronizing. Whenever one process gets ahead, it
requires a disk read for the next page, which causes it to block for a
relatively long time, during which time the other two processes either
proceed reading rows from cache, or come to the end of the cache and block
waiting for the same page to be read from disk. Obviously not a guarantee,
as indexing a relatively more expensive type COULD cause one process to get
multiple pages behind, and memory usage by other processes COULD cause
intervening pages to be flushed from cache. But I have a suspicion that the
experiment was not just a happy fluke, that there will be a strong tendency
for multiple simultaneous index operations to stay sufficiently closely
synch'd that the table will only be read from disk once. (Especially when
such operations are done while the database is otherwise quiescent, as would
be the typical case during a restore.)







Re: [GENERAL] multimaster

2007-06-03 Thread Alexander Staubo

On 6/4/07, Jeff Davis [EMAIL PROTECTED] wrote:

On Sun, 2007-06-03 at 22:54 +0200, Alexander Staubo wrote:
 I agree with you and I don't; as it stands now, it's too hard to
 implement validation in the database alone, for the reasons I stated
 earlier. But I would love for it to be possible, so that I can be sure
 that not even plain SQL can screw up the data.

You're blurring the line between an RDBMS and an application.
Applications errors and database errors do not have a one-to-one
mapping, although they do usually overlap.


True, and when they overlap you tend to want to describe the
validation errors in one place, not two -- either the database or the
app, not both. Relational databases have traditionally argued that
these rules should be in the former, so that there's one layer through
which every single change has to go.


There are times when one database error maps onto several possible
user-level errors; and when many database errors map onto the same
user-level error; and when one database error does not cause any
user-level error; and when something that is a user-level error might
not have a matching constraint in the database at all. Trying to equate
the two concepts is a bad idea.


I agree. In my experience, however, the best kind of data model is the
one that is immediately mappable to user-level concepts -- to human
concepts. A user relation has attributes like name, birth_date,
etc. If you manage to keep the model flat and friendly enough, you can
map the attributes to forms and translate attribute-level errors
directly to form error messages.

In the cases where a user-level attribute is represented by a set of
columns, or a referenced relation, or similar, you provide simple
shims that translate between them. For example, you probably want to
store date-time attributes as a single timestamp with time zone
column, but offer two fields to the user, one for the date and for the
time. With Rails this kind of shim is simple:

class User < ActiveRecord::Base
  ...
  validates_each :human_birth_date do |record, attr, value|
    record.errors.add(attr, "Bad date") unless MyDateParser.valid?(value)
  end

  def human_birth_date
    birth_datetime.strftime("%Y-%m-%d")
  end

  def human_birth_date=(date)
    year, month, day = MyDateParser.parse(date)
    self.birth_datetime = Time.local(year, month, day,
      birth_datetime.hour, birth_datetime.minute)
  end
end

With a well-designed, normalized schema, mapping relations and their
attributes to user input is very easy. I would argue that if mapping
is a problem, your schema is probably to blame.


The application has much more information about the user and the context
of the error that the database shouldn't have. For instance, the
language that the user speaks might affect the error message.


Localization is easily accomplished by piping the error message through gettext.


Some user errors don't have a corresponding database constriant at all.
For instance, how about a re-type your password here field? That
should cause an error if it doesn't match the password field, but the
database would have no matching constraint.


That's a user-interface detail, and not a data model detail; a
re-typed password has no database counterpart. I am speaking purely
about invariant constraints on the data itself.

Alexander.



Re: [GENERAL] insane index scan times

2007-06-03 Thread Alvaro Herrera
Sergei Shelukhin escribió:

 explain select * from entries inner join stuff on entries.id =
 stuff.id;
 
  ->  Seq Scan on stuff  (cost=0.00..193344.00 rows=12550400 width=12)
 
 
 set enable_seqscan = off;
 
 explain select * from entries inner join stuff on entries.id =
 stuff.id;
 
  ->  Index Scan using blah on stuff  (cost=0.00..25406232.30
 rows=12550400 width=12)
 
 Query execution resuls are consistent w/explain. wtf? no I mean,
 WTF?!!!
 
 Sorry. But I 'm amazed.

I am not.  You are asking it to give you 12 million rows -- so it does.
What's the surprise if it takes long?

Do you really want to have all 12 million rows as a result?
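To see the index earn its keep, make the query selective instead of forcing the planner's hand (a sketch using the tables from the original post; the literal id is arbitrary):

```sql
SET enable_seqscan = on;   -- back to the default

-- Fetching a handful of rows, the planner should choose the
-- index on its own:
EXPLAIN SELECT *
FROM entries
JOIN stuff ON entries.id = stuff.id
WHERE entries.id = 12345;
```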

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Multiple customers sharing one database?

2007-06-03 Thread Ron Johnson

On 06/03/07 12:45, Lew wrote:
[snip]


I am always confused by the overloaded term database in such 
discussions. Do we mean the RDBMS engine, i.e., run separate instances 
of PG?  I sure would recommend against that.  Or do we mean it as the PG 
documentation does, e.g.,

http://www.postgresql.org/docs/8.1/interactive/ddl-schemas.html


A (relational) database is a database, and an RDBMS is what 
manipulates that (relational) database.


The schema defines the database.  A schema defines a specific 
logical sub-set of the database.


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




Re: [GENERAL] High-availability

2007-06-03 Thread Madison Kelly

Lew wrote:

Madison Kelly wrote:
  Being a quite small company, proprietary hardware and fancy software 
licenses are not possible (ie: 'use oracle' won't help).


How much data do you put in the DB?  Oracle has a free version, but it 
has size limits.


(Ducking the slings and arrows of outraged PG fans: I prefer Postgre, I 
really do.)




  Hrm, it's hard to say as we're (hoping!) to grow. At the moment, a 
few hundred megs. If the company gets off the ground, possibly much 
more. also, we've got a few (dozen or so) side projects that each have 
their own DBs.


  I think the risk of running into a barrier like a size limit would be 
too much. Even if we get off the ground, the storage needs of the DB 
will outgrow our revenue. I'd hate to be in a position where I am 
dependent on a (potentially) very expensive invoice while we are still 
running on a shoe-string.


  Thanks for the suggestion though! I will poke at the free/trial 
version and, if I am unable to load-balance pgSQL and we run into 
performance problems, I will have a better idea of what options I have 
(ie: bigger iron vs. an oracle license).


  Thanks!

Madi



Re: [GENERAL] monitor stats

2007-06-03 Thread DiegoF

Solved: the problem was a missing line, '::1 localhost', in /etc/hosts.

thanks

On 6/3/07, Diego Fernando Arce [EMAIL PROTECTED] wrote:


hello again

Yes, but I need it in real time, not in log files.

lynx /usr/share/doc/postgresql-8.1.8/html/monitoring-stats.html

SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
   pg_stat_get_backend_activity( s.backendid) AS current_query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;

in 8.1.4

aslecol=# SELECT pg_stat_get_backend_pid(s.backendid) AS
procpid,pg_stat_get_backend_activity(s.backendid ) AS current_query FROM
(SELECT pg_stat_get_backend_idset() AS backendid) AS s;
 procpid | current_query
---------+---------------
   25506 | <IDLE>
   25507 | select entidad,producto,num_doc,f_pago,to_char(sald_mora::numeric(12,2),'999,999,999.99'),fecha,sald_mora::numeric(12,2) from bcsc_facics where 1=1 and fecha_carga in (select max(fecha_carga) from bcsc_facics where codcli='1448998') and bcsc_facics.codcli='1448998' order by fecha_carga desc,entidad,producto,num_doc
   25524 | <IDLE>
    6447 | <IDLE>
     598 | <IDLE>

in 8.1.8

aslecol=# SELECT pg_stat_get_backend_pid(s.backendid) AS
procpid,pg_stat_get_backend_activity(s.backendid) AS current_query FROM
(SELECT pg_stat_get_backend_idset() AS backendid) AS s;
 procpid | current_query
-+---
(0 filas)




On 5/29/07, Shoaib Mir [EMAIL PROTECTED]  wrote:

 Make sure there are some queries being executed on the server, per
 pg_stat_activity. Make sure your logging level is set such that the
 queries get logged in the db server log file.

 In addition to log_statement you can also set log_min_duration_statement
 to 0 and that will also log queries in your log file.

 --
 Shoaib Mir
 EnterpriseDB (www.enterprisedb.com)

 On 5/29/07, Diego Fernando Arce  [EMAIL PROTECTED] wrote:
 
  show log_statement;
 
   log_statement
  ---
   all
  (1 fila)
 
 
  select pg_stat_get_backend_pid(s.backendid
  ),pg_stat_get_backend_activity(s.backendid) from (select
  pg_stat_get_backend_idset() as backendid)s;
 
   pg_stat_get_backend_pid | pg_stat_get_backend_activity
  -------------------------+------------------------------
  (0 rows)
  executing in another terminal other query
 
 
  On 5/30/07, Shoaib Mir [EMAIL PROTECTED]  wrote:
  
   Did you do a reload or restart of the server after doing this
   change?
  
   what do you see when you do this query? -- show log_statement
  
   --
   Shoaib Mir
   EnterpriseDB ( www.enterprisedb.com)
  
   On 5/29/07, Diego Fernando Arce [EMAIL PROTECTED]  wrote:
   
    this is a part of postgresql.conf
   
   
log_statement = 'all'   # none, mod, ddl, all
#log_hostname = off
   
   
   

#---
# RUNTIME STATISTICS
   

#---
   
# - Statistics Monitoring -
   
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
   
# - Query/Index Statistics Collector -
   
stats_start_collector = on
stats_command_string = on
#stats_block_level = off
#stats_row_level = off
#stats_reset_on_server_start = off
   
   
    in 8.1.6 and 8.1.4 it works
   
   
   
   
On 5/29/07, Shoaib Mir [EMAIL PROTECTED]  wrote:

 Did you try setting log_statement = 'all' in your postgresql.conf file?

 --
 Shoaib Mir
 EnterpriseDB (www.enterprisedb.com)

 On 5/29/07, Diego Fernando Arce [EMAIL PROTECTED] 
 wrote:
 
  hello, I have a question: stats monitoring does not work in
  postgres 8.1.8
 
   I cannot see queries in execution
 
  help me please..
 
  DiegoF
 


   
  
 




Re: [GENERAL] multimaster

2007-06-03 Thread Jeff Davis
On Mon, 2007-06-04 at 00:51 +0200, Alexander Staubo wrote:
 True, and when they overlap you tend to want to describe the
 validation errors in one place, not two -- either the database or the
 app, not both. Relational databases have traditionally argued that
 these rules should be in the former, so that there's one layer through
 which every single change has to go.
 

I disagree here. You often _do_ want to describe some types of errors
twice. You check the same thing in different ways at different points in
the code, and that isolates errors and assures developers that certain
assumptions are safe.

In the database world, it's particularly important to use declarative
constraints. If developer A inserts bad data and developer B uses that
bad data, it could compound the problem and yet remain invisible until
the problem is almost impossible to debug. Constraints assure the
developers that they are starting with some known state.
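As a concrete sketch of that point (the table and column names here are made up), a declarative constraint rejects the bad row at the moment developer A tries to insert it, long before developer B can build on it:

```sql
-- The constraint is stated once, declaratively, and enforced for
-- every code path that touches the table.
CREATE TABLE account (
    id      serial  PRIMARY KEY,
    owner   text    NOT NULL UNIQUE,
    balance numeric NOT NULL CHECK (balance >= 0)
);

INSERT INTO account (owner, balance) VALUES ('alice', 100);  -- ok
INSERT INTO account (owner, balance) VALUES ('bob', -5);
-- ERROR:  new row for relation "account" violates check constraint
--         "account_balance_check"
```

The bad data never reaches the table, so every later reader can safely assume balance >= 0.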

Applications should check for inconsistencies when it makes sense, as
well. Every important API that I can think of checks the input, and
reports some kind of useful error when the assumptions are violated.
Every system call has all sorts of useful error codes. For example,
read() can return EBADF. Nobody thinks "Hey, I'll send a random integer
for the file descriptor", but I'd be willing to bet that the error
condition has been reached by accident before, and probably saved
someone a lot of time versus just filling the target buffer with random
bytes and returning success.

I would argue it's more important in a database, because the error
conditions can persist for a longer period of time and cause more damage
in the process, but the idea is the same.

 I agree. In my experience, however, the best kind of data model is the
 one that is immediately mappable to user-level concepts -- to human
 concepts. A user relation has attributes like name, birth_date,
 etc. If you manage to keep the model flat and friendly enough, you can
 map the attributes to forms and translate attribute-level errors
 directly to form error messages.
 

I think you're oversimplifying. What you say works when user input is a
separate, contained, isolated chunk of data. In that case, any error is
only related to the self-consistency of the input, and can easily be
mapped back to a user-level error.

However, it breaks down when you have constraints on the
interrelationships between pieces of data. These interrelationships are
what can be broken from multiple points in the application code, and
there is no way to map backwards from the constraint violation to a
specific user error. Hence, the application must translate.

Try to imagine some of the complexities in a scheduling system, and what
kind of constraints that might involve. Then, think about how some of
the same constraints might be broken in very different ways. Time
conflicts could come about either by participants overscheduling
themselves, or by the event itself shifting in time such that some
participants are overscheduled. Perhaps someone tries to sign up for an
event that's already full, or perhaps the venue moves to a new location
with a lower capacity. I can't think of any way to map backwards from
the constraint violation to the user level error.

You could probably imagine similar problems with an inventory system.

  The application has much more information about the user and the context
  of the error that the database shouldn't have. For instance, the
  language that the user speaks might affect the error message.
 
 Localization is easily accomplished by piping the error message through 
 gettext.
 

And what about the two-column unique index that can be violated from two
different aspects of the UI? You only get one database error, but you
really should present two different errors to the user.

Any time that a constraint can be violated through two completely
different paths, your one-to-one constraint-to-application-error breaks
down. 

The root of the issue is that the database knows that an inconsistent
state has been reached; it does not know _how_, nor should it. The how
might change as new code is added or perhaps as new bugs are introduced.
Constraints in an RDBMS should be declarative, which is very important
(you don't need to prove that a column is always in a unique state; you
look, and the schema says it is unique). You can add procedural code to
an RDBMS, but you can end up turning your database into your application
that way.

User-level errors are heavily dependent on _how_ the error occurred. The
database doesn't know this, so the constraint violation message
shouldn't presume to know how it happened. 

I'll use the analogy to read() again: who passes EBADF back to the user
directly? Does that mean it's a bad API? No, it just means it had no
idea why you passed it a bad file descriptor, but it knows it's bad, and
it tells the caller. Similarly with exceptions in java/ruby/python: you
shouldn't pass those exceptions back to the user directly, either.

Re: [GENERAL] why postgresql over other RDBMS

2007-06-03 Thread Tom Lane
Scott Ribe [EMAIL PROTECTED] writes:
 So it works right now, except it doesn't have (yet) the infrastructure to
 keep the scans synchronized

 Perhaps you only got one read of the table because the process is
 essentially self-synchronizing.

Right.  Multiple seqscans that are anywhere near reading the same block
of a table will tend to self-synchronize.  There is a patch under
consideration for 8.3 that helps this along by making seqscans run
circularly --- that is, not always from block 0 to block N, but from
block M to N and then 0 to M-1, where the start point M can be chosen
by looking to see where any other concurrent seqscan is presently
reading.  Once you've got a reasonable start point, you don't have to do
anything else.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] debugging C functions

2007-06-03 Thread Islam Hegazy
Thanks for your replies, they were very helpful to me. Unfortunately, I 
can't trace the C function. PostgreSQL returns the results directly and the 
debugger doesn't stop at the breakpoints in the C function.


I think the problem is in the pointers. I use pointers in my function and I 
defined them as static to be preserved between calls, my function returns a 
set of records. When I comment the pointers portion, the function works 
well. But with the pointers, it hangs.


Any idea on how to deal with pointers issue?

Regards
Islam Hegazy

- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: Joe Conway [EMAIL PROTECTED]
Cc: Islam Hegazy [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Friday, June 01, 2007 11:38 PM
Subject: Re: [GENERAL] debugging C functions



Joe Conway [EMAIL PROTECTED] writes:

[ much good advice snipped, but I have to weigh in on one point ]



4. Start another console and determine the PID for the backend
session (this will wrap poorly -- I'll do my best to make it
readable)


select pg_backend_pid() is another alternative for finding the PID.

Personally I've gotten to the point where manually determining the
backend PID at all is tedious, and so I tend to use this script:

#!/bin/sh

# tee /dev/tty is for user to see the set of procs considered
PROCS=`ps auxww | \
    grep postgres: | \
    grep -v -e 'grep postgres:' -e 'postgres: stats' -e 'postgres: writer' -e 'postgres: archiver' -e 'postgres: logger' | \
    tee /dev/tty | \
    awk '{print $2}'`

if [ `echo $PROCS | wc -w` -eq 1 ]
then
   exec gdb $PGINSTROOT/bin/postgres -silent $PROCS
else
   exec gdb $PGINSTROOT/bin/postgres -silent
fi

This fails (but gives you a list of processes to consider attaching to)
if there's more than one candidate.

regards, tom lane 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Strange delimiters problem

2007-06-03 Thread Andrej Ricnik-Bay

On 6/3/07, Andrej Kastrin [EMAIL PROTECTED] wrote:


I am experiencing difficulty trying to copy the file structure below
into a PostgreSQL table:

||001||,||Classification||,||Economics||,||N||
||001||,||Classification||,||Trends||,||Y||
etc...

Although I can preprocess the input file with e.g. awk, I would like to
know whether it is possible to do this directly in PostgreSQL.

Thanks in advance for any suggestions.

I think you might be able to.  COPY's CSV mode lets you set the quote
character to your pipe (QUOTE [ AS ] 'quote' in the CSV options):

http://www.postgresql.org/docs/8.2/static/sql-copy.html
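One wrinkle: COPY's QUOTE is a single character, while the sample uses doubled pipes, so collapsing them first may be simplest. A sketch (the file and table names are made up):

```shell
# Recreate the sample input from the post (file name is made up)
cat > input.csv <<'EOF'
||001||,||Classification||,||Economics||,||N||
||001||,||Classification||,||Trends||,||Y||
EOF

# COPY's QUOTE is one character, so collapse || to | first
sed 's/||/|/g' input.csv > clean.csv
cat clean.csv

# Then, inside psql, load it with | as the quote character:
#   \copy mytable FROM 'clean.csv' WITH CSV QUOTE '|'
```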



Cheers,
Andrej

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] debugging C functions

2007-06-03 Thread Tom Lane
Islam Hegazy [EMAIL PROTECTED] writes:
 Thanks for your replies, they were very helpful to me. Unfortunately, I 
 can't trace the C function. PostgreSQL returns the results directly and the 
 debugger doesn't stop at the breakpoints in the C function.

Well, you need to deal with that last, because you will never get very
far if you can't debug your code.

My experience is that gdb needs help to recognize a shared library's
symbols.  It works for me to LOAD the shared library (from SQL) before
attaching to the backend with gdb.  If you can't do that, gdb's
sharedlibrary command might do it.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] debugging C functions

2007-06-03 Thread Islam Hegazy

I do the same but I use the ddd debugger
1) Load the shared library from the SQL
2) Open the .c file of my function
3) Place the break points
4) Execute the sql statement 'Select * from Myfn(...);'

The result is displayed and the debugger doesn't stop at the breakpoints.

Are there any steps missing?


Regards
Islam Hegazy


- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: Islam Hegazy [EMAIL PROTECTED]
Cc: Joe Conway [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Sunday, June 03, 2007 10:00 PM
Subject: Re: [GENERAL] debugging C functions



Islam Hegazy [EMAIL PROTECTED] writes:

Thanks for your replies, they were very helpful to me. Unfortunately, I
can't trace the C function. PostgreSQL returns the results directly and 
the

debugger doesn't stop at the breakpoints in the C function.


Well, you need to deal with that last, because you will never get very
far if you can't debug your code.

My experience is that gdb needs help to recognize a shared library's
symbols.  It works for me to LOAD the shared library (from SQL) before
attaching to the backend with gdb.  If you can't do that, gdb's
sharedlibrary command might do it.

regards, tom lane 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/