Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-29 Thread Peter

Richard Broersma Jr wrote:


It seems the OP's connection string was set to localhost.  Would this still 
indicate a name lookup problem?



Do you have some firewall running? Also, is there a localhost entry 
in your hosts file (e.g. /etc/hosts or C:/windows/system32/drivers/etc)?
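For reference, the loopback entry that should be present in the hosts file typically looks like this (a minimal sketch; distributions often add IPv6 and alias entries too):

```
127.0.0.1       localhost
```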


Peter


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-29 Thread Shane Ambler

Richard Broersma Jr wrote:

--- On Wed, 11/28/07, Richard Huxton [EMAIL PROTECTED] wrote:

Name lookups. Something is trying to look up a name,
failing and it's 
timing out after 60 seconds.


It seems the OP's connection string was set to localhost.  Would this still 
indicate a name lookup problem?



If there is no entry (or an incorrect one) in /etc/hosts for localhost, 
then yes. Try connecting to 127.0.0.1 and see if that makes a difference.


Also, if the system is set to do a DNS name lookup before referring to 
/etc/hosts, it can have similar problems.



--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz



Re: [GENERAL] Why upgrade?

2007-11-29 Thread Gregory Williamson
Chris --

You asked
 
 I'm currently using 7.4 and I'm trying to find out the value/advantage
 of upgrading to a more recent version, and to which version.
 

Eventually 7.4 will not be supported (there's discussion elsewhere on how to 
deal with 7.3), so it would pay to move off of it well before that time 
(admittedly a year or more off, at a guess).

As another poster pointed out, the release notes indicate improvements that 
have been made, some of which are not ported back to earlier releases. These 
include stability improvements, often speed improvements (depending on your 
usage) and new features or better integration of existing ones (autovacuum 
seems much better in 8.2 than 8.1, and 8.3 seems even better; the integration 
of tsearch2 into the core in 8.3; easier quoting in stored procedures, etc.).

The jump from 7.4.x to 8.x is proving a challenge for us since we have one set 
of databases still on 7.4.14 (whatever the latest and greatest point release 
is); they will benefit from the upgrade, but the old databases accepted some 
bad UTF characters and the newer postgreses (postgresii ?) are stricter about 
bad data and reject them, so there's some clean up involved. Other than that 
upgrades have been relatively painless (7.4 -> 8.1.x -> 8.2 now).

HTH,

Greg Williamson
Senior DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)


Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-29 Thread Richard Huxton

Richard Broersma Jr wrote:

--- On Wed, 11/28/07, Richard Huxton [EMAIL PROTECTED] wrote:

Name lookups. Something is trying to look up a name, failing and
it's timing out after 60 seconds.


It seems the OP's connection string was set to localhost.  Would this
still indicate a name lookup problem?


That would make it unlikely.

The only other option that occurs to me would be some firewall type of 
package that's taking a long time to allow a connection. It would be an 
odd bit of security software that timed out and then *allowed* the 
connection to proceed.


Hmm - it looks like the wireshark network sniffer is available for 
Windows (http://www.wireshark.org/) - I'd be tempted to install that and 
connect to an external IP and see what packets go where. That will save 
a lot of head-scratching.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] hibernate + postgresql ?

2007-11-29 Thread Oleg Bartunov

Hello,

any experience with Hibernate + PostgreSQL? How flexible and smart is
this combination? Is it possible to tune selected queries?

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



[GENERAL] test message

2007-11-29 Thread Pedro Doria Meunier
Sorry People,

this is a test message, as it seems mail I'm sending to this list isn't
going through. I just want to see if it bounces back to me...

My apologies.

--
Pedro Doria Meunier 
Ips da Olaria
Edf. Jardins do Garajau, 4 r/c Y
9125-163 Caniço
Madeira
Portugal
GSM: +351 96 17 20 188 Skype: pdoriam
http://www.madeiragps.com




[GENERAL] viewing definition of CREATE TYPE name AS ENUM...

2007-11-29 Thread Richard Broersma Jr
After a type is created, is it possible to view the definition of this type?
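Assuming the 8.3 catalogs (CREATE TYPE ... AS ENUM is new in 8.3), one possible way is to query pg_type together with pg_enum; 'mood' below is a placeholder type name, and this is an untested sketch, not a confirmed recipe:

```sql
-- List the labels of an enum type, in declaration order (8.3 catalogs:
-- pg_enum rows are created in label order, so ordering by oid recovers it).
SELECT t.typname, e.enumlabel
FROM pg_enum e
JOIN pg_type t ON t.oid = e.enumtypid
WHERE t.typname = 'mood'   -- placeholder type name
ORDER BY e.oid;
```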



[GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff

Hi, all.

I was wondering: can I really declare a 2-dimensional array of arbitrary  
size in pl/pgsql?
According to the docs, it seems the only way would be to declare it as  
something like:

myArray := ARRAY[[1,2], [3,4], [5,6]];

But what if I pass the dimensions as function parameters?

My postgresql version is 8.1.

Thanks.



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-29 Thread Magnus Hagander
On Wed, Nov 28, 2007 at 10:33:08AM -0800, Trevor Talbot wrote:
 On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:
  Trevor Talbot wrote:
   On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:
 
   There is at least one other bottleneck, probably more than one. Context
   switching between processes is a lot more expensive than on Unix (given
   that win32 is optimized towards context switching between threads). NTFS
   isn't optimized for having 100+ processes reading and writing to the
   same file. Probably others..
 
   I'd be interested to know what this info is based on.  The only
   fundamental difference between a process and a thread context switch
   is VM mapping (extra TLB flush, possible pagetable mapping tweaks).
 
  Generally, lots of references I've seen around the net and elsewhere. If
  I'm not mistaken, the use of threads over processes was listed as one of
  the main reasons why SQL Server got such good performance on Windows
  compared to its competitors. But I don't have my Inside SQL Server
  around to check for an actual reference.
 
 Well, yes, in general using multiple threads instead of multiple
 processes is going to be a gain on any common OS for several reasons,
 but context switching is a very minor part of that. Threads let you
 share state much more efficiently than processes do, and in complex
 servers of this type there tends to be a lot to be shared.
 
 SQL Server is somewhat unique in that it doesn't simply throw threads
 at the problem; it has a small pool and uses its own internal task
 scheduler for actual SQL work. There's no OS thread per user or
 anything. Think continuations or pure userspace threading. That design
 also lets it reduce context switches in general.

There are actually two different ways to run SQL Server. Either it runs
with operating system threadpools (the same way that we deal with backend
exits in 8.3), which is IIRC the default. Or it runs with Fibers which are
also an OS feature, but they're scheduled by the application. 


   I mean, I can understand NT having bottlenecks in various areas
   compared to Unix, but this threads are specially optimized thing is
   seeming a bit overblown.  Just how often do you see threads from a
   single process get contiguous access to the CPU?
 
  On a CPU loaded SQL server, fairly often I'd say. But certainly not always.
 
 I meant as a design point for a general-purpose OS. If you consider
 how Windows does GUIs, ignoring the expense of process context
 switching would be fatal, since it forces so much app involvement in
 window painting. Having a system dedicated to a single process with
 multiple threads running full-bore is not particularly common in this
 sense.

Ok, then I understand what you're saying :-) 

//Magnus



[Re] Re: [Re] Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061while dumping a big database

2007-11-29 Thread Cyril VELTER
From: [EMAIL PROTECTED]
 Cyril VELTER wrote:
  Is length() supposed to return the very high length in case of corruption?
 
 You'd have thought it would. The odd thing (if it is data corruption) is 
 that you would expect to see something in the server logs about a 
 failure to allocate 12345412234124 bytes of memory or some such. Whereas 
 all you get is this winsock error.

I have another theory. The message printed by pg_dump:

pg_dump: Error message from server: out of memory

is printed in dumpTableData_copy (pg_dump.c): write_msg(NULL, "Error message
from server: %s", PQerrorMessage(g_conn));

There are several places in libpq where the conn error might be set to "out
of memory". I've also discovered that the machine running pg_dump is pretty
tight on RAM (256MB) and that no swap file is active (an oversight after a disk
upgrade). Maybe this error is simply pg_dump running out of memory, and not the
server. This would also explain why the server only reports a socket error
(though I would have expected WSAECONNRESET (10054) instead of WSAECONNREFUSED
(10061)).


The 8.0.13 pg_dump used to work just fine on the exact same machine. Maybe
the 8.2 pg_dump needs more memory (or maybe there's a memory leak?).

I'm running the dump again after adding some swap space and will monitor
memory usage.

What do you think ?

  Is there anything else I can do?
 
 Could you try CREATE TABLE some_table AS SELECT * FROM c2674 WHERE ... to
 copy the large rows within the database. If that fails, the table is
 corrupted, but you can identify the problem rows and work around them
 while you dump the data.


I will try that this weekend if my theory proves wrong (I need to make
disk space available on the server for that; the table is 60GB).
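If it comes to that, the suggested copy could be run in slices to isolate the bad rows. A hypothetical sketch; the real column names aren't shown in the thread, so "id" is only an assumed key column:

```sql
-- Hypothetical sketch: copy rows in slices to isolate corrupted ones.
-- "id" is an assumed key column; narrow the failing range step by step.
CREATE TABLE c2674_check AS
SELECT * FROM c2674
WHERE id BETWEEN 1 AND 100000;
-- If this fails, halve the range and repeat until the bad rows are found.
```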

Thanks,

Cyril




Re: [Re] Re: [Re] Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061while dumping a big database

2007-11-29 Thread Richard Huxton

Cyril VELTER wrote:

From: [EMAIL PROTECTED]

Cyril VELTER wrote:
	Is length() supposed to return the very high length in case of corruption?
You'd have thought it would. The odd thing (if it is data corruption) is 
that you would expect to see something in the server logs about a 
failure to allocate 12345412234124 bytes of memory or some such. Whereas 
all you get is this winsock error.


I have another theory. The message printed by pg_dump :

pg_dump: Error message from server: out of memory

	is printed in dumpTableData_copy (pg_dump.c): write_msg(NULL, "Error message
from server: %s", PQerrorMessage(g_conn));


	There are several places in libpq where the conn error might be set to "out
of memory". I've also discovered that the machine running pg_dump is pretty
tight on RAM (256MB) and that no swap file is active (an oversight after a disk
upgrade). Maybe this error is simply pg_dump running out of memory, and not the
server. This would also explain why the server only reports a socket error
(though I would have expected WSAECONNRESET (10054) instead of WSAECONNREFUSED
(10061)).


Aha - that sounds likely. If you're dumping multi-megabyte rows I can 
see how you'd get into problems on a client with a small amount of RAM.


	The 8.0.13 pg_dump used to work just fine on the exact same machine. Maybe
the 8.2 pg_dump needs more memory (or maybe there's a memory leak?).


Might just be slightly higher RAM usage in your particular case. It 
could be there were some trade-offs between size and speed.


	I'm running the dump again after adding some swap space and will monitor
memory usage.


What do you think ?


I think you've found the problem. If you're short of RAM though you 
might also have difficulty restoring the dump.


You could run pg_dump on the Windows server and copy its output to the 
RAM-limited Linux box. You could even run pg_restore from the Windows 
box - if you don't have a direct channel to the database you can use the 
PuTTY SSH client to create a tunnelled connection to the Linux box.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Killing a session on windows

2007-11-29 Thread Howard Cole


I have a database I want to drop on a Windows server. Unfortunately I 
cannot restart Postgres because it is running several live databases.


To kill the offending session, I tried select * from pg_stat_activity to 
find the PID of the session, and then tried to kill it from the command line:


taskkill /f /pid 1234

This appeared to kill the session, but postgres still thinks the session 
is live with the same process id.


Using the SysInternals process explorer - there doesn't appear to be a 
process with the given ID.


How can I get postgres to drop this session?

Thanks

Howard Cole
www.selestial.com



Re: [GENERAL] Killing a session on windows

2007-11-29 Thread Richard Huxton

Howard Cole wrote:


I have a database I want to drop on a Windows server. Unfortunately I 
cannot restart Postgres because it is running several live databases.


To kill the offending session, I tried select * from pg_stat_activity to 
find the PID of the session, and then tried to kill it with command line:


taskkill /f /pid 1234

This appeared to kill the session, but postgres still thinks the session 
is live with the same process id.


Hmm - some signalling glitch presumably. Was it not possible to identify 
the client and disconnect that?


Using the SysInternals process explorer - there doesn't appear to be a 
process with the given ID.


How can I get postgres to drop this session?


You could try pg_cancel_backend() - not sure what happens if it can't 
find the process though.


http://www.postgresql.org/docs/8.2/static/functions-admin.html
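A sketch of how that might look on 8.2 (procpid and current_query are the 8.2 column names; 1234 is a placeholder PID):

```sql
-- Find the backend's PID as PostgreSQL itself sees it (8.2 column names).
SELECT procpid, datname, usename, current_query
FROM pg_stat_activity;

-- Ask that backend to cancel its current query.
SELECT pg_cancel_backend(1234);   -- placeholder PID
```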

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Recheck condition

2007-11-29 Thread Martijn van Oosterhout
Please always CC the list so other people can respond.

On Wed, Nov 28, 2007 at 10:21:39PM -0500, Josh Harrison wrote:
  It isn't the recheck that's costing it, it's probably just that you're
  matching a lot of rows. A bitmap scan classically needs a recheck
  because if a lot of rows need to be stored it might remember only
  blocks 2044-2060. It then needs to recheck each row as it comes through
  to make sure it really matches the conditions.
 
 What is this number 2044-2060? Is this a fixed number in postgres?

Of course not. Have you read the documentation on EXPLAIN yet?
http://www.postgresql.org/docs/8.2/static/using-explain.html

The point is that the bitmap may have an inexact representation of the
tuples that match. If your scan indicates you'll match 10 million
entries and you only want to use 16KB for your bitmap, obviously you
can't store all the locations exactly.

 For example if I have a table Person with 3 fields (name,city_id,age). And
 the table contains 1000 rows. The table has 2 indexes city_id and age
 If I have a query :
 SELECT * FROM PERSON WHERE city_id=5 AND AGE=30

The answer is: it depends. Postgres has a cost-based planner; it will
estimate the costs of each different way of getting the result and use
the cheapest. The important factor is how many rows each
condition will match.

Given your table is only 8MB, the system may decide that it's all in
memory and just do a scan.

Or it may see that city_id is almost unique and use that index and
check the matches for the second condition. Or vice versa.

Or maybe it will scan both indexes, calculate the intersection, and then
look up the matches in the heap (with a recheck).

 In other words, will this query cause 1000 random heap accesses or 10 random
 heap accesses?

I don't know, run it and see.
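For example, using the hypothetical person table from the earlier message (table and column names are the asker's example, not a real schema), something like this shows both the chosen plan and the actual row counts:

```sql
-- Show the plan the cost-based planner actually picks, with real timings
-- and row counts, for the example query discussed above.
EXPLAIN ANALYZE
SELECT * FROM person WHERE city_id = 5 AND age = 30;
```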

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] Killing a session on windows

2007-11-29 Thread Bill Bartlett
Use select pg_cancel_backend(pid) instead -- we have to do this periodically
when queries get timed out by the web server but Postgres doesn't notice /
doesn't get notified...

- Bill

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Howard Cole
Sent: Thursday, November 29, 2007 5:55 AM
To: 'PgSql General'
Subject: [GENERAL] Killing a session on windows


I have a database I want to drop on a Windows server. Unfortunately I 
cannot restart Postgres because it is running several live databases.

To kill the offending session, I tried select * from pg_stat_activity to 
find the PID of the session, and then tried to kill it with command line:

taskkill /f /pid 1234

This appeared to kill the session, but postgres still thinks the session 
is live with the same process id.

Using the SysInternals process explorer - there doesn't appear to be a 
process with the given ID.

How can I get postgres to drop this session?

Thanks

Howard Cole
www.selestial.com



[GENERAL] 1 cluster on several servers

2007-11-29 Thread Willy-Bas Loos
Hi,

Is it possible to run one PostgreSQL cluster on more than one (hardware)
server?

WBL


Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Shane Ambler

Willy-Bas Loos wrote:

Hi,

Is it possible to run one PostgreSQL cluster on more than one (hardware)
server?

WBL


You would be looking for replication.
Start with
http://www.postgresql.org/docs/8.2/interactive/high-availability.html
to get some idea of what is available for what you wish to achieve.
Some of the projects that add these features are mentioned there.




--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz



Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Usama Dar
I have experience with Hibernate and PostgreSQL; what information exactly are
you looking for?

On Nov 29, 2007 1:52 PM, Oleg Bartunov [EMAIL PROTECTED] wrote:

 Hello,

 any experience with Hibernate + PostgreSQL? How flexible and smart is
 this combination? Is it possible to tune selected queries?

Regards,
Oleg
 _
 Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
 Sternberg Astronomical Institute, Moscow University, Russia
 Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
 phone: +007(495)939-16-83, +007(495)939-23-83




Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Edoardo Panfili

Oleg Bartunov ha scritto:

Hello,

any experience with Hibernate + PostgreSQL? How flexible and smart is
this combination? Is it possible to tune selected queries?


You can also evaluate JPA; openJPA (http://openjpa.apache.org/) seems
good, but so far I have only run simple queries with it.

Edoardo




--
Jabber: [EMAIL PROTECTED]
tel: 075 9142766



Re: [GENERAL] Recheck condition

2007-11-29 Thread Josh Harrison

  For example if I have a table Person with 3 fields (name,city_id,age).
 And
  the table contains 1000 rows. The table has 2 indexes city_id and age
  If I have a query :
  SELECT * FROM PERSON WHERE city_id=5 AND AGE=30

 The answer is: it depends. Postgres has a cost-based planner; it will
 estimate the costs of each different way of getting the result and use
 the cheapest. The important factor is how many rows each
 condition will match.

 Given your table is only 8MB, the system may decide that it's all in
 memory and just do a scan.

 Or it may see that city_id is almost unique and use that index and
 check the matches for the second condition. Or vice versa.

 Or maybe it will scan both indexes, calculate the intersection, and then
 look up the matches in the heap (with a recheck).


Okay. So if I have a query like the above and the query plan shows a
'recheck condition' and a bitmap scan, does that mean it scans the
indexes first to get the intermediate results and goes to the heap only for the
final data?


 Thanks

jo


Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Willy-Bas Loos
I'll take that as a no.
What I mean is to actually run exactly one cluster (no replicated copy) on
more than one server. Of course, if that were possible, why would people
bother with replication..

I guess it is irrational to suggest that it would be possible, since each
server would at least need to have its own copy of the DBMS software etc.,
or it would cease to be a separate server.

Maybe Data Partitioning, as in the documentation link Shane sent, possibly
combined with Slony for the other data per server, would be an option for
me.

Is there an implementation for this in PostgreSQL? It would have to be
something like pgPool (middleware), because: how would the query know on
which server to put its data? And I guess I would need some Kerberos-like
implementation for my authentication and authorization...

cheers,

WBL


On Nov 29, 2007 1:23 PM, Shane Ambler [EMAIL PROTECTED] wrote:

 Willy-Bas Loos wrote:
  Hi,
 
  Is it possible to run one PostgreSQL cluster on more than one (hardware)
  server?
 
  WBL
 
 You would be looking for replication.
 Start with
 http://www.postgresql.org/docs/8.2/interactive/high-availability.html
 to get some idea on what is available for what you wish to achieve.
 Some of the projects that add these features are mentioned.




 --

 Shane Ambler
 [EMAIL PROTECTED]

 Get Sheeky @ http://Sheeky.Biz



Re: [GENERAL] Recheck condition

2007-11-29 Thread Alvaro Herrera
Josh Harrison wrote:
 
   For example if I have a table Person with 3 fields (name,city_id,age).
  And
   the table contains 1000 rows. The table has 2 indexes city_id and age
   If I have a query :
   SELECT * FROM PERSON WHERE city_id=5 AND AGE=30

 Okay. So if I have a query like the above and the query plan shows a
 'recheck condition' and a bitmap scan, does that mean it scans the
 indexes first to get the intermediate results and goes to the heap only for the
 final data?

Yes.

If the table actually contains 1000 rows, the most likely outcome is
that the bitmaps would not be lossy, and therefore no rechecking is
needed at all.  (Tuple bitmaps become lossy only if they have to store a
lot of tuples, in which case they stop storing each individual tuple
and instead compress the representation to store only the page
numbers where matching tuples are to be found.)

Note, however, that even if the bitmaps are not lossy, the visit to the
heap is still required, because of the need to check tuple visibility.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
Industry suffers from the managerial dogma that for the sake of stability
and continuity, the company should be independent of the competence of
individual employees.  (E. Dijkstra)



Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Richard Huxton

Willy-Bas Loos wrote:


Is there an implementation for this in PostgreSQL? It would have to be
something like pgPool (middleware), because: how would the query know on
which server to put its data? And I guess I would need some Kerberos-like
implementation for my authentication and authorization...


Try explaining what it is you're trying to do, and maybe someone can 
suggest something. The people at Skype have their DB partitioned over 
more than one machine, for example.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Slony-I creation scripts not available

2007-11-29 Thread Glyn Astill
From within pgAdminIII I get this message ("Slony-I creation scripts not
available") in the status bar when I go to set up Slony-I replication.

I have the scripts in pgdir /shared after installing from source,
should they be somewhere else, or should I have some environment
variables set?




Glyn Astill







Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Rodrigo De León
I wrote:
 You can declare arbitrary-sized, n-dimensional arrays:

Sorry, I re-read your post.

You want to programmatically define the array dimensions depending on
function arguments.

You could try building a string, then casting to the correct array
type (not tested).
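An untested sketch of that idea: build an array literal of the requested dimensions as text, then let it be converted to the array type. The function and variable names below are made up for illustration, and the final conversion relies on plpgsql's text-based coercion of the return value:

```sql
-- Untested sketch: build a literal like '{{0,0,...},{0,0,...},...}' as text,
-- sized from the function arguments, then return it as int[].
CREATE OR REPLACE FUNCTION make_2d_array(nrows int, ncols int)
RETURNS int[] AS $$
DECLARE
  row_literal text;
  array_literal text;
BEGIN
  -- one row of zeros: '{0,0,...,0}'
  row_literal := '{' || array_to_string(
      ARRAY(SELECT 0 FROM generate_series(1, ncols)), ',') || '}';
  -- nrows copies of that row: '{{0,...},{0,...},...}'
  array_literal := '{' || array_to_string(
      ARRAY(SELECT row_literal FROM generate_series(1, nrows)), ',') || '}';
  -- plpgsql converts the text to int[] via the array input function
  RETURN array_literal;
END
$$ LANGUAGE plpgsql;
```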



Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 18:11:22 +0400, Rodrigo De León [EMAIL PROTECTED]  
wrote:



On Nov 29, 2007 3:34 AM, Max Zorloff [EMAIL PROTECTED] wrote:

According to the docs, it seems the only way would be to declare it as
something like:
myArray := ARRAY[[1,2], [3,4], [5,6]];


You can declare arbitrary-sized, n-dimensional arrays:
...
DECLARE
  myArray integer[][]; -- two-dimensional integer array
BEGIN
...
END;
...

See:
http://www.postgresql.org/docs/8.1/static/arrays.html


I can. But unfortunately :

create or replace function testfunc()
returns setof record as $$
DECLARE
  myArray int[][];
BEGIN
  FOR i IN 1..10 LOOP
FOR j IN 1..10 LOOP
  RAISE NOTICE '% %', i, j;
  myArray[i][j] := 1;

END LOOP;
  END LOOP;

  RETURN;
END
$$ language plpgsql;


ponline=# select testfunc();
NOTICE:  1 1
NOTICE:  1 2
ERROR:  invalid array subscripts
CONTEXT:  PL/pgSQL function testfunc line 7 at assignment

Two-dimensional arrays do not grow like one-dimensional arrays do (it says
so in the docs). The initial array is 1x1 in size. I suppose I'm stuck with
emulating 2-dimensional arrays through 1-dimensional arrays, because I also
need them to grow later.
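A minimal sketch of that workaround, assuming a fixed row width; this is untested against 8.1 and the names are made up. One-dimensional arrays can be extended one adjacent element at a time, so filling indexes sequentially keeps the array growing:

```sql
-- Sketch: emulate a 2-D array with a 1-D array and index arithmetic.
-- Element (i, j) lives at index (i - 1) * cols + j.
CREATE OR REPLACE FUNCTION testfunc2() RETURNS void AS $$
DECLARE
  myArray int[] := '{}';   -- start empty so element assignment can extend it
  cols int := 10;          -- fixed row width
BEGIN
  FOR i IN 1..10 LOOP
    FOR j IN 1..cols LOOP
      -- indexes are assigned sequentially (1, 2, 3, ...), so the array grows
      myArray[(i - 1) * cols + j] := 1;
    END LOOP;
  END LOOP;
END
$$ LANGUAGE plpgsql;
```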



Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Rodrigo De León
On Nov 29, 2007 3:34 AM, Max Zorloff [EMAIL PROTECTED] wrote:
 According to the docs, it seems the only way would be to declare it as
 something like :
 myArray := ARRAY[[1,2], [3,4], [5,6]];

You can declare arbitrary-sized, n-dimensional arrays:
...
DECLARE
  myArray integer[][]; -- two-dimensional integer array
BEGIN
...
END;
...

See:
http://www.postgresql.org/docs/8.1/static/arrays.html



Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Geoffrey

Willy-Bas Loos wrote:

I'll take that as a no.
What I mean is to actually run exactly one cluster (no replicated copy) on
more than one server. Of course, if that were possible, why would people
bother with replication..

I guess it is irrational to suggest that it would be possible, since each
server would at least need to have its own copy of the DBMS software etc.,
or it would cease to be a separate server.


I think you need to better identify what you're trying to do.  I can 
think of a couple of different solutions based on the limited info 
provided.  You've already said you don't want replication.


We have a scenario where we have a data silo that is shared between two 
servers, so the data exists in one place. To make things simple: if one 
server fails, the postmasters running on that server are started on the 
other server. This is a hot/hot failover implementation, as we have 
multiple postmasters running.


You could implement a hot/warm failover solution if you're running a 
single postmaster.


Finally, you might be thinking of something like a Beowulf cluster, where 
multiple machines function as a single machine. I can't help you with 
that scenario, as I don't have any experience with it and PostgreSQL.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Slony-I creation scripts not available

2007-11-29 Thread Hiroshi Saito

Hi.

From: Glyn Astill [EMAIL PROTECTED]



From within pgAdminIII I get this message in the status bar when I go
to setup slony-I replication.

I have the scripts in pgdir /shared after installing from source,
should they be somewhere else, or should I have some environment
variables set?


I can't tell at what stage you encountered the problem. Probably the 
installer put the scripts in the right place. For the initial setup, 
writing a slonik script is preferable: pgAdminIII's GUI exposes a lot of 
detail and will be difficult to use without a thorough knowledge of Slony. 
Moreover, if you use slonik scripts, the Slony mailing list 
([EMAIL PROTECTED]) will be of more help.


Regards,
Hiroshi Saito




[GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Pau Marc Munoz Torres
Hi everybody

 I'm doing a two-table query as follows:

mhc2db=> select t1.sp, t1.pos, t2.p1, t2.p4, t2.p6, t2.p7, t2.p9 from local
as t1, precalc as t2 where t1.ce='ACIAD' and t2.idr(p1, p4, p6, p7, p9,
'HLA-DRB5*0101') > 2;

and I get the following error:

ERROR:  schema t2 does not exist

but those tables exist!! They are as follows:

mhc2db=> \d precalc   (t2)

                          Table "public.precalc"
 Column |     Type     |                      Modifiers
--------+--------------+------------------------------------------------------
 id     | integer      | not null default nextval('precalc_id_seq'::regclass)
 p1     | character(1) |
 p4     | character(1) |
 p6     | character(1) |
 p7     | character(1) |
 p9     | character(1) |
Indexes:
h2iab btree (idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying))
h2iad btree (idr(p1, p4, p6, p7, p9, 'H-2*IAd'::character varying))
h2iak btree (idr(p1, p4, p6, p7, p9, 'H-2*IAk'::character varying))
h2ied btree (idr(p1, p4, p6, p7, p9, 'H-2*IEd'::character varying))
hladqa10501 btree (idr(p1, p4, p6, p7, p9, 'HLA-DQA1*0501'::character
varying))
hladr btree (idr(p1, p4, p6, p7, p9, 'HLA-DR'::character varying))
hladr1 btree (idr(p1, p4, p6, p7, p9, 'HLA-DR1'::character varying))
hladr13 btree (idr(p1, p4, p6, p7, p9, 'HLA-DR13'::character varying))
hladr3 btree (idr(p1, p4, p6, p7, p9, 'HLA-DR3'::character varying))
hladr4 btree (idr(p1, p4, p6, p7, p9, 'HLA-DR4'::character varying))
hladr7 btree (idr(p1, p4, p6, p7, p9, 'HLA-DR7'::character varying))
hladrb10101 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101'::character
varying))
hladrb10102 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0102'::character
varying))
hladrb10301 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0301'::character
varying))
hladrb10401 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0401'::character
varying))
hladrb10402 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0402'::character
varying))
hladrb10404 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0404'::character
varying))
hladrb10701 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0701'::character
varying))
hladrb10802 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0802'::character
varying))
hladrb10901 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0901'::character
varying))
hladrb11101 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1101'::character
varying))
hladrb11102 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1102'::character
varying))
hladrb11104 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1104'::character
varying))
hladrb11301 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1301'::character
varying))
hladrb11302 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1302'::character
varying))
hladrb11501 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1501'::character
varying))
hladrb40101 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB4*0101'::character
varying))
hladrb50101 btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101'::character
varying))
iid btree (id)

mhc2db=> \d local (t1)
   Table public.local
 Column | Type | Modifiers
+--+---
 ce | character varying(6) |
 sp | character varying(6) |
 pos| integer  |
 id | integer  |



Someone knows what is the problem?

Thanks

Pau
-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Richard Huxton

Pau Marc Munoz Torres wrote:

Hi everybody

 I'm doing a two table query as follow

mhc2db=> select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local
as t1, precalc as t2 where t1.ce='ACIAD' and t2.idr(p1, p4, p6, p7, p9,
'HLA-DRB5*0101')>2;

and i get the following error

ERROR:  schema "t2" does not exist

but  those tables exists!! and are as follow!!


You have t2.idr(...) which is being read as a function "idr" in schema "t2".

Did you mean idr(t2.p1, t2.p4, ...)?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Slony-I creation scripts not available

2007-11-29 Thread Glyn Astill
Sorry Hiroshi, I failed to understand exactly what you're saying
there.

So far I've installed Slony as per the docs, all seems to be well and
the installer has put the replication *.sql files in pgsql/shared.

I haven't started the slon daemons yet.

I thought once slony was set up you could then use the scripts from
pgAdminIII to setup slony before starting the slon daemons.

I have the scripts, but pgAdmin is telling me it can't find them.

--- Hiroshi Saito [EMAIL PROTECTED] wrote:

 Hi.
 
 From: Glyn Astill [EMAIL PROTECTED]
 
 
  From within pgAdminIII I get this message in the status bar when
 I go
  to setup slony-I replication.
  
  I have the scripts in pgdir /shared after installing from source,
  should they be somewhere else, or should I have some environment
  variables set?
 
 I can't judge in what stage you encountered the problem. Probably ,
 you 
 can grasp the right place by the installer. and as for an early
 setting script, 
 creating to slonik is desirable. Then, the detailed information
 developed 
 in large quantities GUI of pgAdminIII will be difficult to set up
 if it does 
 not have full knowledge of slony. Moreover, if it is the script of
 slonik, 
 Slony-ML([EMAIL PROTECTED]) will be help more.
 
 Regards,
 Hiroshi Saito
 
 
 ---(end of
 broadcast)---
 TIP 5: don't forget to increase your free space map settings
 



Glyn Astill



  ___
Yahoo! Answers - Got a question? Someone out there knows the answer. Try it
now.
http://uk.answers.yahoo.com/ 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread A. Kretschmer
On Thu, 29.11.2007, at 15:48:45 +0100, Pau Marc Munoz Torres wrote the
following:
 Hi everybody
 
  I'm doing a two table query as follow
 
 mhc2db=> select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local as
 t1, precalc as t2 where t1.ce='ACIAD' and t2.idr(p1, p4, p6, p7, p9,
 'HLA-DRB5*0101')>2;
 
 and i get the following error
 
 ERROR:  schema "t2" does not exist
 
 but  those tables exists!! and are as follow!!


The error message refers to the function call t2.idr(...); this syntax is wrong.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Pau Marc Munoz Torres
I tested it and now the error is

mhc2db=> select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local
as t1, precalc as t2 where t1.ce='ACIAD' and idr(t2.p1, t2.p4, t2.p6, t2.p7,
t2.p9, 'HLA-DRB5*0101')>2;
ERROR:  relation "pssms" does not exist
CONTEXT:  SQL statement "SELECT  score from PSSMS where AA= $1  and POS=1
and MOLEC= $2 "
PL/pgSQL function "idr" line 11 at select into variables

PSSMS was a temporary table that I used to calculate the index in the precalc
table with the idr function; should I rebuild it? The problem is that if I can't
use idr as an index it will be too slow.

pau


2007/11/29, Richard Huxton  [EMAIL PROTECTED]:

 Pau Marc Munoz Torres wrote:
  Hi everybody
 
   I'm doing a two table query as follow
 
  mhc2db=> select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from
 local
  as t1, precalc as t2 where t1.ce='ACIAD' and t2.idr(p1, p4, p6, p7, p9,
  'HLA-DRB5*0101')>2;
 
  and i get the following error
 
  ERROR:  schema t2 does not exist
 
  but  those tables exists!! and are as follow!!

 You have t2.idr(...) which is being read as a function idr in schema
 t2.

 Did you mean idr(t2.p1, t2.p4, ...)?


 --
Richard Huxton
Archonet Ltd




-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Re: [GENERAL] Slony-I creation scripts not available

2007-11-29 Thread Hiroshi Saito

Hi.

From: Glyn Astill [EMAIL PROTECTED]



Sorry Hiroshi, I failed to understand exactly what you're saying
there.


Ahh, Sorry...



So far I've installed Slony as per the docs, all seems to be well and
the installer has put the replication *.sql files in pgsql/shared.

I haven't started the slon daemons yet.

I thought once slony was set up you could then use the scripts from
pgAdminIII to setup slony before starting the slon daemons.

I have the scripts, but pgAdmin is telling me it can't find them.


Here is some material which may help you: 
http://developer.pgadmin.org/~hiroshi/Slony-I/
In any case, I recommend slonik for the first setup. 


Regards,
Hiroshi Saito



--- Hiroshi Saito [EMAIL PROTECTED] wrote:


Hi.

From: Glyn Astill [EMAIL PROTECTED]


 From within pgAdminIII I get this message in the status bar when
I go
 to setup slony-I replication.
 
 I have the scripts in pgdir /shared after installing from source,

 should they be somewhere else, or should I have some environment
 variables set?

I can't judge in what stage you encountered the problem. Probably ,
you 
can grasp the right place by the installer. and as for an early
setting script, 
creating to slonik is desirable. Then, the detailed information
developed 
in large quantities GUI of pgAdminIII will be difficult to set up
if it does 
not have full knowledge of slony. Moreover, if it is the script of
slonik, 
Slony-ML([EMAIL PROTECTED]) will be help more.


Regards,
Hiroshi Saito


---(end of
broadcast)---
TIP 5: don't forget to increase your free space map settings





Glyn Astill






---(end of broadcast)---
TIP 6: explain analyze is your friend




[GENERAL] notify/listen disappearing data

2007-11-29 Thread Sim Zacks

select version()
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc (GCC) 
3.3.5  (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)

Today I added 2 new fields to a single row table and populated them with values.
I noticed that the values disappeared after a while and could not figure out 
why.
After they disappeared a couple more times I started investigating.


I have a daemon that runs on my database server using Listen. It calls a 
database function when it receives the NOTIFY.
One of the things that this function does is it updates a field in the 
above-mentioned table using the command:
Update tablename set fieldname=now();

If I run the function straight from psql or pgadmin, it works fine.
When I called NOTIFY the values in the new fields disappeared again.
I tested this a number of different ways.
Finally when I stopped the daemon and restarted it, the values stopped 
disappearing when NOTIFY was called.
After it restarted I changed the value in one of the fields and called NOTIFY 
and the value remained changed.

I then added another field to the table and gave it a value and ran NOTIFY and 
it removed the value.

It can't be an issue of transactions, because fields that existed before the 
daemon was started are not reverted to any prior state.
The table itself isn't referenced in the code here, it is only called in a 
function.

If anyone can think of an explanation, I would be happy to hear it.

Below is the Daemon program. It is pretty much copied from the example, with 
very minor modification.


/*
 * testlibpq2.c
 *  Test of the asynchronous notification interface
 *
 * Start this program, then from psql in another window do
 *   NOTIFY TBL2;
 * Repeat four times to get this program to exit.
 *
 * Or, if you want to get fancy, try this:
 * populate a database with the following commands
 * (provided in src/test/examples/testlibpq2.sql):
 *
 *   CREATE TABLE TBL1 (i int4);
 *
 *   CREATE TABLE TBL2 (i int4);
 *
 *   CREATE RULE r1 AS ON INSERT TO TBL1 DO
 * (INSERT INTO TBL2 VALUES (new.i); NOTIFY TBL2);
 *
 * and do this four times:
 *
 *   INSERT INTO TBL1 VALUES (10);
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/time.h>
#include <unistd.h>
#include "libpq-fe.h"

static void
exit_nicely(PGconn *conn)
{
    PQfinish(conn);
    exit(1);
}

int
main(int argc, char **argv)
{
    const char *conninfo;
    PGconn     *conn;
    PGresult   *res;
    PGnotify   *notify;
    int         nnotifies;
    pid_t       pid, sid;

    /*
     * If the user supplies a parameter on the command line, use it as the
     * conninfo string; otherwise default to setting dbname=postgres and using
     * environment variables or defaults for all other connection parameters.
     */

    pid = fork();

    if (pid < 0) {
        exit(EXIT_FAILURE);
    } else if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    sid = setsid();
    if (sid < 0) {
        exit(EXIT_FAILURE);
    }

    if (argc > 1) {
        conninfo = strcat(argv[1], " user = myuser");
        fprintf(stderr, "%s", conninfo);
    }
    else
        conninfo = "dbname = mydb user = myuser";

    /* Make a connection to the database */
    conn = PQconnectdb(conninfo);

    /* Check to see that the backend connection was successfully made */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "Connection to database failed: %s",
                PQerrorMessage(conn));
        exit_nicely(conn);
    }

    /*
     * Issue LISTEN command to enable notifications from the rule's NOTIFY.
     */
    res = PQexec(conn, "LISTEN populateallocation");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "LISTEN command failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit_nicely(conn);
    }

    /*
     * should PQclear PGresult whenever it is no longer needed to avoid memory
     * leaks
     */
    PQclear(res);
    res = PQexec(conn, "LISTEN populaterfqrules");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "LISTEN command failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit_nicely(conn);
    }

    /*
     * should PQclear PGresult whenever it is no longer needed to avoid memory
     * leaks
     */
    PQclear(res);

    /* Quit after four notifies are received. */
    nnotifies = 0;
    while (nnotifies < 4)
    {
        /*
         * Sleep until something happens on the connection.  We use select(2)
         * to wait for input, but you could also use poll() or similar
         * facilities.
         */
        int         sock;
        fd_set      input_mask;

        sock = PQsocket(conn);

        if (sock < 0)
            break;              /* shouldn't happen */

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);

        if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
        {
            fprintf(stderr, "select() failed: %s\n", strerror(errno));
 

Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 19:21:03 +0400, Pavel Stehule  
[EMAIL PROTECTED] wrote:



Hello

Arrays in PostgreSQL always have to be regular. And before 8.3 an array
cannot contain NULLs, so you cannot simply resize a two-dimensional array :(. But
your function doesn't work in 8.3 either. So you can

a) use a 1D array and access it like myarray[10*(n1-1)+n2]

b) init array with string like

create or replace function testfunc()
returns void as $$
DECLARE
  myArray int[][];
BEGIN
  myArray :=  ('{'||array_to_string(array(select
'{0,0,0,0,0,0,0,0,0,0}'::text from
generate_series(1,10)),',')||'}')::int[][];
  FOR i IN 1..10 LOOP
FOR j IN 1..10 LOOP
  RAISE NOTICE '% %', i, j;
  myArray[i][j] := 1;
END LOOP;
  END LOOP;
  RETURN;
END
$$ language plpgsql;


Thanks for the info, but running the above gives me that :

ponline=# select testfunc();
ERROR:  cannot cast type text to integer[]
CONTEXT:  SQL statement "SELECT  ('{'||array_to_string(array(select  
'{0,0,0,0,0,0,0,0,0,0}'::text from  
generate_series(1,10)),',')||'}')::int[][]"

PL/pgSQL function "testfunc" line 4 at assignment

I think 8.1 does not have the text -> int[] cast available. I think I'm stuck  
with option a.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Richard Huxton

Pau Marc Munoz Torres wrote:

i test it and now the error is

mhc2db= select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local
as t1, precalc as t2 where t1.ce='ACIAD' and idr(t2.p1, t2.p4, t2.p6, t2.p7,
t2.p9, 'HLA-DRB5*0101')2;
ERROR:  relation pssms does not exist
CONTEXT:  SQL statement SELECT  score from PSSMS where AA= $1  and POS=1
and MOLEC= $2 
PL/pgSQL function idr line 11 at select into variables

pssm was a temporary table that i used to calculate the index in precalc
table with idr function, should i rebuilt it? the problem is that if i can't
use idr as an index it eill be to slow


I think you need to take a step back and explain what it is you are 
trying to do - you shouldn't be using an external table in an indexed 
function at all.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Joshua D. Drake

Oleg Bartunov wrote:

Hello,

any experience with Hibernate + PostgreSQL? How flexible and smart is this
combination? Is it possible to tune selected queries?


In short, it sucks :).

Long description: Hibernate is a mature ORM for Java. It is in use with 
PostgreSQL in quite a few installations and even a bunch that we are 
responsible for. It does work as well as any ORM can work.


The problem is, it's an ORM, and ORMs are dumb in their actual ability to 
work with any real relational database.


If you are not lazy you can push outside the standard hibernate methods 
and produce very usable code but then you have to wonder why you have 
hibernate there at all.



Sincerely,

Joshua D. Drake




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Abraham, Danny
Hello

 

Recently an application endless loop crashed the PG server.

I am looking for the safest way to limit the growth of all the
files in the pg_log directory,

while still maintaining a reasonable period of log history.

 

Any suggestions?

 

Our current setup is:

 

#---


# ERROR REPORTING AND LOGGING

#---


redirect_stderr = on

log_line_prefix = '%m'

client_min_messages = error

log_min_messages = error

 

log_filename = 'CTMD-%d.log'

log_truncate_on_rotation = on

log_rotation_age = 1d

log_rotation_size = 10MB

 

 

Thanks

 

 

Danny Abraham

BMC Software

CTMD Business Unit

972-52-4286-513

[EMAIL PROTECTED]

 

 



Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Abraham, Danny

Hello

Recently an application endless loop crashed the PG server.
I am looking for the safest way to limit the growth of all the files in 
the “pg_log” directory,
while still maintaining a reasonable period of log history.

Any suggestions?

Our current setup is:

#---
# ERROR REPORTING AND LOGGING
#---
redirect_stderr = on
log_line_prefix = '%m'
client_min_messages = error
log_min_messages = error

log_filename = 'CTMD-%d.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 10MB


Thanks


Danny Abraham
BMC Software
CTMD Business Unit
972-52-4286-513
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] notify/listen disappearing data

2007-11-29 Thread Richard Huxton

Sim Zacks wrote:

select version()
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.3.5  (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, 
pie-8.7.7.1)


Upgrade to 8.0.14 - you are missing 13 sets of bugfixes.

Today I added 2 new fields to a single row table and populated them with 
values.
I noticed that the values disappeared after a while and could not figure 
out why.

After they disappeared a couple more times I started investigating.


It can't be an issue of transactions, because fields that existed before 
the daemon was started are not reverted to any prior state.
The table itself isn't referenced in the code here, it is only called in 
a function.


The function will have its query planned at the start of a connection. 
Your daemon still had a query plan left over from before you added those 
columns. You'll see similar discussion in the archives regarding 
temporary tables and plpgsql.


Of course, if you had executed the SQL directly rather than through a 
function it would have been planned each time and you'd not have noticed 
any problems. Unless of course you had PREPAREd a query which, again, 
would have frozen its plan.


More recent versions of PG have better plan invalidation, although I 
don't know whether they address this particular scenario.
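Richard's point about frozen plans can be illustrated outside the database. The sketch below is a loose Python analogy of a session-level plan cache, not PostgreSQL's actual mechanism: the "plan" is built once per connection, so schema changes made afterwards stay invisible until the session (here, the daemon) reconnects.

```python
# A long-lived "connection" that caches a query plan the first time a
# function runs, the way pre-8.3 plpgsql cached plans per session.
class Connection:
    def __init__(self, table_columns):
        self.table_columns = table_columns   # the live schema
        self._cached_plan = None             # per-session plan cache

    def run_function(self):
        if self._cached_plan is None:
            # Plan is built once, against the schema as it looks *now*.
            self._cached_plan = list(self.table_columns)
        return self._cached_plan

conn = Connection(["id", "updated_at"])
conn.run_function()                   # plan frozen: ["id", "updated_at"]
conn.table_columns.append("new_col")  # like ALTER TABLE ... ADD COLUMN
stale = conn.run_function()           # still returns the old plan
assert "new_col" not in stale         # new column invisible to the daemon
```

Restarting the daemon corresponds to constructing a fresh `Connection`, which is exactly why Sim's values stopped disappearing after the restart.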


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] notify/listen disappearing data

2007-11-29 Thread Gregory Stark
Sim Zacks [EMAIL PROTECTED] writes:

 select version()
 PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc 
 (GCC) 3.3.5  (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)

Several of the bugs listed in the 13 bug-fix releases which follow this one
relate to tuple visibility bugs and data corruption bugs. I would suggest you
first update to the latest bug-fix release for this version before even trying
to debug anything further. 

I'm not sure any of them are necessarily causing this problem but even if
they're not it's only a matter of time before they do something bad to your
data.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Usama Dar
I had exactly the same discussion with a bunch of Java developers over the
use of Hibernate with Postgres, and most of them feel that it gives them the
ability to write code and not worry about which backend will be plugged in:
Oracle, Postgres, MySQL, or any other. If you are using it with
something like Spring you just need to change a bunch of XML files and you
are good to go. I guess it's the Java school of thought; they like to be
*platform independent*.

On Nov 29, 2007 8:52 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:

 Oleg Bartunov wrote:
  Hello,
 
  any exprerience with Hibernate + PostgreSQL ? How does this combination
  is flexible and smart ? Is't possible to tune selected queries ?

 In short, it sucks :).

 Long description: Hibernate is a mature ORM for Java. It is in use with
 PostgreSQL in quite a few installations and even a bunch that we are
 responsible for. It does work as well as any ORM can work.

 The problem is, its an ORM and ORMs are dumb in there actual ability to
 work with any real relational database.

 If you are not lazy you can push outside the standard hibernate methods
 and produce very usable code but then you have to wonder why you have
 hibernate there at all.


 Sincerely,

 Joshua D. Drake


 
  Regards,
  Oleg
  _
  Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
  Sternberg Astronomical Institute, Moscow University, Russia
  Internet: [EMAIL PROTECTED], 
  http://www.sai.msu.su/~megera/http://www.sai.msu.su/%7Emegera/
  phone: +007(495)939-16-83, +007(495)939-23-83
 
  ---(end of broadcast)---
  TIP 5: don't forget to increase your free space map settings
 


 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/




-- 
Usama Munir Dar http://linked.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Usama Dar
What I do is set up a cron job which runs daily or every second day, creates
a tar.gz, and copies the logs to a separate location, and then another cron job
which clears up archives more than one month old. That seems to be the way
to go; I don't think you will find a postgresql.conf parameter which does
that.
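As a sketch of the two cron jobs described above, here is a hedged Python version. The directory paths and the 30-day retention are assumptions to adjust for your installation, and a plain shell script with tar and find would do the same job:

```python
import os
import tarfile
import time

LOG_DIR = "/var/lib/pgsql/data/pg_log"   # assumed location of pg_log
ARCHIVE_DIR = "/var/backups/pg_log"      # assumed archive destination
KEEP_DAYS = 30                           # assumed retention period

def archive_logs(log_dir: str, archive_dir: str) -> str:
    """First cron job: bundle the current log files into a dated tar.gz."""
    os.makedirs(archive_dir, exist_ok=True)
    name = os.path.join(archive_dir, time.strftime("pg_log-%Y%m%d.tar.gz"))
    with tarfile.open(name, "w:gz") as tar:
        for fname in os.listdir(log_dir):
            if fname.endswith(".log"):
                tar.add(os.path.join(log_dir, fname), arcname=fname)
    return name

def prune_archives(archive_dir: str, keep_days: int = KEEP_DAYS) -> None:
    """Second cron job: delete archives older than keep_days."""
    cutoff = time.time() - keep_days * 86400
    for fname in os.listdir(archive_dir):
        path = os.path.join(archive_dir, fname)
        if fname.endswith(".tar.gz") and os.path.getmtime(path) < cutoff:
            os.remove(path)

# Run archive_logs(LOG_DIR, ARCHIVE_DIR) daily and
# prune_archives(ARCHIVE_DIR) weekly, e.g. via two crontab entries.
```

With log_truncate_on_rotation and log_rotation_age already set as in Danny's config, this only adds the off-box copy and the retention cap.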



On Nov 29, 2007 8:53 PM, Abraham, Danny [EMAIL PROTECTED] wrote:

  Hello



 Recently an application endless loop crashed the PG server.

 I am looking for the safest way  to limit the growth of the all the files
 in the pg_log directory,

 While still  maintain reasonable period of log.



 Any suggestions?



 Our current setup is:




 #---

 # ERROR REPORTING AND LOGGING


 #---

 redirect_stderr = on

 log_line_prefix = '%m'

 client_min_messages = error

 log_min_messages = error



 log_filename = 'CTMD-%d.log

 log_truncate_on_rotation = on

 log_rotation_age = 1d

 log_rotation_size = 10MB





 Thanks





 Danny Abraham

 BMC Software

 CTMD Business Unit

 972-52-4286-513

 [EMAIL PROTECTED]








-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Simon Riggs
On Thu, 2007-11-29 at 09:53 -0600, Abraham, Danny wrote:

 Recently an application endless loop crashed the PG server.
 
 I am looking for the safest way  to limit the growth of the all the
 files in the “pg_log” directory,
 
 While still  maintain reasonable period of log.

Sounds like we should have these features 

- pg_log file archiving when we reach a certain volume/number of logs

- ability to set limits on the number of messages a single backend can
emit within a certain time period.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Pau Marc Munoz Torres
Ok,

 I have two tables. The first one, which I call precalc, has the following
structure:

id . serial
p1  varchar
p4  varchar
p6  varchar
p7  varchar
p9  varchar

and an index on a real number resulting from a function (the function
is called idr and I talk about it below).

The other table is local, which has the following fields:

ce varchar
sp varchar
pos integer
id integer

id values for both tables are the same.


The idr function is a function that I wrote:

create  function IDR(char,char,char,char,char,varchar(20)) returns real AS'
DECLARE
 output real;
 P1 real;
 P4 real;
 P6 real;
 P7 real;
 P9 real;

BEGIN

select into  P1 score from PSSMS where AA=$1 and POS=1 and
MOLEC=$6;
select into  P4 score from PSSMS where AA=$2 and POS=4 and
MOLEC=$6;
select into  P6 score from PSSMS where AA=$3 and POS=6 and
MOLEC=$6;
select into  P7 score from PSSMS where AA=$4 and POS=7 and
MOLEC=$6;
select into  P9 score from PSSMS where AA=$5 and POS=9 and
MOLEC=$6;

select into output P1+P4+P6+P7+P9;

return output;
END;
' LANGUAGE plpgsql IMMUTABLE;

where PSSMS was a temporary table (it doesn't exist any more).
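Conceptually, IDR just sums five position-specific scores keyed by (amino acid, position, molecule). A Python sketch with an invented toy score table — the keys and values below are illustrative only, not from the real PSSMS data:

```python
# Toy sketch of what IDR computes: the sum of five position-specific
# scores looked up by (amino_acid, position, molecule).
# All score values here are made up for illustration.
SCORES = {
    ("A", 1, "HLA-DRB5*0101"): 0.5,
    ("G", 4, "HLA-DRB5*0101"): 1.2,
    ("L", 6, "HLA-DRB5*0101"): -0.3,
    ("V", 7, "HLA-DRB5*0101"): 0.8,
    ("K", 9, "HLA-DRB5*0101"): 0.4,
}

def idr(p1, p4, p6, p7, p9, molec):
    # Mirrors the five SELECT INTO lookups in the plpgsql function,
    # one per anchor position 1, 4, 6, 7 and 9.
    positions = [(p1, 1), (p4, 4), (p6, 6), (p7, 7), (p9, 9)]
    return sum(SCORES[(aa, pos, molec)] for aa, pos in positions)

score = idr("A", "G", "L", "V", "K", "HLA-DRB5*0101")  # 0.5+1.2-0.3+0.8+0.4
```

This also shows why the function body fails once PSSMS is gone: the whole computation is just five lookups into that table, so the data it needs has to live somewhere permanent for the expression index to stay valid.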

Now I would like to perform a select to get p1, p4, p6, p7, p9, sp and pos from
those records whose index value is bigger than x:

mhc2db=> select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local
as t1, precalc as t2 where t1.ce='ACIAD' and idr(t2.p1, t2.p4, t2.p6, t2.p7,
t2.p9, 'HLA-DRB5*0101')>2;

If I perform a select like

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101')>2;

or

select * from local where ce='ACIAD';

works perfectly.

Is it clear enough? I don't know if I'm making myself understood; anyway, if
not, please ask me!!


thanks
pau

2007/11/29, Richard Huxton [EMAIL PROTECTED]:

 Pau Marc Munoz Torres wrote:
  i test it and now the error is
 
  mhc2db= select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from
 local
  as t1, precalc as t2 where t1.ce='ACIAD' and idr(t2.p1, t2.p4, t2.p6,
 t2.p7,
  t2.p9, 'HLA-DRB5*0101')>2;
  ERROR:  relation pssms does not exist
  CONTEXT:  SQL statement SELECT  score from PSSMS where AA= $1  and
 POS=1
  and MOLEC= $2 
  PL/pgSQL function idr line 11 at select into variables
 
  pssm was a temporary table that i used to calculate the index in precalc
  table with idr function, should i rebuilt it? the problem is that if i
 can't
  use idr as an index it eill be to slow

 I think you need to take a step back and explain what it is you are
 trying to do - you shouldn't be using an external table in an indexed
 function at all.

 --
Richard Huxton
Archonet Ltd




-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Oleg Bartunov

On Thu, 29 Nov 2007, Joshua D. Drake wrote:


Oleg Bartunov wrote:

Hello,

any exprerience with Hibernate + PostgreSQL ? How does this combination
is flexible and smart ? Is't possible to tune selected queries ?


In short, it sucks :).

Long description: Hibernate is a mature ORM for Java. It is in use with 
PostgreSQL in quite a few installations and even a bunch that we are 
responsible for. It does work as well as any ORM can work.


The problem is, its an ORM and ORMs are dumb in there actual ability to work 
with any real relational database.


If you are not lazy you can push outside the standard hibernate methods and 
produce very usable code but then you have to wonder why you have hibernate 
there at all.


Thank you, that's what I was afraid of. 



Sincerely,

Joshua D. Drake




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 4: Have you searched our list archives?

 http://archives.postgresql.org/



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] viewing definition of CREATE TYPE name AS ENUM...

2007-11-29 Thread Usama Dar
There is no direct system information function like pg_get_indexdef etc., but
you can find the enum labels in the pg_enum catalog table, and there are some
enum support functions.

See if they help you
http://www.postgresql.org/docs/8.3/static/functions-enum.html

On Nov 29, 2007 1:53 PM, Richard Broersma Jr [EMAIL PROTECTED] wrote:

 After a type is created, is it possible to view the definition of this
 type?

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster




-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Andrew Sullivan
On Thu, Nov 29, 2007 at 04:20:30PM +, Simon Riggs wrote:
 
 - pg_log file archiving when we reach a certain volume/number of logs

IMO, by the time we get that complicated, admins need to start doing some
work.  That is, I think, work that should be nowhere near the main
PostgreSQL code.  If someone wants to set up a foundry project for nice
crontab suggestions and Windows scheduler suggestions (however that works),
that'd be rather another matter.
 
 - ability to set limits on the number of messages a single backend can
 emit within a certain time period.

Please, no.  That sounds like a way to suppress useful error messages when
something Really Bad is happening.  As it happens, that's when the flood of
messages is most important.

The general tone in this thread sounds like, "I don't have time to
administer this; help me."  Providing such help is a bad idea. 

A

-- 
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Csaba Nagy
This is the problem:

 ... t2.idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101') > 2;

What is that t2.idr ? Based on the syntax postgres must look for a
function (because of the parentheses), and it thinks t2 is the schema
where it must look for it.

Cheers,
Csaba.





[GENERAL] Problems Dumping DB

2007-11-29 Thread Pepe Barbe

Hello Everyone,

We have a DB running on PostgreSQL 8.0.12 that has been running into  
a string of issues, and as of now we have run out of solutions, so we  
would like to hear some input from some more knowledgeable people.


Initially we found out that our DB had the xid-wrap problem. Given  
some advice from the folks of the IRC channel we went for a whole-DB  
vacuum. Starting with this process we ran into tables that had invalid  
pages. Given that we could recover the data in these tables, we decided  
to zero out the tables with invalid pages. Later we ran into some  
tables being unable to be vacuumed because we got this error:


INFO:  vacuuming public.cdrs_part_2007_10_08
ERROR:  could not access status of transaction 3591307276
DETAIL:  could not open file /home/postgres/data/pg_clog/0D60: No  
such file or directory


We decided to drop these tables as well, since we could rebuild that  
data without problems.


Given these errors, we started suspecting (A little bit late) hardware  
errors. Well, the only issue we found was some FileSystem corruption  
in one of the partitions were the DB resides, which we seem to have  
been able to recover from, although I don't know if we might have lost  
some data in that process. After that we have searched for bad blocks  
on the drives and bad RAM and have found none. Although we are  
suspecting that the culprit is either SCSI controller on the storage  
array or either a failing drive that the storage array is not  
reporting as bad.


Anyway we were able to finish the whole DB vacuum and recover from the  
xid-wrap. Having done this we decided to do a dump of the DB as a  
backup measure. While doing so we got this error:


pg_dump ts -t cdrs_part_2007_03_01 | gzip > /home/postgres/cdrs2/backup/cdrs_part_2007_03_01.gz

pg_dump: ERROR:  invalid memory alloc request size 18446744073709551613
pg_dump: SQL command to dump the contents of table  
cdrs_part_2007_03_01

failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.cdrs_part_2007_03_01 (calling_num,
calling_ser_num, dialed_num, called_num, called_ser_num, call_type,
billing_num, treatment_code, billing_ser_num, event_info,  
first_orig_site,

first_orig_sect, first_orig, first_orig_memb, prev_orig_site,
prev_orig_sect, prev_orig, prev_orig_memb, last_orig_site,  
last_orig_sect,

last_orig, last_orig_memb, orig_time, handoffs, first_term_site,
first_term_sect, first_term, first_term_memb, prev_term_site,
prev_term_sect, prev_term, prev_term_memb, last_term_site,  
last_term_sect,

last_term, last_term_memb, discon_time, call_duration, completion_code,
term_num, answer_time, carrier_id_code, redirecting_num,  
lnp_calling_msid,

lnp_called_msid, lnp_billing_msid, cdr_file) TO stdout;

This error happens on data that was on the partition whose filesystem was  
corrected. At least one other table on the same filesystem gives the  
same error; tables on other filesystems do not give this error when  
dumping. Given this, we checked the FS of this partition and the drives  
of the RAID unit that forms the partition and haven't found any more  
problems.


Any ideas on what we might do to recover from this problem and finish  
the dump? Any other things we might do to test the consistency of our  
database?


Thanks,
Pepe



Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Usama Dar
As far as automatic archiving is concerned, the easiest would be to provide
something similar to archive_command. However, I can't imagine how the
second suggestion could be useful; that sounds pretty dangerous to me.




On Nov 29, 2007 9:20 PM, Simon Riggs [EMAIL PROTECTED] wrote:

 On Thu, 2007-11-29 at 09:53 -0600, Abraham, Danny wrote:

  Recently an endless loop in an application crashed the PG server.
 
  I am looking for the safest way to limit the growth of all the
  files in the pg_log directory,
 
  while still maintaining a reasonable period of logs.

 Sounds like we should have these features

 - pg_log file archiving when we reach a certain volume/number of logs

 - ability to set limits on the number of messages a single backend can
 emit within a certain time period.

 --
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com






-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] Making a query from 2 tables at same time

2007-11-29 Thread Richard Huxton

Pau Marc Munoz Torres wrote:

Ok,

 I have two tables, first one, that i call precalc has the following
structure

...

and an index that is a real number resulting from a function (the function
is called idr and I talk about it below)


...

another table, called local, that has the following fields;
id values for both tables are the same.


idr function is a function that I wrote

create  function IDR(char,char,char,char,char,varchar(20)) returns real AS'

...


select into  P1 score from PSSMS where AA=$1 and POS=1 and
MOLEC=$6;

...

' LANGUAGE plpgsql IMMUTABLE;


This function is not IMMUTABLE, it queries a table. Labelling it 
IMMUTABLE doesn't make it so.


Oh and it takes char parameters but you seem to pass it varchar.


where PSSMS was a temporary table (it doesn't exist right now)


Then how can the function be meaningful?


now I would like to perform a select to get p1, p2, p6, p7, p9, sp and pos from
those records whose index value is bigger than x


OK


mhc2db= select t1.sp, t1.pos,t2.p1, t2.p4, t2.p6, t2.p7, t2.p9  from local
as t1, precalc as t2 where t1.ce='ACIAD' and idr(t2.p1, t2.p4, t2.p6, t2.p7,
t2.p9, 'HLA-DRB5*0101') > 2;

if i perfom a select like

select * from precalc where dr(p1,p4, p6, p7, p9, 'HLA-DRB5*0101') > 2;


I assume you mean idr(...)

If you want this to work properly, then your function needs to genuinely 
be immutable. That means it only depends on the values you pass in to 
it, not on any other tables.


If it works in the simple case above, then that is purely chance. So - 
can idr() calculate its score based on its inputs?



or

select * from local where ce='ACIAD';


I don't see what this query has to do with your problem.

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Richard Huxton

Andrew Sullivan wrote:

On Thu, Nov 29, 2007 at 04:20:30PM +, Simon Riggs wrote:

- pg_log file archiving when we reach a certain volume/number of logs


IMO, by the time we get that complicated, admins need to start doing some
work.  That is, I think, work that should be nowhere near the main
PostgreSQL code.  If someone wants to set up a foundry project for nice
crontab suggestions and Windows scheduler suggestions (however that works),
that'd be rather another matter.


Surely this is what logrotate and syslog-ng etc are for. This is a 
sysadmin problem not a DBA problem - any service can generate a lot of logs.
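For instance, a logrotate stanza along these lines (path hypothetical); copytruncate matters because the backends keep the log file open:

```conf
# Hypothetical /etc/logrotate.d/postgresql entry:
/var/log/postgresql/*.log {
    weekly
    rotate 8
    compress
    copytruncate    # truncate in place; don't rename under the postmaster
    missingok
    notifempty
}
```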


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Will PG use composite index to enforce foreign keys?

2007-11-29 Thread John Burger

Hi -

I know that the foreign key machinery will use an index on the  
referring column if one exists.  My question is whether it will use a  
composite index?  For instance:


create table allLemmaSenseMap (
  wordID integer references allLemmas,
  senseIDinteger references allSenses,
  primary key (wordID, senseID)
);

If I delete something from allLemmas, will the FK check use the PK  
index above?  (I know I should at least have an index on senseID as  
well, because of the other foreign key.)


As a secondary question, is there any way I could have answered this  
myself, using EXPLAIN ANALYZE, the system catalogs, etc?  EXPLAIN ANALYZE  
DELETE doesn't seem to show the FK checking that must go on behind the scenes.
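For what it's worth, the delete-side FK check boils down to a lookup on the referring column, so it can be inspected by hand. A hedged sketch (the value 42 and the assumption that allLemmas' key column is wordID are illustrative guesses):

```sql
-- The delete-side FK check is roughly this query; EXPLAIN it to see
-- whether the (wordID, senseID) primary-key index gets used:
EXPLAIN SELECT 1 FROM allLemmaSenseMap WHERE wordID = 42;

-- In 8.1 and later, EXPLAIN ANALYZE on the DELETE itself reports the
-- time spent in each FK trigger as a "Trigger for constraint" line:
BEGIN;
EXPLAIN ANALYZE DELETE FROM allLemmas WHERE wordID = 42;
ROLLBACK;
```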


Thanks.

- John D. Burger
  MITRE





Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Usama Dar
On Nov 29, 2007 6:27 PM, Willy-Bas Loos [EMAIL PROTECTED] wrote:

 I'll take that as a no.
 What I mean is to actually run exactly one cluster (no replicated copy) on
 more than one server. Of course, if that were possible, why would people
 bother with replication?

 I guess it is irrational to suggest that it would be possible, since each
 server would at least need to have its own copy of the DBMS software etc,
 or it would cease to be a separate server.

 Maybe Data Partitioning, as in the documentation link Shane sent,
 possibly combined with Slony for the other data per server, would be an
 option for me.

 Is there an implementation for this in PostgreSQL? It would have to be
 something like pgPool (middleware), because: How would the query know on
 which server to put it's data? And i guess i would need some Kerberos-like
 implementation for my authentication and authorization...


There used to be a parallel-server type solution for PostgreSQL called ExtenDB
(www.extendb.com), but I think it has now been acquired by a commercial
PostgreSQL vendor; I have seen others similar to it.



 cheers,

 WBL



 On Nov 29, 2007 1:23 PM, Shane Ambler [EMAIL PROTECTED] wrote:

  Willy-Bas Loos wrote:
   Hi,
  
   Is it possible to run one PostgreSQL cluster on more than one
  (hardware)
   server?
  
   WBL
  
  You would be looking for replication.
  Start with
  http://www.postgresql.org/docs/8.2/interactive/high-availability.html
  to get some idea on what is available for what you wish to achieve.
  Some of the projects that add these features are mentioned.
 
 
 
 
  --
 
  Shane Ambler
  [EMAIL PROTECTED]
 
  Get Sheeky @ http://Sheeky.Biz
 




-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Andrew Sullivan
On Thu, Nov 29, 2007 at 04:50:45PM +, Richard Huxton wrote:
 Surely this is what logrotate and syslog-ng etc are for. This is a 
 sysadmin problem not a DBA problem - any service can generate a lot of logs.

Yes.  But some have complained those things aren't Postgres-centric enough.
I don't see how, but some have argued strongly.  So I say, if there's a
demand, I've no objection to an add-on product (which could be especially
relevant if we get something like CPgAN); but not anything like additional
extensions to the existing logging infrastructure (what is there is nice,
because it provides things that the postmaster alone may know; but
additions would be gilding the lily, except with pyrites).

A

-- 
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke



Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Thu, 29 Nov 2007 11:58:36 -0500
Andrew Sullivan [EMAIL PROTECTED] wrote:

 On Thu, Nov 29, 2007 at 04:50:45PM +, Richard Huxton wrote:
  Surely this is what logrotate and syslog-ng etc are for. This is a 
  sysadmin problem not a DBA problem - any service can generate a lot
  of logs.
 
 Yes.  But some have complained those things aren't Postgres-centric
 enough. I don't see how, but some have argued strongly.  So I say, if
 there's a demand, I've no objection to an add-on product (which could
 be especially relevant if we get something like CPgAN); but not
 anything like additional extensions to the existing logging
 infrastructure (what is there is nice, because it provides things
 that the postmaster alone may know; but additions would be gilding
 the lily, except with pyrites).

The only argument I have ever heard that seemed like a realistic issue
with using logrotate or syslog was tgl's. You can lose data in the log
when rotating if you aren't using the PostgreSQL logging facility.

Other than that I think our current logging situation is a bit
ridiculous. [1]

Joshua D. Drake

[1]
http://www.commandprompt.com/blogs/joshua_drake/2007/11/postgresql_logging_time_for_a_change/


 
 A
 


- -- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHTvEeATb/zqfZUUQRAn5gAKCEVbnJ3AND8uoxOyH10wMamgg76wCfUO9m
wKMFNGEyH25TFgAQe2xTuu8=
=DAHh
-END PGP SIGNATURE-



[GENERAL] Discrpency in the GRANT docs

2007-11-29 Thread Erik Jones
I just noticed this.  In the section of the page, near the end, that  
describes the access privileges display generated by psql's \z (which  
come from pg_class.relacl) there is the following:


/ -- user who granted this privilege

That's not the case.  What gets listed there is the current owner of  
the table (at least in 8.2).  So, then the question is are the docs  
correct or is the behavior correct ?  Here's a test run:


usda=# select current_user;
current_user
--
admin
(1 row)

usda=# create table test (id integer);
CREATE TABLE
usda=# create role test_user1;
CREATE ROLE
usda=# create role test_user2;
CREATE ROLE
usda=# \z test
   Access privileges for database usda
Schema | Name | Type  | Access privileges
+--+---+---
public | test | table |
(1 row)

usda=# grant all on test to test_user1;
GRANT
usda=# \z test
Access privileges for database usda
Schema | Name | Type  |  Access privileges
+--+---+--
public | test | table | {admin=arwdxt/admin,test_user1=arwdxt/admin}
(1 row)

usda=# alter table test owner to test_user2;
ALTER TABLE
usda=# \z test
Access privileges for database usda
Schema | Name | Type  |  Access privileges
--------+------+-------+--------------------------------------------------------------
 public | test | table | {test_user2=arwdxt/test_user2,test_user1=arwdxt/test_user2}
(1 row)

usda=# alter table test owner to admin;
ALTER TABLE
usda=# \z test
Access privileges for database usda
Schema | Name | Type  |  Access privileges
+--+---+--
public | test | table | {admin=arwdxt/admin,test_user1=arwdxt/admin}
(1 row)


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] viewing definition of CREATE TYPE name AS ENUM...

2007-11-29 Thread Richard Broersma Jr
--- On Thu, 11/29/07, Usama Dar [EMAIL PROTECTED] wrote:

 See if they help you
 http://www.postgresql.org/docs/8.3/static/functions-enum.html


I will give this a try, thanks!

Regards,
Richard Broersma Jr.



Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 21:15:50 +0400, Rodrigo De León [EMAIL PROTECTED]  
wrote:



On Nov 29, 2007 9:33 AM, Max Zorloff [EMAIL PROTECTED] wrote:

I don't think that works.

ponline=# select ('{1,2,3}'::text)::int[];
ERROR:  cannot cast type text to integer[]


Can you try:

select ('{1,2,3}'::unknown)::int[];


Thanks, that works fine with 2-dim arrays too.
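For the record, the two-dimensional variant of the same trick would look like this (unverified sketch, same ::unknown cast as above):

```sql
-- Cast an untyped literal straight to a two-dimensional int array:
SELECT ('{{1,2},{3,4}}'::unknown)::int[][];
```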



Re: [GENERAL] Error compiling Slony I

2007-11-29 Thread Vivek Khera


On Nov 28, 2007, at 11:17 AM, Glyn Astill wrote:


I've already tried removing and re-installing bison, but I shall try
again as you suggest.



I recommended uninstalling bison, not re-installing it.




Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Tom Lane
Andrew Sullivan [EMAIL PROTECTED] writes:
 The general tone in this thread sounds like, I don't have time to
 administer this; help me.  Providing such help is a bad idea. 

Actually, can't it be solved easily enough with suitable configuration
of the existing logging parameters?  Just set things up so that log
files are recycled fairly quickly.
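Concretely, a setup along these lines caps the log volume at seven files (values hypothetical; these are the stock 8.2-era parameter names):

```conf
# Hypothetical postgresql.conf settings for self-recycling logs:
redirect_stderr = on                 # renamed logging_collector in 8.3
log_filename = 'postgresql-%a.log'   # %a = abbreviated weekday name
log_rotation_age = 1440              # minutes; rotate daily
log_truncate_on_rotation = on        # overwrite last week's file
```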

regards, tom lane



Re: [GENERAL] Discrpency in the GRANT docs

2007-11-29 Thread Tom Lane
Erik Jones [EMAIL PROTECTED] writes:
 I just noticed this.  In the section of the page, near the end, that  
 describes the access privileges display generated by psql's \z (which  
 come from pg_class.relacl) there is the following:

 / -- user who granted this privilege

 That's not the case.  What gets listed there is the current owner of  
 the table (at least in 8.2).

Your test case does not show that.  I think you missed this paragraph:

: If a superuser chooses to issue a GRANT or REVOKE command, the command
: is performed as though it were issued by the owner of the affected
: object. In particular, privileges granted via such a command will appear
: to have been granted by the object owner. (For role membership, the
: membership appears to have been granted by the containing role itself.)

regards, tom lane



[GENERAL] ERROR: failed to find conversion function from unknown to integer[]

2007-11-29 Thread Rodrigo De León
t=# select version();
version

 PostgreSQL 8.3beta3, compiled by Visual C++ build 1400
(1 row)

t=# -- foo is of type unknown
t=# select '{1,2,3}' as foo;
   foo
-
 {1,2,3}
(1 row)

t=# -- OK. foo is of type int[]
t=# select ('{1,2,3}')::int[] as foo;
   foo
-
 {1,2,3}
(1 row)

t=# -- OK. foo is of type unknown
t=# select (('{1,2,3}'::text)::unknown) as foo;
   foo
-
 {1,2,3}
(1 row)

t=# -- Barfs. Why?
t=# select (('{1,2,3}'::text)::unknown)::int[] as foo;
ERROR:  failed to find conversion function from unknown to integer[]

Thanks for your time.



Re: [GENERAL] Discrpency in the GRANT docs

2007-11-29 Thread Erik Jones


On Nov 29, 2007, at 12:05 PM, Tom Lane wrote:


Erik Jones [EMAIL PROTECTED] writes:

I just noticed this.  In the section of the page, near the end, that
describes the access privileges display generated by psql's \z (which
come from pg_class.relacl) there is the following:



/ -- user who granted this privilege



That's not the case.  What gets listed there is the current owner of
the table (at least in 8.2).


Your test case does not show that.  I think you missed this paragraph:

: If a superuser chooses to issue a GRANT or REVOKE command, the  
command

: is performed as though it were issued by the owner of the affected
: object. In particular, privileges granted via such a command will  
appear

: to have been granted by the object owner. (For role membership, the
: membership appears to have been granted by the containing role  
itself.)


regards, tom lane


Ah, thanks, I did miss that one.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] PostgresSQL vs. Informix

2007-11-29 Thread David Fetter
On Wed, Nov 28, 2007 at 11:11:07PM -0700, Gregory Williamson wrote:

 * There is no equivalent of a synonym; a view can be used to fake this 
 sometimes
  but where Informix lets you create a synonym to a table in another 
 database /
  instance, PostgreSQL doesn't. dblink can be used to poke a hole to other
  databases though, including non-postgres ones.

dblink, a contrib module, is only for Postgres databases.  There are
several other options including dblink-tds and DBI-Link on pgfoundry
which let you connect to other kinds of data store.

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



[GENERAL] Dump/Restore Large Object OID

2007-11-29 Thread Norberto Delle

Hi all

If I don't use the --oids option when dumping a database with pg_dump,
can I assure that the loid field of the pg_largeobject table will keep 
it's value when restoring?


Thanks in advance

Norberto



Re: [GENERAL] HD is flooded by Error Log info

2007-11-29 Thread Andrew Sullivan
On Thu, Nov 29, 2007 at 12:48:35PM -0500, Tom Lane wrote:
 Andrew Sullivan [EMAIL PROTECTED] writes:
  The general tone in this thread sounds like, I don't have time to
  administer this; help me.  Providing such help is a bad idea. 
 
 Actually, can't it be solved easily enough with suitable configuration
 of the existing logging parameters?  Just set things up so that log
 files are recycled fairly quickly.

If recycling is allowed, yes.  I had the impression that some weren't
willing to do the recycle bits, though. 

A

-- 
Andrew Sullivan
Old sigs will return after re-constitution of blue smoke



Re: [GENERAL] PostgresSQL vs. Informix

2007-11-29 Thread Jeff Trout


On Nov 28, 2007, at 6:11 PM, Jeff Larsen wrote:

I can't offer much in the way of a case study, but I am an Informix
customer that has been doing an informal feasibility study on
switching to PostgreSQL. I've learned a ton from the high quality PG
docs and from the PG mailing lists.

I switched a semi-large Informix install to PG and I had nearly the  
opposite experience.  I had nothing but issues with Informix.  We had  
a nice expensive support contract and got snuffed over bugs and  
problems we'd find.  (It took them 9 weeks to admit a bug, and they  
said they may have a fix in another 10-15 weeks.  In the meantime,  
just learn to live with it). We also hit a cool bug that prevented us  
from taking a real backup of one of our informix installs.  that  
was fun.  Informix support again refused to help us.   The PG  
community on the other hand is very eager to help.  While we don't  
write them a check for $lots, they give better support.  Plus given  
it is open source you can always look under the hood while digging up  
a problem.


I found for some things informix would work nicer, but in other areas  
pg would smash it or would have other things to help make up for it.   
Replication was a breeze to setup in Informix.  I'm trying to think  
of some other things.. failing miserably... heh.


Granted, these tests were a while ago - dealing with Informix 7 and  
some 9.


Some of the big things I loved in PG over Informix were the stored  
procedures were much more flexible, triggers worked better, I could  
store more data in it (I got bit by the 2^21 pages of data per table  
limit in Informix. Luckily that error comes up as another unrelated  
error if you hit it), and I found administration to be much simpler.


Sorry I don't have more details, this was mostly back around 2001-02  
or so.


IIRC, didn't some big US govt group move to PG from Informix?  I  
think Bruce mentioned something once about that..

--
Jeff Trout [EMAIL PROTECTED]
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/






[GENERAL] problems with email...

2007-11-29 Thread Pedro Doria Meunier
Hi.

I seem to have *extreme* problems sending messages to this list... :-(
Could one of the people responsible for the list see if anything is
wrong with my account, please?

Thanks!
--
Pedro Doria Meunier 
Ips da Olaria
Edf. Jardins do Garajau, 4 r/c Y
9125-163 Caniço
Madeira
Portugal
GSM: +351 96 17 20 188 Skype: pdoriam
http://www.madeiragps.com


[GENERAL] free ETL using Postgresql 8.2.x

2007-11-29 Thread Benoît Carpentier
Dear all,

I've released a simple graphical ETL tool using PostgreSQL under Windows.
It works with .txt or .csv files.
It is called Benetl and you can find (and freely download) it at :
www.benetl.net

This is a first version; your comments are welcome. There is a forum you
can use to report problems or requests.

Thanks for your attention,

-- 
Benoît Carpentier
www.benetl.net
Founder of Benetl  Java Developer





Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Oliver Kohll

Begin forwarded message:



On Nov 29, 2007 8:52 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
Oleg Bartunov wrote:
 Hello,

 any experience with Hibernate + PostgreSQL? How flexible and smart is
this combination? Is it possible to tune selected queries?

In short, it sucks :).

Long description: Hibernate is a mature ORM for Java. It is in use  
with

PostgreSQL in quite a few installations and even a bunch that we are
responsible for. It does work as well as any ORM can work.

The problem is, it's an ORM, and ORMs are dumb in their actual ability  
to

work with any real relational database.

If you are not lazy you can push outside the standard hibernate  
methods

and produce very usable code but then you have to wonder why you have
hibernate there at all.


Sincerely,

Joshua D. Drake


Well ORM has its uses.

We use both Hibernate and raw SQL in the same application - SQL when  
we need fast querying on a database storing user data and Hibernate  
just to persist the state of the application to disk so it's saved  
between restarts. It's great not to have to manage this manually.


Hibernate is slow and the generated queries can be massive yes, but in  
our example the only time it gets any real use is as the app is  
booting up and restoring Java objects from the db. On the plus side,  
it produces a fairly easily understood schema that matches your  
objects so you're able to query the state of your app objects with SQL  
on the command line.


Regards

Oliver Kohll




[GENERAL] Cascading Trigger Prevention

2007-11-29 Thread JonXP
Hello All,

I have a table that contains a nested set (using paths), and I'm
trying to create a trigger that updates the timestamps of a node and
all of its parents on a modification.  Unfortunately, this is causing
infinitely recurring triggers.

The insert occurs, which fires the trigger, which then updates all of
the parent rows, which then fires the trigger again for each update,
which recurses yet again.

I have been looking with no success for a way to turn off triggers,
and am surprised that there is no method of preventing a trigger from
running when performing a query.  Even more bothersome is the fact
that a trigger has no way of knowing when it has been called by
another trigger.

I don't want to have to move my modified timestamps to a separate
table just so that the triggers don't recurse themselves, but this is
looking like it will be the case.

Does anyone have any thoughts or alternate suggestions?
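One workaround sketch that keeps everything in one table: make the trigger a no-op when the incoming UPDATE changed nothing but the timestamp, so the propagating UPDATEs stop re-firing after one level. The table name nodes and the columns path/modified are assumptions, since the schema isn't shown:

```sql
-- Hypothetical sketch: skip propagation for timestamp-only updates.
CREATE OR REPLACE FUNCTION touch_ancestors() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'UPDATE'
       AND NEW.path = OLD.path
       AND NEW.modified IS DISTINCT FROM OLD.modified THEN
        RETURN NEW;   -- timestamp-only update: don't propagate again
    END IF;
    NEW.modified := now();
    UPDATE nodes SET modified = now()
     WHERE NEW.path LIKE path || '/%';  -- strict ancestors only
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER touch_ancestors_trg
    BEFORE INSERT OR UPDATE ON nodes
    FOR EACH ROW EXECUTE PROCEDURE touch_ancestors();
```

The guard is approximate (it only compares path and modified, and LIKE would misfire on paths containing % or _), but it shows the shape of the idea.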



[GENERAL] certificate based authorization

2007-11-29 Thread Sebastian - Anton PONOVESCU
Hello
 
Is there a way to use certificate-based authorization with PostgreSQL? I
already implemented authentication, but among the people that my CA
certifies (and whom I trust, by the way), I want to distinguish, for a
particular server, to whom I grant access and to whom I don't, even if they
are who they claim to be. And this based only on certificates, not user /
pass or other mechanisms like LDAP / PAM.
 
Thank you and best regards,
Sebastian
 


Re: [GENERAL] postgres 8.3 beta 2 storage question

2007-11-29 Thread Filip Rembiałkowski
2007/11/27, SHARMILA JOTHIRAJAH [EMAIL PROTECTED]:

 1. How does postgres version 8.3 beta 2 handle varchar and numeric data
 types in terms of storage
 I understand for varchar it has a 1-byte overhead (instead of 4) if length < 128
 How does it handle for numeric? The manual says
 The actual storage requirement is two bytes for each group of four decimal
 digits, plus five to eight bytes overhead.
which manual?
( not these two: )
http://www.postgresql.org/docs/8.1/static/datatype.html#DATATYPE-NUMERIC-DECIMAL
http://www.postgresql.org/docs/8.2/static/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL


 2. Does postgres always insert a complete record into a page, unlike oracle
 whose record can span multiple pages? Does this waste space too?
records larger than one page are TOASTed

http://www.postgresql.org/docs/current/static/storage-page-layout.html
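As an aside, pg_column_size() (available since 8.1) is a handy way to check such storage questions empirically rather than from the manual:

```sql
-- Compare the on-disk size of values directly (includes varlena header):
SELECT pg_column_size('12345678'::varchar)  AS varchar_bytes,
       pg_column_size(1234.5678::numeric)   AS numeric_bytes;
```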

-- 
Filip Rembiałkowski



[GENERAL] log_line_prefix='%t %u %d %h %p %i %l %x ' causes error

2007-11-29 Thread Andrus
I added the following line to postgresql.conf file:

log_line_prefix='%t %u %d %h %p %i %l %x '

but got an error.

2007-11-29 12:40:41 LOG:  syntax error in file D:/Program 
Files/PostgreSQL/8.2/data/postgresql.conf line 482, near token %


I checked and all options are legal.

What am I doing wrong?

Why does the error message not show exact information about the error?

Andrus.





Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-29 Thread Lew

Lew wrote:

Try eliminating the double quotes in the CSV file.  Wannabe NULL
would then be ,, (consecutive commas)
in the CSV.  From the docs, you don't even need the NULL AS
clause in your COPY statement.


Ivan Sergio Borgonovo wrote:

Exactly what I did because fortunately there weren't too many chances
of weird stuff in 2000 records (sed -e 's/,"",/,,/g').


And this worked, right?


Anyway, with NULL AS '' and without it, I still can't import NULL
without pre-processing.


What pre-processing is that?


I thought it may be missing total support of UTF-8 or if I did
something wrong or it is actually a feature.


This clearly has nothing to do with UTF-8, and everything to do with comma-comma 
representing a NULL and comma-quote-quote-comma representing an empty string.


--
Lew
This post contained two requests for responses.



[GENERAL] Postgres High Availablity Solution needed for hot-standby and load balancing

2007-11-29 Thread Ragnar Heil
Hi

our customer has got the following requirements:

Req1) Master master replication supported, not only master / slave
replication with only the master being writable. If you do have multiple
slave systems they are only useful from a backup and standby
perspective.  Our Application must have a db-connection it can write to.

Req2) Replication of schema should also be possible, not only data 

Req3) Not only a hot-standby-solution is needed. Load Balancing is
wanted for the future.


Currently I am looking at EnterpriseDB, but it seems that they don't
support multi-master replication.

best regards
Ragnar



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Porting from FrontBase

2007-11-29 Thread Martin
I am looking at porting a web application from FrontBase to
Postgresql. If anybody has some tips on how to deal with
this, I'd appreciate hearing them. Thanks.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Postgres WarmStandby using ZFS or Snapshot to create Web DB?

2007-11-29 Thread Jennifer Spencer
I am looking for suggestions in setting up a large postgres database scenario. We are running a 
science project with a lot of data expected from the science instrument. If you have time to comment, 
any advice is most welcome!


Here's the deal:
1. We expect to store ~1TB per year of data for 10 years. Mostly this is a warehouse situation - not a 
lot of updates, or deletes, but a lot of inserts.
2. We need to replicate a subset of our postgres data to an international science community, and in 
turn take in some data from them (we plan to do this via Slony-1 unless there is a compelling reason 
not to).

3. We need to make a copy of our database available to the general internet 
community.
4. We need to have a Warm Standby available in case of disaster. We plan to use PITR with WAL files 
for this (again, unless there is a compelling reason not to).
5. We need to make regular full tape backups (~weekly) and occasionally scheduled maintenance (think 
quarterly maintenance).


We do not have an endless budget, sadly, so I could use some help as to how to go about this. Having 
gone from a job where my database software actually had paid tech support to one that doesn't 
(Postgres), I am pretty concerned about what could go wrong.


Assume our Primary server (A) is good enough to serve our in-house users, and our Warm Standby (B) is 
a physical duplicate of A. My plan is to copy WAL files to B. Make a tape backup from B weekly, 
keeping it out of recovery mode for ~6 hours, or alternatively make a snapshot of B's data files at a 
given time and tape off the snapshot. This takes care of A & B, and the backups, but what about the 
other requirements?


How do we get data to our web community w/out fear of hacking to the primary? And how do we do that 
economically? There is one plan in place to use a set of snapshot disks from A's data files to act as 
the web database's files. Can we do that? Is that exceptionally stupid? Another plan involves using a 
Solaris 10 ZFS solution to clone the warm standby B's files to act as a web database's files (see: 
http://www.lethargy.org/~jesus/archives ... crack.html for more). I am not sure either one of the 
above solutions will work quickly. We'd like a turnaround time from A to B to Web of less than 30 
minutes for newly-created tables, or new data in existing tables.


Lastly, we plan to pinhole our firewall for trusted Slony-1 science customers. People that we 
already know who have specific IP addresses. We have yet to figure out the drag to our Primary (A) due 
to Slony-1. Any experience with that out there?


My prior work experience involves a 1TB Sybase database, its warm-standby and regular backups & 
quarterly maintenance. I am new to Postgres and the idea of no tech support phone calls when things 
break is a scary one! I am trying to create a belt-and-suspenders redundant solution so that if 
something breaks, I have time to figure out what went wrong and fix it before the users even know 
there's a problem.
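
For the WAL-shipping part of a plan like this, the 8.2-era configuration boils down to two commands; the archive path below is a placeholder to adapt (real setups usually use scp/rsync and a blocking restore script rather than plain cp):

```
# on A (postgresql.conf): ship each completed WAL segment to the standby
archive_command = 'cp %p /standby/walarchive/%f'

# on B (recovery.conf): replay segments as they arrive
restore_command = 'cp /standby/walarchive/%f %p'
```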




Re: [GENERAL] 1 cluster on several servers

2007-11-29 Thread Willy-Bas Loos
This is the system i am talking about:
http://people.planetpostgresql.org/xzilla/index.php?/archives/326-Solving-the-big-science-checklist.html

There are many parties involved, and i am trying to figure out what
configuration would be ideal for ours.

On Nov 29, 2007 3:20 PM, Geoffrey [EMAIL PROTECTED] wrote:

 Willy-Bas Loos wrote:
  I'll take that as a no.
  What i mean is to actually run exactly one cluster (no replicated copy)
 on
  more than one server. Of course, if that were possible, why would people
  bother with replication..
 
  I guess it is irrational to suggest that it would be possible, since
 each
 server would at least need to have its own copy of the DBMS software
 etc,
  or it would cease to be a separate server.

 I think you need to better identify what you're trying to do.  I can
 think of a couple of different solutions based on the limited info
 provided.  You've already said you don't want replication.

 We have a scenario where we have a data silo that is shared between two
 servers, so the data exist in one place.  To make things simple, if one
 server fails, the postmasters running on that server are started on the
 other server.  This is a hot/hot fail over implementation as we have
 multiple postmasters running.

 You could implement a hot/warm fail over solution if you're running a
 single postmaster.

 Finally, you might be thinking of something like a beowulf cluster where
 multiple machines function as a single machine. I can't help you with
 that scenario as I don't have any experience with it and postgresql.

 --
 Until later, Geoffrey

 Those who would give up essential Liberty, to purchase a little
 temporary Safety, deserve neither Liberty nor Safety.
  - Benjamin Franklin

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



[GENERAL] pgcrypto functions fail for asymmetric encryption/decryption

2007-11-29 Thread Stefan Niantschur
Hi,

I have a table with userids and public keys. I want to write a function
which does a select and returns the result pgp encrypted.

However, I have some problems:
SELECT encode(decode((SELECT ens_pubkey FROM
ens_user)::text,'escape'),'escape'::text)::text;
-> returns the public key, => ok

SELECT armor(dearmor((SELECT ens_pubkey FROM ens_user)::text));
-> returns the key in a different format, => problem

SELECT
armor(pgp_pub_encrypt_bytea(armor(pgp_sym_encrypt('geheim'::text,'test'::text))::bytea,dearmor((SELECT
ens_pubkey FROM ens_user WHERE ens_userid = 10112)::text)));
-> returns a pgp-encrypted message which cannot be decrypted by GnuPG,
=> problem

SELECT
pgp_pub_decrypt(dearmor(armor(pgp_pub_encrypt(armor(pgp_sym_encrypt('geheim'::text,'test'::text)),dearmor((SELECT
ens_pubkey FROM ens_user WHERE ens_userid =
10112)::text,dearmor((SELECT ens_privkey FROM ens_user WHERE
ens_userid = 10112)::text),'test'::text);
-> returns 'ERROR: Corrupt data' => problem

SELECT
pgp_key_id(pgp_pub_encrypt_bytea(armor(pgp_sym_encrypt('geheim'::text,'test'::text))::bytea,dearmor((SELECT
ens_pubkey FROM ens_user WHERE ens_userid = 10112)::text)));
-> returns the correct key id of the deployed public key

So, if I cannot decrypt the message which I have been encrypting with
the appropriate keys, how can I proceed?

I want to encrypt messages in Postgres and decrypt them elsewhere.
However, the result of the encryption seems to be wrong; otherwise I
cannot explain why encrypting and immediately decrypting the message
fails.

The same procedure is successful when using symmetric keys:
SELECT
pgp_sym_decrypt((pgp_sym_encrypt('geheim'::text,'test'::text)),'test'::text);
-> returns 'geheim' which is the encrypted and then again decrypted
message.

What did I do wrong when trying to use asymmetric encryption?

tia
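
For what it's worth, a minimal round trip that stays with one function pair would look like this sketch, reusing the poster's column names. The point is to avoid wrapping the ciphertext in armor() before decrypting, and not to mix the text and _bytea variants:

```sql
SELECT pgp_pub_decrypt(
         pgp_pub_encrypt('geheim', dearmor(ens_pubkey)),
         dearmor(ens_privkey))
FROM ens_user WHERE ens_userid = 10112;
-- if this returns 'geheim', the keys themselves are fine and the problem
-- is the extra armor()/_bytea wrapping in the failing queries above
```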



Re: [GENERAL] Help needed optimizing query

2007-11-29 Thread Martijn van Oosterhout
On Thu, Nov 29, 2007 at 01:26:00PM +, Pedro Doria Meunier wrote:
 Hi People.
 
 I need some help optimizing this query:

snip
 I still get Seq Scans although all used fields are indexed, hence the
 time used... :-(

A seq scan on a table with 10 rows is *good*. An index would take
longer. What you really need is to send the output of EXPLAIN ANALYZE
so we can see where the time actually went.
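
For example (EXPLAIN ANALYZE actually runs the statement, so the plan shows estimated and actual row counts side by side; `units` here is simply one of the tables from the original query):

```sql
EXPLAIN ANALYZE
SELECT count(*) FROM units WHERE customer_id = 14;
```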

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] Threads limit for postmaster

2007-11-29 Thread Magnus Hagander
Guillaume Pungeot wrote:
 Hi,
 I have a problem when a lot of connections are opened on PostgreSQL.
 
 When 243 threads have been created by postmaster, the next CreateThread
 fails and the following error is written in pglogs : FATAL:  could not
 create sigchld waiter thread: error code 8.
 
 I need to open more than 300 connections on my server.
 Is there anyone who can help me?
 
 OS : windows 2003 server x64 edition.

This is a known limitation and will be fixed in 8.3. We'll consider
backporting this to 8.2 once it has been proven in the field. So if you
can, please download the latest beta of 8.3 and give that a try.

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Porting from FrontBase

2007-11-29 Thread Guido Neitzer

On 29.11.2007, at 12:30, Martin wrote:


I am looking at porting a web application from FrontBase to
Postgresql. If anybody has some tips on how to deal with
this, I'd appreciate hearing them. Thanks.


If it is WebObjects, you can contact me off-list. I've done that for  
some of our apps. It's not really hard.


cug

--
http://www.event-s.net




Re: [GENERAL] problems with email...

2007-11-29 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


 I seem to have *extreme* problems sending messages to this list... :-(
 Could one of the people responsible for the list see if anything is
 wrong with my account, please?

Our apologies. I've manually approved your earlier message. It got caught 
up in a rule which prevents mail with subjects such as "subscribe" 
or "help" from being sent to the main list without moderation.

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200711291651
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iD8DBQFHTzRfvJuQZxSWSsgRA1GPAKCHIdOWsfTNSgehmCwKVOHWDBVGewCgojJu
P6iu+Lu3+F81azWhMKisYgM=
=hmro
-END PGP SIGNATURE-





Re: [GENERAL] Porting from FrontBase

2007-11-29 Thread David Fetter
On Thu, Nov 29, 2007 at 07:30:53PM -, Martin wrote:
 I am looking at porting a web application from FrontBase to
 Postgresql. If anybody has some tips on how to deal with
  this, I'd appreciate hearing them. Thanks.

FrontBase has a Perl DBI driver
http://www.frontbase.com/cgi-bin/WebObjects/FrontBase, so you may be
able to use DBI-Link http://pgfoundry.org/projects/dbi-link/.

If you try that approach, let me know how it works out :)

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] can't createdb with my new user

2007-11-29 Thread Raymond O'Donnell

On 28/11/2007 14:02, della wrote:


createdb: could not connect to database postgres: FATAL:  Ident
authentication failed for user kokomo

So, given that I do not fail submitting the password for kokomo when
requested... what could be the problem? Have I to create a system user
for kokomo?
If I use pgadmin3 I see the kokomo user with its own encrypted password
and I can create the DB with kokomo as the owner. So what am I missing
in the command line?


Have a look at the docs for the various client authentication 
mechanisms, and particularly the pg_hba.conf file. You need to add a 
line to it requiring a password for your new user and the host from 
which it is connecting.


http://www.postgresql.org/docs/8.2/interactive/client-authentication.html
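
A sketch of such lines (the address and method are assumptions to adapt; md5 makes the server ask for the password instead of doing an ident check):

```
# TYPE  DATABASE  USER    CIDR-ADDRESS    METHOD
local   all       kokomo                  md5
host    all       kokomo  127.0.0.1/32    md5
```

Remember to reload the server after editing pg_hba.conf.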

HTH,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Linux v.s. Mac OS-X Performance (now Solaris)

2007-11-29 Thread Jeremy Harris

 Only under Solaris.  With Linux or BSD on it it ran pretty well.  I
 had a Sparc 20 running RH 7.2 back in the day (or whatever the last
 version of RH that ran on sparc was) that spanked an Ultra-2 running
 slowalrus with twice the memory and hard drives handily.
 
 Solaris has gotten much better since then, I'm sure.


	Ubuntu is supposed to be able to run on a T1000/T2000, and Sun has 
come out with a magical beast called Solaris 10; in Sun's infinite 
wisdom they have decided to abandon the /etc/init.d/ and friends way of 
startup for some complex XML way of doing things. But otherwise it's 
quite good (ZFS and CoolThreads servers being among the other good 
things out of Sun's shop).


Anybody here running postgresql on a T1000?  What OS, and how is it?

Cheers,
   Jeremy




Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-29 Thread Usama Dar
Or you can try using a simple packet sniffer, maybe Ethereal, to see if
name resolution is going on. I had similar problems with MySQL name
resolution in the past, and they generally went away after disabling its name
resolution and just using IPs.
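
If the hosts file is the culprit, the entry to look for is simply (in /etc/hosts, or C:\WINDOWS\system32\drivers\etc\hosts on Windows):

```
127.0.0.1    localhost
```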

On Nov 29, 2007 1:23 PM, Shane Ambler [EMAIL PROTECTED] wrote:

 Richard Broersma Jr wrote:
  --- On Wed, 11/28/07, Richard Huxton [EMAIL PROTECTED] wrote:
  Name lookups. Something is trying to look up a name,
  failing and it's
  timing out after 60 seconds.
 
  It seems the OP's connection string was set to localhost.  Would this
 still indicate a Name Lookup problem?
 

 If there is no entry (or an incorrect one) in /etc/hosts for localhost -
 then yes - try to connect to 127.0.0.1 and see if that makes a difference.

 Also if it is set to do namelookup before referring to /etc/hosts it can
 have similar probs.


 --

 Shane Ambler
 [EMAIL PROTECTED]

 Get Sheeky @ http://Sheeky.Biz

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



[GENERAL] Threads limit for postmaster

2007-11-29 Thread Guillaume Pungeot
Hi,
I have a problem when a lot of connections are opened on PostgreSQL.

When 243 threads have been created by postmaster, the next CreateThread
fails and the following error is written in pglogs : FATAL:  could not
create sigchld waiter thread: error code 8.

I need to open more than 300 connections on my server.
Is there anyone who can help me?

OS : windows 2003 server x64 edition.

Regards

-- 
=
Guillaume Pungeot - mappy


[GENERAL] Help needed optimizing query

2007-11-29 Thread Pedro Doria Meunier
Hi People.

I need some help optimizing this query:

SELECT u.friendly_name, distance(transform(u.curr_location,32628),
rc.agg_geometry) AS thedistance, u.mobile 
FROM (SELECT transform(collect(r.geometry),32628) AS agg_geometry,
t.county FROM pt_madeira_roads r, pt_madeira_toponymy t 
WHERE r.name ILIKE '%salão são roque%' AND t.county='Funchal' AND
r.geometry && t.geometry GROUP BY t.county) rc, units AS u 
WHERE expand(rc.agg_geometry,1000) && transform(u.curr_location,32628)
AND u.customer_id='14' AND (u.even_code='A' OR u.even_code='B') 
ORDER BY thedistance LIMIT 10;

Total query runtime: 1578 ms.
16 rows retrieved.

this is the explain analyze output:

Limit  (cost=93.92..93.93 rows=1 width=78)
  ->  Sort  (cost=93.92..93.93 rows=1 width=78)
        Sort Key: distance(transform(u.curr_location, 32628), rc.agg_geometry)
        ->  Nested Loop  (cost=85.59..93.91 rows=1 width=78)
              Join Filter: (expand(rc.agg_geometry, 1000::double precision) && transform(u.curr_location, 32628))
              ->  HashAggregate  (cost=85.59..85.60 rows=1 width=226)
                    ->  Nested Loop  (cost=0.00..85.58 rows=1 width=226)
                          ->  Seq Scan on pt_madeira_toponymy t  (cost=0.00..2.74 rows=10 width=15770)
                                Filter: (county = 'Funchal'::text)
                          ->  Index Scan using pt_madeira_roads_idx on pt_madeira_roads r  (cost=0.00..8.27 rows=1 width=213)
                                Index Cond: (r.geometry && t.geometry)
                                Filter: ((r.name ~~* '%salão são roque%'::text) AND (r.geometry && t.geometry))
              ->  Index Scan using units_customerID_idx on units u  (cost=0.00..8.28 rows=1 width=46)
                    Index Cond: (customer_id = 14::bigint)
                    Filter: (((even_code)::text = 'A'::text) OR ((even_code)::text = 'B'::text))

I still get Seq Scans although all used fields are indexed, hence the
time used... :-(

Although this is almost 'targeted' to the beautiful SQL head of Regina's
any input would be extremely appreciated... ;-)

Thank you!

--
Pedro Doria Meunier 
Ips da Olaria
Edf. Jardins do Garajau, 4 r/c Y
9125-163 Caniço
Madeira
Portugal
GSM: +351 96 17 20 188 Skype: pdoriam
http://www.madeiragps.com




Re: [GENERAL] PostgresSQL vs. Informix

2007-11-29 Thread Trent Shipley
In answering the question, it might help to have information on two background 
items.

The first item is the politics of the situation.  I gather from Chad Hendren's 
post that an individual in the top-8-OEM-customer enterprise is advocating 
for Postgresql.  

The second item is the existing technical situation.  What is unsatisfactory 
about Informix at this time?  What is mission critical about Informix's 
performance that Postgresql will have to meet?  (Postgresql has yet to 
conquer the computer cluster/grid arena and this frustrates many would-be 
power users.)  Does the customer have mission critical applications that are 
closely coupled or dependent on Informix features or disfeatures?

On Wednesday 2007-11-28 08:32, Chad Hendren wrote:
 PostgreSQL Team,

 I have a large OEM customer (one of the top 8 for Sun worldwide) that is
 considering embedding PostgreSQL into the core of their primary product
 instead of Informix.  He is trying to build his case for this change.
 Do we have anything that I can forward to him (something like the
 presentation you did at CEC?) to help him build his case?

 This is an urgent request from my customer given that his timeline is
 relatively short.  Any help you can give me will be very appreciated.

 Thanks,

 Chad Hendren

 Original question:

 Have you seen any studies (either by Sun or others) that compare
 PostgreSQL to other commercial database software (Informix, Oracle,
 Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of
 PostgreSQL and Informix.



[GENERAL] can't createdb with my new user

2007-11-29 Thread della
hi, I describe my steps:

1. installed postgresql 8.2 via apt-get (ubuntu gutsy)
2. # sudo -u postgres psql template1
3. template1=# CREATE ROLE kokomo WITH LOGIN CREATEDB PASSWORD
'kokomo';
4. template1=# \q
5. # createdb kokomo_dev -U kokomo -W

Here I receive:
createdb: could not connect to database postgres: FATAL:  Ident
authentication failed for user kokomo

So, given that I do not fail submitting the password for kokomo when
requested... what could be the problem? Have I to create a system user
for kokomo?
If I use pgadmin3 I see the kokomo user with its own encrypted password
and I can create the DB with kokomo as the owner. So what am I missing
in the command line?

Thanks for help.



Re: [GENERAL] Cascading Trigger Prevention

2007-11-29 Thread Martijn van Oosterhout
On Wed, Nov 28, 2007 at 02:00:58PM -0800, JonXP wrote:
 I have a table that contains a nested set (using paths), and I'm
 trying to create a trigger that updates the timestamps of a node and
 all of its parents on a modification.  Unfortunately, this is causing
 infinitely recurring triggers.
 
 The insert occurs, which fires the trigger, which then updates all of
 the parent rows, which then fires the trigger again for each update,
 which recurses yet again.

You can disable triggers on a table, but it's definitely not recommended
(deadlock prone). It seems to me that if, when the trigger is fired,
it only updates its immediate parent, everything should work, right? As it
recurses up the tree it eventually reaches the root, surely?
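
A minimal sketch of that parent-only approach (the table and column names are hypothetical): each UPDATE fires the same trigger one level up, and the recursion stops at the root, where parent_id is NULL:

```sql
CREATE FUNCTION touch_parent() RETURNS trigger AS $$
BEGIN
  IF NEW.parent_id IS NOT NULL THEN
    -- touch only the immediate parent; its own trigger handles the next level
    UPDATE nodes SET mtime = now() WHERE id = NEW.parent_id;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER touch_parent_trg
  AFTER INSERT OR UPDATE ON nodes
  FOR EACH ROW EXECUTE PROCEDURE touch_parent();
```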

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-29 Thread Ivan Sergio Borgonovo
On Tue, 27 Nov 2007 21:12:00 -0500
Lew [EMAIL PROTECTED] wrote:

 Lew wrote:
  Try eliminating the double quotes in the CSV file.  Wannabe
  NULL would then be ,, (consecutive commas)
  in the CSV.  From the docs, you don't even need the NULL AS
  clause in your COPY statement.
 
 Ivan Sergio Borgonovo wrote:
  Exactly what I did because fortunately there weren't too many
  chances of weird stuff in 2000 records (sed -e 's/,/,/g').
 
 And this worked, right?

Right, and I call it pre-processing.

  I thought it may be missing total support of UTF-8 or if I did
  something wrong or it is actually a feature.

  This clearly has nothing to do with UTF-8, and everything to do with
 comma-comma representing a NULL and comma-quote-quote-comma
 representing an empty string.

OK... misinterpreted.

I thought that NULL AS '' means ,'',[1] so that empty strings could be
imported as NULL if necessary and as empty string if not.

So to my understanding there is no way to use \copy and insert NULL
when it encounters an empty string, and NULL AS '' doesn't do anything
in CSV mode: when I have ,, it actually imports NULL, and when I
have ,'', it imports empty strings, which is the same behaviour I get
without NULL AS ''.

Correct?
If it is, I found the documentation a bit misleading. I admit it could
be due to English not being my mother tongue.

thx

[1] I did try with '', '', '\\'...

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] log_line_prefix='%t %u %d %h %p %i %l %x ' causes error

2007-11-29 Thread Paul Lambert

Andrus wrote:

log_line_prefix='%t %u %d %h %p %i %l %x '

I checked and all options are legal.



That works fine for me... are you sure log_line_prefix is line 482 in 
your config file? You might have inadvertently put a superfluous % 
somewhere else.
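
Two things worth double-checking around that line in postgresql.conf: the value must be enclosed in single quotes, and any literal percent sign inside the prefix has to be written as %% (a sketch):

```
log_line_prefix = '%t %u %d %h %p %i %l %x '    # quoted value: OK
log_line_prefix = %t %u %d                      # unquoted: syntax error near "%"
```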


--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds  Reynolds Company




Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-11-29 Thread Laurent Duperval
Hi,

I'm not sure if this is good netiquette, or not. I'm reviving a month-old
thread, because I'm trying to figure out how to resolve the issue.

To summarize: when I run unit tests with eclipse (and with Ant) on
Windows, at some point, I run out of available connections. I tried
increasing the maximum number of connections, but then I started seeing
the postgres server die and restart.

I'm trying to fix this, yet again, but I don't have a clear idea of what
to fix.

On Tue, 23 Oct 2007 20:07:22 +0200, Magnus Hagander wrote:
 Rainer Bauer wrote:
 After increasing the session heap size in the registry from 512KB to 1024KB
 the no. of connections was roughly doubled. So this might be a solution for
 people running out of Desktop heap.
 
 Alter the value of the following key
 HKLM\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows
 
 The numeric values following SharedSection= control the heap management:
 On WinXP these are the default values: SharedSection=1024,3072,512
 Altering this to SharedSection=1024,3072,1024 will increase the heap for 
 all
 non-interactive window stations to 1024KB.
 

Does this allow creating more connections? At some point, the discussion
became too technical for me, and I no longer could tell if the answer was
for developers or for users.

I saw other messages dealing with semaphores/connection relations, etc.
But unless I really did not understand the discussion, none of them seemed
to address the issue I was seeing.

I'm thinking that the Java driver combined with Hibernate may be keeping
handles open for too long, because my tests aren't supposed to maintain
connections open for very long. I also would expect the connections to
either be closed or released once the statements are executed.



 This part should go in the FAQ, I think. It's valid for 8.2 as well,
 from what I can tell, and it's valid for 8.3 both before and after the
 patch I just applied.
 
 Dave, you're listed as maintainer :-P
 
 //Magnus
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/



-- 
Prenez la parole en public en étant Speak to an audience while being
moins nerveux et plus convaincant! less nervous and more convincing!
Abonnez-vous au bulletin gratuit!   Sign up for the free newsletter!

 http://www.duperval.com   (514) 902-0186



